An Average Case Analysis of Propositional STRIPS Planning

Tom Bylander
Laboratory for Artificial Intelligence Research
Department of Computer and Information Science
The Ohio State University
Columbus, Ohio 43210
byland@cis.ohio-state.edu

Abstract

I present an average case analysis of propositional STRIPS planning. The analysis assumes that each possible precondition (likewise postcondition) is equally likely to appear within an operator. Under this assumption, I derive bounds for when it is highly likely that a planning instance can be efficiently solved, either by finding a plan or proving that no plan exists. Roughly, if planning instances have n conditions (ground atoms), g goals, and O(n δ^{1/g}) operators, then a simple, efficient algorithm can prove that no plan exists for at least 1 − δ of the instances. If instances have Ω(n(ln g)(ln g/δ)) operators, then a simple, efficient algorithm can find a plan for at least 1 − δ of the instances. A similar result holds for plan modification, i.e., solving a planning instance that is close to another planning instance with a known plan. Thus it would appear that propositional STRIPS planning, a PSPACE-complete problem, is hard only for narrow parameter ranges, which complements previous average-case analyses for NP-complete problems. Future work is needed to narrow the gap between the bounds and to consider more realistic distributional assumptions and more sophisticated algorithms.

Introduction

Lately, there has been a series of worst-case complexity results for planning, showing that the general problem is hard and that several restrictions are needed to guarantee polynomial time (Backstrom and Klein, 1991; Bylander, 1991; Bylander, 1993; Chapman, 1987; Erol et al., 1992a; Erol et al., 1992b). A criticism of such worst-case analyses is that they do not apply to the average case (Cohen, 1991; Minsky, 1991). Recent work in AI has shown that this criticism has some merit.
Several experimental results have shown that specific NP-complete problems are hard only for narrow ranges (Cheeseman et al., 1991; Minton et al., 1992; Mitchell et al., 1992) and even the problems within these ranges can be efficiently solved (Selman et al., 1992). Theoretical results also support this conclusion (Minton et al., 1992; Williams and Hogg, 1992). However, it must be noted that all this work makes a strong assumption about the distribution of instances, namely that the probability that a given constraint appears in a problem instance is independent of what other constraints appear in the instance.

This paper presents an average-case analysis of propositional STRIPS planning, a PSPACE-complete problem (Bylander, 1991). Like the work on NP-complete problems, I make a strong distributional assumption, namely that each possible precondition (likewise postcondition) is equally likely to appear within an operator, and that the probability of a given operator is independent of other operators. Under this assumption, I derive bounds for when it is highly likely that a planning instance can be efficiently solved, either by finding a plan or proving that no plan exists.

Given that planning instances have n conditions (ground atoms) and g goals, and that operators have r preconditions on average and s postconditions on average, I derive the following results. If the number of operators is at most ((2n − s)/s) δ^{1/g}, then a simple, efficient algorithm can prove that no plan exists for at least 1 − δ of the instances. If the number of operators is at least e^r e^{sg/n} (2n/s)(2 + ln g)(ln g/δ), then a simple, efficient algorithm can find a plan for at least 1 − δ of the instances. If r and s are small, e.g., the number of pre- and postconditions remains fixed as n increases, then these bounds are roughly O(n δ^{1/g}) and Ω(n(ln g)(ln g/δ)), respectively. A similar result holds for plan modification.

480 Bylander
If the initial state or goals differ by one condition from those of another planning instance with a known plan, and if there are at least e^{r+s}(2n/s)(ln 1/δ) operators, then it is likely (1 − δ) that adding a single operator converts the old plan into a solution for the new instance.

Thus it would appear that propositional STRIPS planning is hard only for narrow parameter ranges, which complements previous average-case analyses for NP-complete problems. Future work is needed to narrow the gap between the bounds and to consider more realistic distributional assumptions and more sophisticated algorithms.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

The rest of the paper is organized as follows. First, definitions and key inequalities are presented. Then, the average-case results are derived.

Preliminaries

This section defines propositional STRIPS planning, describes the distribution of instances to be analyzed, and presents key inequalities.

Propositional STRIPS Planning

An instance of propositional STRIPS planning is specified by a tuple (P, O, I, G), where: P is a finite set of ground atomic formulas, called the conditions; O is a finite set of operators; the preconditions and postconditions of each operator are satisfiable sets of positive and negative conditions; I ⊆ P is the initial state; and G, the goals, is a satisfiable set of positive and negative conditions.

Each subset S ⊆ P is a state; p ∈ P is true in state S if p ∈ S, otherwise p is false in state S. If the preconditions of an operator are satisfied by state S, then the operator can be applied, and the resulting state is determined by deleting the negative postconditions from S and adding the positive postconditions (cf. (Fikes and Nilsson, 1971)). A solution plan is a sequence of operators that transforms the initial state into a goal state, i.e., a state that satisfies the goals.

Distributional Assumptions

Let n be the number of conditions.
Let o be the number of operators. Let r and s respectively be the expected number of pre- and postconditions within an operator. Let g be the number of goals. For given n, o, r, s, and g, I assume that random planning instances are generated as follows:

For each condition p ∈ P, p is a precondition of an operator with probability r/n. If p is a precondition, it is equally likely to be positive or negative. For postconditions, s/n is the relevant probability.

For each condition p ∈ P, p ∈ I (the initial state) with probability 1/2.

For the goals, g conditions are selected at random and are set so that no goal is satisfied in the initial state. This latter restriction is made for ease of exposition.

It must be admitted that these assumptions do not approximate planning domains very well. For example, there are only b clear conditions for a blocks-world instance of b blocks compared to O(b^2) on conditions. However, every blocks-world operator refers to one or more clear conditions, i.e., a given clear condition appears more often within the set of ground operators than a given on condition. Also, there are correlations between the conditions, e.g., clear(A) is more likely to appear with on(A, B) than with on(C, D). Similar violations can be found for any of the standard toy domains.

Ultimately, the usefulness of these assumptions will depend on how well the threshold bounds of the analysis classify easiness and hardness of real planning domains. Until then, I shall note that the assumptions are essentially similar to previous work on NP-complete problems as cited in the introduction, but for a different task (planning) in a harder complexity class (PSPACE-complete). Also, the assumptions permit a clean derivation of interesting bounds, which suggest that hard planning instances are localized to a narrow range of the number of operators (the o parameter).
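The generative model just described is simple enough to state as code. The following sketch is my own illustration (the paper gives no implementation); representing an operator as a pair of dicts mapping condition index to required or assigned truth value is an assumption, not the paper's encoding:

```python
import random

def random_instance(n, o, r, s, g, rng=random):
    """Sample a random propositional STRIPS instance per the stated model."""
    ops = []
    for _ in range(o):
        pre, post = {}, {}
        for p in range(n):
            if rng.random() < r / n:          # p is a precondition w.p. r/n,
                pre[p] = rng.random() < 0.5   # positive or negative equally likely
            if rng.random() < s / n:          # p is a postcondition w.p. s/n
                post[p] = rng.random() < 0.5
        ops.append((pre, post))
    init = {p for p in range(n) if rng.random() < 0.5}  # each condition true w.p. 1/2
    # g random goal conditions, each set so it is unsatisfied in the initial state
    goals = {p: (p not in init) for p in rng.sample(range(n), g)}
    return init, ops, goals
```

An instance is then a triple of the initial state (a set of true conditions), the operator list, and the goal literals.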
Finally, the gap between the assumptions and reality will hopefully spur further work to close the gap.

Algorithm Characteristics

Each algorithm in this paper is incomplete but sound, i.e., each algorithm returns correct answers when it returns yes or no, but might answer "don't know." Specifically, "success" is returned if the algorithm finds a solution plan, "failure" is returned if the algorithm determines that no plan exists, and "don't know" is returned otherwise.

The performance of a given algorithm is characterized by an accuracy parameter δ, 0 < δ < 1. Each result below shows that if the number of operators o is greater than (or less than) a formula on n, r, s, g, and δ, then the accuracy of the algorithm on the corresponding distribution (see Distributional Assumptions section) will be at least 1 − δ.

Inequalities

I freely use the following inequalities. For nonnegative x and y:

e^{−x/(1−x)} ≤ 1 − x for 0 ≤ x < 1
1 − x ≤ e^{−x}
x/(1 + x) ≤ 1 − e^{−x}
1 − e^{−x} ≤ x
xy/(1 + xy) ≤ 1 − (1 − x)^y for 0 ≤ x < 1
1 − (1 − x)^y ≤ xy/(1 − x) for 0 ≤ x < 1

The first two inequalities are easily derivable from (Cormen et al., 1990). The last four inequalities are derivable from the first two.

When Plan Nonexistence is Efficient

If there are few operators, it becomes likely that the postconditions of the operators do not cover all the goals, i.e., that some goal is not a postcondition of any operator. This leads to the following simple algorithm:

POSTS-COVER-GOALS
  for each goal
    if the goal is not a postcondition of any operator
      then return failure
  return don't know

The following theorem characterizes when POSTS-COVER-GOALS works.

Theorem 1 For random planning instances, if o ≤ ((2n − s)/s) δ^{1/g}, then POSTS-COVER-GOALS will determine that no plan exists for at least 1 − δ of the instances.

Proof: The probability that there exists a goal that is not a postcondition of any operator can be developed as follows.
Consider a particular goal to be achieved:

s/2n: probability that an operator achieves the goal [1]
1 − s/2n: probability that an operator doesn't achieve the goal
(1 − s/2n)^o: probability that no operator achieves the goal
1 − (1 − s/2n)^o: probability that some operator achieves the goal
(1 − (1 − s/2n)^o)^g: probability that every goal is achieved by some operator

It can be shown that:

(1 − (1 − s/2n)^o)^g ≤ (so/(2n − s))^g

which is less than δ if:

o < ((2n − s)/s) δ^{1/g}

Thus, if the above inequality is satisfied, then the probability that some goal is not a postcondition of any operator is at least 1 − δ. □

For fixed δ and increasing n and g, the above bound approaches (2n − s)/s. If s is also fixed, the bound is Θ(n).

Naturally, more complex properties that are efficient to evaluate and imply plan nonexistence could be used, e.g., the above algorithm does not look at preconditions. Any algorithm that also tests whether the postconditions cover the goals will have performance as good as and possibly better than POSTS-COVER-GOALS.

When Finding Plans is Efficient

With a sufficient number of operators, it becomes likely that some operator will make progress towards the goal. In this section, I consider three algorithms. One is a simple forward search from the initial state to a goal state, at each state searching for an operator that decreases the number of goals to be achieved. The second is a backward search from the goals to a smaller set of goals to the initial state. The third is a very simple algorithm for when the initial state and goals differ by just one condition.

[1] For arithmetic expressions within this paper, multiplication has highest precedence, followed by division, logarithm, subtraction, and addition. E.g., 1 − s/2n is equivalent to 1 − (s/(2n)).
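As a concrete sketch, the POSTS-COVER-GOALS test is a few lines under an assumed encoding of operators as (preconditions, postconditions) pairs of dicts from condition to truth value (my encoding, not the paper's):

```python
def posts_cover_goals(ops, goals):
    """Return 'failure' if some goal literal is not a postcondition of any operator."""
    for p, want in goals.items():
        # a goal is achievable only if some operator asserts it with the right sign
        if not any(post.get(p) == want for _, post in ops):
            return "failure"
    return "don't know"
```

Soundness is immediate: if "failure" is returned, some goal literal appears in no operator's postconditions, so no sequence of operators can ever make it true.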
Forward Search

Consider the following algorithm:

FORWARD-SEARCH(S, O)
  if G is satisfied by S, then return success
  repeat
    if O is empty then return don't know
    randomly remove an operator from O
  until applying the operator satisfies more goals
  let S' be the result of applying the operator to S
  return FORWARD-SEARCH(S', O)

If FORWARD-SEARCH(I, O) is called, then each operator in O is considered one at a time. If applying an operator increases the number of satisfied goals, the current state S is updated. FORWARD-SEARCH succeeds if it reaches a goal state and is noncommittal if it runs out of operators.

FORWARD-SEARCH considers each operator at most once. I do not propose that this "feature" should be incorporated into practical planning algorithms, but it does simplify the analysis. Specifically, there is no need to consider the probability that an operator has some property given that it is known that the operator has some other property. Despite this handicap, FORWARD-SEARCH is surprisingly robust under certain conditions. First, I demonstrate a lemma for the number of operators that need to be considered to increase the number of satisfied goals.

Lemma 2 Consider random planning instances except that d of the g goals are not satisfied. If at least e^r e^{s(g−d)/n}(1 + 2n/sd)(ln 1/δ) operators are considered, then, for at least 1 − δ of the instances, one of those operators will increase the number of satisfied goals.
Proof: The expression for the probability can be developed as follows:

(1 − r/2n)^n: probability that a state satisfies the preconditions of an operator, i.e., each of n conditions is not a precondition with probability 1 − r/n; alternatively, a condition is a matching precondition with probability r/2n

(1 − s/2n)^{g−d}: probability that the postconditions of an operator are consistent with the g − d goals already achieved

(1 − s/2n)^d: probability that the postconditions do not achieve any of the d remaining goals, i.e., for each goal, it is not a postcondition with probability 1 − s/n; alternatively, it is a postcondition of the wrong type with probability s/2n

1 − (1 − s/2n)^d: probability that the postconditions achieve at least one of the d remaining goals

Thus, the probability p that a particular operator can be applied, will not clobber any satisfied goals, and will achieve at least one more goal is:

p = (1 − r/2n)^n (1 − s/2n)^{g−d} (1 − (1 − s/2n)^d)

1 − p is the probability that the operator is unsatisfactory, and (1 − p)^o is the probability that o operators are unsatisfactory. If (1 − p)^o ≤ δ, then there will be some satisfactory operator with probability at least 1 − δ. This inequality is satisfied if o ≥ (1/p)(ln 1/δ) because in such a case:

(1 − p)^o ≤ e^{−po} ≤ e^{−ln 1/δ} = δ

All that remains then is to determine an upper bound on 1/p, i.e., a lower bound on p. For each term of p:

(1 − r/2n)^n ≥ e^{−rn/(2n−r)} ≥ e^{−r}
(1 − s/2n)^{g−d} ≥ e^{−s(g−d)/(2n−s)} ≥ e^{−s(g−d)/n}
1 − (1 − s/2n)^d ≥ sd/(2n + sd)

Inverting these terms leads to the bound of the lemma. □

To describe FORWARD-SEARCH, the expression in the lemma must be summed for each d from 1 to g, which leads to the following theorem:

Theorem 3 For random planning instances, if

o ≥ e^r e^{sg/n}(2n/s)(3/2 + ln g)(ln g/δ)

then FORWARD-SEARCH will find a plan for at least 1 − δ of the instances after considering the above number of operators.
Proof: For g goals, the number of satisfied goals will be increased at most g times. If each increase occurs with probability at least 1 − δ/g, then g increases (the most possible) will occur with probability at least 1 − δ. Thus, Lemma 2 can be applied using δ/g instead of δ. Summing over the g goals leads to:

Σ_{d=1}^{g} e^r e^{s(g−d)/n}(1 + 2n/sd)(ln g/δ)
= e^r e^{sg/n}(ln g/δ) ((Σ_{d=1}^{g} e^{−sd/n}) + (Σ_{d=1}^{g} e^{−sd/n}(2n/sd)))
≤ e^r e^{sg/n}(ln g/δ) ((Σ_{d=1}^{g} e^{−sd/n}) + ((2n/s) Σ_{d=1}^{g} 1/d))

For the two sums:

Σ_{d=1}^{g} e^{−sd/n} ≤ ∫_0^g e^{−sx/n} dx ≤ n/s
Σ_{d=1}^{g} 1/d ≤ 1 + ∫_1^g (1/x) dx = 1 + ln g

which leads to:

n/s + (2n/s)(1 + ln g) = (2n/s)(3/2 + ln g)

Combining all terms results in the bound of the theorem. □

The bound is exponential in the expected numbers of pre- and postconditions. Naturally, as operators have more preconditions, it becomes exponentially less likely that they can be applied. Similarly, as operators have more postconditions, it becomes exponentially less likely that the postconditions are consistent with the goals already achieved. Note though that if g ≤ n/s, then e^{sg/n} ≤ e, so the expected number of postconditions s is not as important a factor if the number of goals is small.

Backward Search

Consider the following algorithm for searching backward from a set of goals:

BACKWARD-SEARCH(G, O)
  if G = ∅ then return success
  while O ≠ ∅
    randomly remove an operator from O
    let R and S be its pre- and postconditions
    if G is consistent with S, and |(G − S) + R| < |G|
      then return BACKWARD-SEARCH((G − S) + R, O)
  return don't know

Like FORWARD-SEARCH, BACKWARD-SEARCH makes a single pass through the operators, but in this case, BACKWARD-SEARCH starts with the goals and looks for an operator whose preconditions result in a smaller set of goals.
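For concreteness, both single-pass searches can be sketched as follows, reusing the assumed dict encoding of operators; the helper names are mine, and the explicit check that R is consistent with G − S is left implicit in the pseudocode above:

```python
def apply_op(state, post):
    # add positive postconditions, delete negative ones
    return (state | {p for p, v in post.items() if v}) - \
           {p for p, v in post.items() if not v}

def satisfied(state, literals):
    return all((p in state) == v for p, v in literals.items())

def unsatisfied_goals(state, goals):
    return sum((p in state) != v for p, v in goals.items())

def forward_search(state, ops, goals):
    state, ops = set(state), list(ops)
    if satisfied(state, goals):
        return "success"
    while ops:
        pre, post = ops.pop()          # each operator is considered at most once
        if satisfied(state, pre):
            new = apply_op(state, post)
            if unsatisfied_goals(new, goals) < unsatisfied_goals(state, goals):
                state = new
                if satisfied(state, goals):
                    return "success"
    return "don't know"

def backward_search(goals, ops):
    goals, ops = dict(goals), list(ops)
    if not goals:
        return "success"
    while ops:
        pre, post = ops.pop()
        if all(post.get(p, v) == v for p, v in goals.items()):      # G consistent with S
            remaining = {p: v for p, v in goals.items() if p not in post}  # G - S
            if all(remaining.get(p, v) == v for p, v in pre.items()):      # R consistent too
                new_goals = {**remaining, **pre}                           # (G - S) + R
                if len(new_goals) < len(goals):
                    goals = new_goals
                    if not goals:
                        return "success"
    return "don't know"
```

The recursion of the pseudocode is unrolled into loops; both functions destructively consume the operator list, matching the single-pass behavior analyzed above.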
In fact, if BACKWARD-SEARCH succeeds, then it will have discovered a sequence of operators that achieves a goal state from any initial state, although note that the first operator in this sequence (the last operator selected by BACKWARD-SEARCH) must not have any preconditions; otherwise |(G − S) + R| would be non-zero. Having such an operator is probably unrealistic; nevertheless, the results below suggest that reducing a set of goals into a much smaller set of goals is often possible, which, of course, can then be followed by forward search.

I first introduce a lemma for the number of operators needed to find one operator that reduces the number of goals. Space limitations prevent displaying the complete proof.

Lemma 4 For random planning instances with r ≤ n/2 and s ≤ n/2, if at least the following number of operators are considered:

e^{2r} e^{sg/n}((3n + sg)/sg)(ln 1/δ)

then, for 1 − δ of the instances, some operator will reduce the number of goals.

Proof Sketch: The following expression gives the probability p that, for a random operator, the preconditions are a subset of the goals, the postconditions are consistent with the goals, and there is one goal equal to a postcondition, but not in the preconditions:

p = (1 − r/n)^{n−g} ((1 − r/2n)^g (1 − s/2n)^g − (1 − r/2n − s/n + 3rs/4n^2)^g)

Bounding this expression leads to the bound of the lemma. □

Similar to Theorem 3, this expression needs to be summed for g goals down to 1 goal. This is done to prove the next theorem (proof omitted).

Theorem 5 For random planning instances with r ≤ n/2 and s ≤ n/2, if

o ≥ e^{2r}(3n/s)(4e^{sg/n} + 3(ln g) + 3)(ln g/δ)

then BACKWARD-SEARCH will find a plan for at least 1 − δ of the instances after considering the above number of operators.

Comparing the two bounds for FORWARD-SEARCH and BACKWARD-SEARCH, the bound for BACKWARD-SEARCH is worse in that it has a larger constant and has an e^{2r} term as opposed to an e^r term for the FORWARD-SEARCH bound.
Because BACKWARD-SEARCH does not use the initial state, some increase would be expected. However, the BACKWARD-SEARCH bound is better in that one component is additive, i.e., O(e^{sg/n} + ln g), whereas the corresponding subexpression for the FORWARD-SEARCH bound is O(e^{sg/n} ln g). The reason is that e^{sg/n} (see Lemma 4) is maximum when g is at its maximum, while the maximum value of (3n + sg)/sg is when g is at its minimum. Of course, it should be mentioned that rather crude inequalities are used in both cases to derive simplified expressions. A careful comparison of the probabilities derived within the lemmas would perhaps be a more direct route for comparing the algorithms, but I have not done this yet.

Plan Modification

So far I have considered the problem of generating a plan from scratch. In many cases, however, the current planning instance is close to a previously solved instance, e.g., (Hammond, 1990; Kambhampati and Hendler, 1992). Consider a simplified version of plan modification, specifically, when the initial state or set of goals of the current planning instance differs by one condition from a previously solved instance. In this case, the new instance can be solved by showing how the new initial state can reach the old initial state, or how the old goal state can reach a new goal state. Within the framework of random planning instances, then, I shall analyze the problem of reaching one state from another when the two states differ by one condition, i.e., there are n goals, and all but one goal is true in the initial state. The worst-case complexity of this problem, like the problem of planning from scratch, is PSPACE-complete (Nebel and Koehler, 1993). However, the following theorem shows that it is usually easy to solve this problem if there are sufficient operators.
Theorem 6 For random planning instances in which there are n goals, where n − 1 goals are true in the initial state, if:

o ≥ e^r e^s (2n/s)(ln 1/δ)

then, for at least 1 − δ of the instances, some operator solves the instance in one step.

Proof: First, I develop the probability p that a random operator solves a random instance. The probability that the preconditions are consistent with the initial state is (1 − r/2n)^n. The probability that the postconditions are consistent with the n − 1 goals already achieved is (1 − s/2n)^{n−1}. In addition, the probability that the goal to be achieved is a postcondition is s/2n. [2] Thus:

p = (1 − r/2n)^n (1 − s/2n)^{n−1} s/2n

Lower bounds for p are:

p ≥ e^{−rn/(2n−r)} e^{−s(n−1)/(2n−s)} s/2n ≥ e^{−r} e^{−s} s/2n

The probability that none of o operators solves the instance is (1 − p)^o. If o satisfies the inequality stated in the theorem, then:

(1 − p)^o ≤ e^{−po} ≤ e^{−ln 1/δ} = δ

which proves the theorem. □

Thus, for fixed r, s, and δ, a linear number of operators suffices to solve planning instances that differ by one condition from previously solved instances. So, for at least the distribution of planning instances considered here, plan modification is easier than planning from scratch by roughly O(ln^2 g).

[2] This does not scale up to the case of attaining g goals by a single operator. The probability that the postconditions of a random operator contain the g goals is (s/2n)^g, i.e., exponentially small in the number of goals.

Remarks

I have shown that determining plan existence for propositional STRIPS planning is usually easy if the number of operators satisfies certain bounds, and if each possible precondition and postcondition is equally likely to appear within an operator, independently of other pre- and postconditions and other operators.
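To make the bounds concrete, they can be evaluated for sample parameters; the values below (n = 100, r = s = 2, g = 5, δ = 0.1) are arbitrary illustrations of my own, not from the paper:

```python
import math

def nonexistence_bound(n, s, g, delta):
    # Theorem 1: with o <= this many operators, POSTS-COVER-GOALS
    # detects nonexistence with probability >= 1 - delta
    return ((2 * n - s) / s) * delta ** (1 / g)

def forward_bound(n, r, s, g, delta):
    # Theorem 3: with o >= this many operators, FORWARD-SEARCH
    # finds a plan with probability >= 1 - delta
    return (math.exp(r) * math.exp(s * g / n) * (2 * n / s)
            * (1.5 + math.log(g)) * math.log(g / delta))

def modification_bound(n, r, s, delta):
    # Theorem 6: one-step plan modification (n goals, n - 1 already true)
    return math.exp(r) * math.exp(s) * (2 * n / s) * math.log(1 / delta)

lo = nonexistence_bound(100, 2, 5, 0.1)   # roughly 62 operators
hi = forward_bound(100, 2, 2, 5, 0.1)     # roughly 9,900 operators
```

The interval between the two thresholds is the parameter range in which neither simple algorithm is guaranteed high accuracy, i.e., where the hard instances of this distribution would have to live.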
Assuming that the expected numbers of pre- and postconditions are fixed, it is usually easy to show that instances with n conditions and O(n) operators are unsolvable, and it is usually easy to find plans for instances with n conditions, g goals, and Ω(n ln^2 g) operators. In addition, plan modification instances are usually easy to solve if there are Ω(n) operators. The constants for the latter two results are exponential in the expected numbers of pre- and postconditions.

This work complements and extends previous average-case analyses for NP-complete problems. It complements previous work because it suggests that random planning instances are hard only for a narrow range of a particular parameter, in this case, the number of operators. It extends previous work because the worst-case complexity of propositional STRIPS planning is PSPACE-complete, thus suggesting that PSPACE-complete problems exhibit similar threshold phenomena.

This work also provides theoretical support for reactive behavior. A main tenet of reactive behavior is that sound and complete planning, besides being too inefficient, is often unnecessary, i.e., states can be mapped to appropriate operators without much lookahead. The analysis of the FORWARD-SEARCH algorithm, which only does a limited one-step lookahead, shows that this tenet is true for a large subset of the planning problem.

Further work is needed to narrow the gap between the bounds derived by this paper and to analyze more realistic distributions. In particular, the assumption that pre- and postconditions are independently selected is clearly wrong. Nevertheless, it would be interesting to empirically test how well the bounds of this paper classify the hardness of planning problems.

References

Allen, J.; Hendler, J.; and Tate, A., editors 1990. Readings in Planning. Morgan Kaufmann, San Mateo, California.

Backstrom, C. and Klein, I. 1991. Parallel non-binary planning in polynomial time. In Proc. Twelfth Int.
Joint Conf. on Artificial Intelligence, Sydney. 268-273.

Bylander, T. 1991. Complexity results for planning. In Proc. Twelfth Int. Joint Conf. on Artificial Intelligence, Sydney. 274-279.

Bylander, T. 1993. The computational complexity of propositional STRIPS planning. Artificial Intelligence. To appear.

Chapman, D. 1987. Planning for conjunctive goals. Artificial Intelligence 32(3):333-377. Also appears in (Allen et al., 1990).

Cheeseman, P.; Kanefsky, B.; and Taylor, W. M. 1991. Where the really hard problems are. In Proc. Twelfth Int. Joint Conf. on Artificial Intelligence, Sydney. 331-337.

Cohen, P. R. 1991. A survey of the eighth national conference on artificial intelligence: Pulling together or pulling apart? AI Magazine 12(1):17-41.

Cormen, T. H.; Leiserson, C. E.; and Rivest, R. L. 1990. Introduction to Algorithms. MIT Press, Cambridge, Massachusetts.

Erol, K.; Nau, D. S.; and Subrahmanian, V. S. 1992a. On the complexity of domain-independent planning. In Proc. Tenth National Conf. on Artificial Intelligence, San Jose, California. 381-386.

Erol, K.; Nau, D. S.; and Subrahmanian, V. S. 1992b. When is planning decidable? In Proc. First Int. Conf. on AI Planning Systems, College Park, Maryland. 222-227.

Fikes, R. E. and Nilsson, N. J. 1971. STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence 2(3/4):189-208. Also appears in (Allen et al., 1990).

Hammond, K. J. 1990. Explaining and repairing plans that fail. Artificial Intelligence 45(1-2):173-228.

Kambhampati, S. and Hendler, J. A. 1992. A validation-structure-based theory of plan modification and reuse. Artificial Intelligence 55(2-3):193-258.

Minsky, M. 1991. Logical versus analogical or symbolic versus connectionist or neat versus scruffy. AI Magazine 12(2):34-51.

Minton, S.; Johnston, M. D.; Philips, A. B.; and Laird, P. 1992. Minimizing conflicts: A heuristic repair method for constraint satisfaction and scheduling problems.
Artificial Intelligence 58:161-205.

Mitchell, D.; Selman, B.; and Levesque, H. 1992. Hard and easy distributions of SAT problems. In Proc. Tenth National Conf. on Artificial Intelligence, San Jose, California. 459-465.

Nebel, B. and Koehler, J. 1993. Plan modification versus plan generation: A complexity-theoretic perspective. In Proc. Thirteenth Int. Joint Conf. on Artificial Intelligence, Chambery, France. To appear.

Selman, B.; Levesque, H.; and Mitchell, D. 1992. A new method for solving hard satisfiability problems. In Proc. Tenth National Conf. on Artificial Intelligence, San Jose, California. 440-446.

Williams, C. P. and Hogg, T. 1992. Using deep structure to locate hard problems. In Proc. Tenth National Conf. on Artificial Intelligence, San Jose, California. 472-477.
Granularity in Multi-Method Planning

Soowon Lee and Paul S. Rosenbloom
Information Sciences Institute and Computer Science Department
University of Southern California
4676 Admiralty Way
Marina del Rey, CA 90292
{swlee,rosenbloom}@isi.edu

Abstract

Multi-method planning is an approach to using a set of different planning methods to simultaneously achieve planner completeness, planning time efficiency, and plan length reduction. Although it has been shown that coordinating a set of methods in a coarse-grained, problem-by-problem manner has the potential for approaching this ideal, such an approach can waste a significant amount of time in trying methods that ultimately prove inadequate. This paper investigates an approach to reducing this wasted effort by refining the granularity at which methods are switched. The experimental results show that the fine-grained approach can improve the planning time significantly compared with coarse-grained and single-method approaches.

Introduction

The ability to find a low execution-cost plan efficiently over a wide domain of applicability is the core of domain-independent planning systems. The key issue here is how to construct a single planning method, or how to coordinate a set of different planning methods, that has sufficient scope and efficiency. Our approach to this issue begins with the observation that no single method will satisfy both sufficiency and efficiency, with the implication therefore that a coordinated set of planning methods will be needed.

We have constructed a system that can utilize six different planning methods, based on the notion of bias in planning.
A planning bias is any constraint over the space of plans considered that determines which portion of the entire plan space can be the output of the planning.[1] The six planning methods used vary along two independent bias dimensions: goal-protection and goal-flexibility. The goal-protection dimension determines whether or not a protection bias is used, that eliminates plans in which an operator undoes an initial goal conjunct that is either true a priori or established by an earlier operator in the sequence. The goal-flexibility dimension determines the degree of flexibility the planner has in using new subgoals. Two biases, directness and linearity, are used along this dimension. Directness eliminates plans in which operators are used to achieve preconditions of other operators, rather than just top-level goal conjuncts. Linearity eliminates plans in which operators for different goal conjuncts are interleaved. The 3x2 methods arise from the cross-product of these two dimensions: (directness, linearity, or nonlinearity) x (protection, or no-protection).[2]

These single-method planners are implemented in the context of the Soar architecture (Laird, Newell, & Rosenbloom, 1987). Plans in Soar are represented as sets of control rules that jointly specify which operators should be executed at each point in time (Rosenbloom, Lee, & Unruh, 1990). Planning time for these methods is measured in terms of decisions, the basic behavioral cycle in Soar. This measure is not quite identical to the more traditional measure of number of planning operators executed, but should still correlate with it relatively closely.

*This work was sponsored by the Defense Advanced Research Projects Agency (DOD) and the Office of Naval Research under contract number N00014-89-K-0155.

[1] The specification here assumes that the plan space contains only totally-ordered sequences of operators, but it
does not rule out a search strategy that incrementally specifies an element of the plan space by refining a partially-ordered plan structure.

[2] The term "nonlinearity" in this context implies that it is allowable to interleave operators in service of different goal conjuncts. It does not necessarily mean that either partial-order or least-commitment planning are being used.

486 Lee

The six implemented methods have previously been compared empirically in terms of planner completeness, planning time, and plan length over a test set of 100 randomly generated 3- and 4-conjunct problems in the blocks-world domain. The predominant result obtained so far from the experiments with these methods is that planning time and plan length are both inversely correlated with the applicability of the planning method; that is, the more restricted the method, the less time it takes to solve the problems that it can solve, and the shorter are the plans generated. The most restricted method (the method with directness and protection) could solve 68 of them, in an average of 16.3 decisions each, producing plans containing an average of 1.8 operators (Lee & Rosenbloom, 1992). The least restricted method (nonlinear planning without goal protection) could solve all 100 problems; however, planning time and plan length averaged over the same 68 problems solvable by the most restricted method were considerably worse, with an average of 39.0 decisions to produce plans containing on average 3.3 operators. This trade-off between completeness and efficiency implies that the planning system would be best served if it could always opt for the most restricted method adequate for its current situation.
In a first step towards this ideal, we have begun exploring multi-method planners that start by trying highly restricted methods, and then successively relax the restrictions until a method is found that is sufficient for the problem. The intuition behind this is based on iterative deepening (Korf, 1985): if the proportion of problems solvable at a particular level of restriction is large enough, and the ratio of costs for successive levels is large enough, there should be a net gain. Over the set of 100 blocks-world problems, this has yielded broadly applicable multi-method planners (actually, complete for the blocks-world) that on average generate shorter plans than are produced by corresponding (complete) single-method planners, with marginally lower planning times (from 39.9 to 52.5 decisions for single-method planners versus from 33.4 to 42.2 decisions for multi-method planners).

However, these results do not necessarily mean that, for all situations, there exists a multi-method planner which outperforms the most efficient single-method planner. In fact, the performance of these planners depends on the biases used in the multi-method planners and the problem set used in the experiments. For example, if the problems are so complex that most of them are solvable only by the least restricted method, the performance loss from trying inappropriate earlier methods in multi-method planners might be relatively considerable. On the other hand, if the problems are so trivial that it takes only a few decisions for the least restricted method to solve them, the slight performance gain from using more restricted methods in multi-method planners might be overridden by the complexity of the meta-level processing required to coordinate the sequence of primitive planners.
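The iterative-deepening intuition can be made concrete with a toy expected-cost model. The sketch below reuses the 68/100 solve rate and the 16.3- and 39.0-decision averages quoted earlier; the assumption that a failed attempt costs as much as a successful one is ours, not the paper's:

```python
def expected_cost(methods):
    """Expected number of decisions when methods are tried in order and
    we stop at the first success.  Each entry is (cost, solve_prob).
    Assumptions (ours): a failed attempt costs as much as a successful
    one, and the final method is complete (probability 1.0)."""
    total, p_unsolved = 0.0, 1.0
    for cost, p_solve in methods:
        total += p_unsolved * cost        # reached only if earlier methods failed
        p_unsolved *= 1.0 - p_solve
    return total

# Most restricted method: solved 68/100 problems at 16.3 decisions each;
# the complete method averaged 39.0 decisions.
multi = expected_cost([(16.3, 0.68), (39.0, 1.0)])
single = expected_cost([(39.0, 1.0)])
print(round(multi, 2), round(single, 2))
```

Under these (simplistic) assumptions the two-method sequence costs about 28.8 decisions in expectation versus 39.0 for the complete method alone, illustrating why a high solve rate at the restricted level can produce a net gain.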
These results suggest that multi-method planning is a promising approach, but that further work is necessary to establish whether robust gains are possible over a wide range of domains. The work reported here is one step in this direction, in which we investigate reducing the wasted effort in multi-method planners by refining the granularity at which the individual planning methods can be switched. This approach has been implemented, and initial experiments in two domains show significant gains in planning time with respect to both single-method and the earlier, coarser-grained, multi-method planners.

Fine-Grained Multi-Method Planning

The approach to multi-method planning described so far starts with a restricted method and switches to a less restricted method whenever the current method fails. This switch is always made on a problem-by-problem basis. However, this is not the only granularity at which methods could be switched. The family of multi-method planning systems can be viewed on a granularity spectrum. While in coarse-grained multi-method planners, methods are switched for a whole problem when no solution can be found for the problem within the current method, in fine-grained multi-method planners, methods can be switched at any point during a problem at which a new set of subgoals is formulated, and the switch only occurs for that set of subgoals (and not for the entire problem). At this finer level of granularity it is conceivable that the planner could use a highly restricted and efficient method over much of a problem, but fall back on a nonlinear method without protection for those critical subregions where there are tricky interactions. With this flexibility of method switching, fine-grained multi-method planning can potentially outperform both coarse-grained multi-method planning and single-method planning.
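The potential savings from switching at subgoal granularity can be illustrated with a toy cost model, entirely our own: each subgoal set is tagged with the lowest (least restricted) method level that can solve it, and every attempt at a subgoal costs a fixed per-level amount:

```python
COSTS = [1, 3, 9]   # hypothetical per-subgoal cost of increasingly relaxed methods

def coarse_cost(problem):
    """Coarse-grained switching: if any subgoal defeats the current
    method, abandon the attempt and redo the WHOLE problem with the
    next, less restricted, method.  A subgoal is an int: the lowest
    method level that can solve it."""
    total = 0
    for level in range(len(COSTS)):
        for need in problem:
            total += COSTS[level]          # pay for this attempt
            if need > level:               # subgoal defeats the method
                break
        else:
            return total                   # all subgoals solved at this level
    raise ValueError("unsolvable with the available methods")

def fine_cost(problem):
    """Fine-grained switching: escalate the method only for the subgoal
    that needs it; other subgoals keep the cheap method."""
    total = 0
    for need in problem:
        for level in range(len(COSTS)):
            total += COSTS[level]
            if level >= need:
                break
    return total

problem = [0, 0, 2, 0]                     # one tricky subgoal among easy ones
print(coarse_cost(problem), fine_cost(problem))   # 48 16
```

One "tricky" subgoal forces the coarse-grained planner to redo everything at the most expensive level, while the fine-grained planner pays the escalation cost only where the interaction occurs.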
Compared with coarse-grained multi-method planning, it can save the effort of backtracking when the current method cannot find a solution or the current partial plan violates the biases used in the current method. Moreover, it can save the extra effort of using a less restricted method on later parts of the problem, just because one early part requires it. As compared with single-method planning, a fine-grained multi-method planner can utilize biases which would cause incompleteness in a single-method planner - such as directness or protection in the blocks-world domain - while still remaining complete. The result is that a fine-grained multi-method planner can potentially be more efficient than a single-method planner that has the same coverage of solvable problems.

One way to construct an efficient multi-method planner is to order the single-method planners according to increasing coverage and decreasing efficiency, an approach called monotonic multi-method planning. In this paper, we focus on a special type of monotonic multi-method planner, called a strongly monotonic multi-method planner, which is based on the deliberate selection and relaxation of effective biases. In the next section, we provide a formal definition of a monotonic multi-method planner, and define a criterion for selecting effective biases from experiments with single-method planners.

Plan Generation 487

                              Decisions
Planner                       A1     A2     A5
M1 (directness, protection)   12.50  -      -
M2 (linearity, protection)    13.00  18.90  -
M3 (protection)               13.21  26.91  -
M4 (directness)               14.48  -      -
M5 (linearity)                14.81  24.47  24.84
M6                            16.23  40.85  40.96

Table 1: The performance of the six single-method planners for the three problem sets defined by the scopes of the planners.

Selecting Effective Biases

Let Mki (ki ∈ {1, ..., 6}) be a single-method planner, as defined in Section 1.
A fine-grained multi-method planner that consists of a sequence of n different single-method planners is denoted as Mk1-k2-...-kn, and the corresponding coarse-grained multi-method planner is denoted as Mk1→Mk2→...→Mkn. Let A be a sample set of problems, and let Aki ⊆ A be the subset of A which is solvable in principle by Mki. The functions s(Mki, Aj) and l(Mki, Aj) represent respectively the average cost that Mki requires to succeed and the average length of plans generated by Mki, for the problems in Aj ⊆ Aki. Let Mk0 be a null planner which cannot solve any problems; that is, Ak0 = ∅.

A multi-method planner which consists of Mk1, Mk2, ..., Mkn is called monotonic if the following three conditions hold for each pair of Mk(i-1) and Mki, for 2 ≤ i ≤ n: (1) Ak(i-1) ⊆ Aki, (2) s(Mk(i-1), Ak(j-1)) ≤ s(Mki, Ak(j-1)), for j ≤ i, and (3) l(Mk(i-1), Ak(j-1)) ≤ l(Mki, Ak(j-1)), for j ≤ i.³ The straightforward way to build monotonic multi-method planners is to run each of the individual methods on a set of training problems, and then from the resulting data to generate all method sequences for which monotonicity holds. The approach we have taken here is to generate only a subset of this full set; in particular, we have focused only on multi-method planners in which later methods embody subsets of the biases incorporated into earlier methods, and in which the biases themselves are all positive.

Let Bki be the set of biases used in Mki. A bias b is called positive in a problem set A and a method set {Mki} if, for each pair of Mkx and Mky in {Mki} such that Bkx = Bky + {b}, s(Mkx, Akj) ≤ s(Mky, Akj) and l(Mkx, Akj) ≤ l(Mky, Akj), for every j ≤ x. A multi-method planner which consists of Mk1, Mk2, ..., Mkn is called strongly monotonic if Bk(i-1) ⊇ Bki, for 2 ≤ i ≤ n, and Bk(i-1) - Bki consists of positive biases only, for 2 ≤ i ≤ n.

3. This is a slight redefinition of monotonicity from (Lee & Rosenbloom, 1992) with a minor correction.
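The monotonicity conditions can be checked mechanically against measured averages. A sketch using a data layout of our own; the decision counts below are loosely patterned on Table 1, while the scopes and plan lengths are invented for illustration:

```python
def is_monotonic(seq):
    """Check the three monotonicity conditions for a sequence of methods.
    Each method is a dict (our own layout): 'name', a 'scope' of solvable
    problems, and average decisions 's' / plan lengths 'l' keyed by the
    problem set (named after the method whose scope defines it)."""
    for i in range(1, len(seq)):
        prev, cur = seq[i - 1], seq[i]
        if not prev["scope"] <= cur["scope"]:          # (1) nested coverage
            return False
        for earlier in seq[:i]:                        # (2) cost and (3) length
            a = earlier["name"]                        #     over earlier scopes
            if prev["s"][a] > cur["s"][a] or prev["l"][a] > cur["l"][a]:
                return False
    return True

# Decisions loosely follow Table 1; scopes and plan lengths are invented.
M1 = {"name": "M1", "scope": {1, 2}, "s": {"M1": 12.5}, "l": {"M1": 1.8}}
M5 = {"name": "M5", "scope": {1, 2, 3},
      "s": {"M1": 14.8, "M5": 24.5}, "l": {"M1": 2.0, "M5": 3.0}}
M6 = {"name": "M6", "scope": {1, 2, 3, 4},
      "s": {"M1": 16.2, "M5": 40.9, "M6": 41.0},
      "l": {"M1": 2.1, "M5": 3.1, "M6": 3.2}}
print(is_monotonic([M1, M5, M6]))   # True
```

A sequence passes exactly when relaxing the restrictions never makes the planner cheaper or its plans shorter on any problem set already covered by an earlier method.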
From this definition, if a multi-method planner is strongly monotonic, it is monotonic, while the reverse is not necessarily true.

To generate a strongly monotonic multi-method planner, it is necessary to determine which biases are positive in the domain. Table 1 illustrates the average number of decisions, s(Mki, Akj), and average plan lengths, l(Mki, Akj), for the six single-method planners and the problem sets defined by the scopes of these planners over a training set of 30 randomly generated 3- and 4-conjunct problems in the blocks-world domain. In this domain, A4 is the same as A1 because if a problem is not solvable with protection, it also is not solvable with directness. A5 is the same as A6 because both M5 and M6 are complete in this domain, though M5 may not be able to generate an optimal solution. A2 and A3 are different sets in principle, because problems such as Sussman's anomaly cannot be solved by a linear planner with protection (M2) but can be by a nonlinear planner with protection (M3). However, among the 30 problems, these "anomaly" problems did not occur, yielding A2 = A3 for this set of problems. The results imply that directness and protection are positive in this domain, while linearity is not, since l(M5, A1) > l(M6, A1) and l(M5, A2) > l(M6, A2). If we use linearity as an independent bias - so that one set of multi-method planners is generated using it and one set without it - and vary directness and protection within the individual multi-method planners, we get a set of 10 strongly monotonic multi-method planners (four three-method planners and six two-method planners).
                      Decisions                   Plan length
Planner        A1     A2     A3     A5      A1    A2    A3    A5
M5            22.21  29.41  29.48  29.22   3.00  3.78  3.83  3.82
M6            33.40  47.12  48.06  47.93   2.90  3.88  4.07  4.14
Average              38.58                       3.98

M1→M2→M5      13.26  24.69  25.07  26.13   1.82  2.48  2.54  2.58
M1→M3→M6      13.26  26.34  26.55  28.91   1.82  2.52  2.54  2.59
M1→M4→M5      13.26  26.16  26.41  26.79   1.82  2.85  2.92  2.94
M1→M4→M6      13.26  36.78  37.40  37.30   1.82  2.91  2.99  3.02
M1→M5         13.26  25.68  25.86  26.04   1.82  2.96  3.02  3.03
M1→M6         13.26  31.54  31.85  31.77   1.82  2.89  2.94  2.97
M2→M5         19.54  27.89  28.18  29.34   1.85  2.43  2.49  2.58
M3→M6         21.22  28.46  28.41  30.67   2.00  2.52  2.52  2.57
M4→M5         16.85  27.81  27.95  28.38   1.82  2.83  2.88  2.93
M4→M6         16.85  33.33  33.59  34.47   1.82  2.83  2.85  2.95
Average              29.98                       2.82

M1-2-5         8.63  12.87  13.00  13.01   1.82  2.80  2.84  2.90
M1-3-6         8.63  13.38  13.43  13.56   1.82  2.53  2.53  2.59
M1-4-5         8.63  13.19  13.29  13.25   1.82  3.25  3.32  3.34
M1-4-6         8.63  13.48  13.73  13.63   1.82  2.87  2.96  2.97
M1-5           8.63  12.21  12.36  12.51   1.82  2.63  2.73  2.81
M1-6           8.63  13.22  13.27  13.23   1.82  2.68  2.69  2.73
M2-5          19.19  23.75  23.76  23.80   2.56  3.07  3.11  3.16
M3-6          16.62  23.45  23.56  24.22   2.03  2.56  2.57  2.71
M4-5          13.57  17.24  17.30  17.38   2.44  3.71  3.77  3.77
M4-6          14.10  19.28  19.58  19.83   2.41  3.33  3.43  3.46
Average              16.44                       3.04

Table 2: Single-method and coarse-grained multi-method vs. fine-grained multi-method planning in the blocks-world domain.

Experimental Results

Table 2 compares the strongly monotonic fine-grained multi-method planners with the corresponding coarse-grained multi-method planners and (complete) single-method planners over a test set of 100 randomly generated 3- and 4-conjunct blocks-world problems (this test set is disjoint from the 30-problem training set used in developing the multi-method planners).
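The planner comparisons over these test sets are assessed with two-sample z-tests on mean decision counts. A minimal sketch of such a test; the per-problem data below are invented, since the paper reports only the resulting z values:

```python
from math import sqrt

def z_statistic(xs, ys):
    """Two-sample z statistic for a difference of means, using unbiased
    sample variances; reasonable for samples the size of the paper's
    100-problem test sets."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    return (mx - my) / sqrt(vx / nx + vy / ny)

# Invented per-problem decision counts for two hypothetical planners.
single = [30, 35, 40, 28, 33, 37, 41, 29, 36, 34]
fine = [15, 18, 13, 17, 16, 14, 19, 15, 17, 16]
z = z_statistic(single, fine)
print(round(z, 2))   # well above the ~1.96 threshold for p < .05
```

A z value beyond about 1.96 corresponds to significance at the 5% level (two-sided), and beyond about 2.58 to the 1% level used in the results that follow.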
Z-tests on this data reveal that fine-grained multi-method planners take significantly less planning time than both single-method planners (z=5.35, p<.01) and coarse-grained multi-method planners (z=6.72, p<.01). This likely stems from fine-grained multi-method planners preferring to search within the more efficient spaces defined by the biases - thus tending to outperform single-method planners - but being able to recover from bias failure without throwing away everything already done for a problem (thus tending to outperform coarse-grained multi-method planners).

Fine-grained multi-method planners also generate significantly shorter plans than single-method planners (z=3.42, p<.01). They generate slightly longer plans than coarse-grained multi-method planners; however, no significance is found at a 5% level (z=1.77). These results likely arise because, whenever possible, both types of multi-method planners use the more restrictive methods that yield shorter plan lengths, while there may be little difference between the methods that ultimately succeed for the two types of multi-method planners.

Table 3 illustrates the performance of these three types of planners over a test set of 100 randomly generated 5-conjunct problems in the machine shop scheduling domain (Minton, 1988). In this domain, no precondition subgoals are required because there is no operator which achieves any of the unmet preconditions. Thus both directness and linearity are irrelevant. However, there are strong interactions among the operators, so protection violations are still relevant. In consequence, the entire table of six planners reduces to only two distinct planners for this domain: with or without protection.

                        Decisions       Plan length
Planner                 A1     A4       A1    A4
M4, M5, M6             31.47  33.97    4.13  4.47
M1→M4, M2→M5, M3→M6    26.17  35.91    2.43  3.58
M1-4, M2-5, M3-6       18.71  19.07    2.87  3.29

Table 3: Single-method and coarse-grained multi-method vs.
fine-grained multi-method planning in the scheduling domain.

Figure 1: Performance of single-method planners (+), coarse-grained multi-method planners (o), and fine-grained multi-method planners (*) in the blocks-world domain.

As with the blocks-world domain, the z-tests in the scheduling domain indicate that fine-grained planners dominate both single-method planners (z=10.91, p<.01) and coarse-grained planners (z=8.95, p<.01) in terms of planning time. Fine-grained planners also generate significantly shorter plans than do the single-method planners (z=6.49, p<.01). They generate slightly shorter plans than coarse-grained multi-method planners; however, no significance is found at a 5% level (z=1.28).

Figures 1 and 2 plot the average number of decisions versus the average plan lengths for the data in Tables 2 and 3. These figures graphically illustrate how the coarse-grained approach primarily reduces plan length in comparison to the single-method approach, and how the fine-grained approach primarily improves efficiency in comparison to the coarse-grained approach.

Figure 2: Performance of single-method planners (+), coarse-grained multi-method planners (o), and fine-grained multi-method planners (*) in the scheduling domain.

Related Work

The basic approach of bias relaxation in multi-method planning is similar to the shift of bias for inductive concept learning (Russell & Grosof, 1987; Utgoff, 1986). In the planning literature, this approach is closely related to an ordering modification, a control strategy that prefers exploring some plans before others (Gratch & DeJong, 1990). Bhatnagar & Mostow (1990) described a relaxation mechanism for over-general censors in FAILSAFE-2. However, there are a number of differences, such as the type of constraints used, the granularity at which censors are relaxed, and the way censors are relaxed.
Steppingstone (Ruby & Kibler, 1991) tries constrained search first, and moves on to unconstrained search if the constrained search reaches an impasse (within the boundary of ordered subgoals) and the knowledge stored in memory cannot resolve the impasse. This approach is also related to traditional partial-order planning, where heuristics are used to guide the search over the space of partially ordered plans without violating planner completeness (McAllester & Rosenblitt, 1991; Barrett & Weld, 1993; Chapman, 1987). For example, using directness in fine-grained multi-method planners is similar to preferring the nodes which reduce the size of the set of open conditions when a new step is added. Relaxing bias in fine-grained multi-method planners only when it is necessary is similar to the least-commitment approach which adds ordering constraints only if a threat to a causal link is detected.

Conclusion

In this paper, we have provided a way to select a set of positive biases for multi-method planning and investigated the effect of refining the granularity at which individual planning methods could be switched. The experimental results obtained so far in the blocks-world and machine-shop-scheduling domains imply that (1) fine-grained multi-method planners can be significantly more efficient than single-method planners in terms of planning time and plan length, and (2) fine-grained multi-method planners can be significantly more efficient than coarse-grained multi-method planners in terms of planning time.

Another way to enhance the multi-method planning framework would be to extend the set of biases available to include ones that limit the size of the goal hierarchy (to reduce the search space), limit the length of plans generated (to shorten execution time), and lead to learning more effective rules (to increase transfer) (Etzioni, 1990). Investigations of these topics are in progress.

The bias selection approach used here is based on preprocessing a set of training examples in order to develop fixed sequences of biases (and methods). A more dynamic, run-time approach would be to learn, while doing, which biases (and methods) to use for which classes of problems. If such learned information can transfer to the later problems, much of the effort wasted in trying inappropriate methods, as well as the effort for preprocessing, may be reduced (as demonstrated in (Rosenbloom, Lee, & Unruh, 1993)).

References

Barrett, A., & Weld, D. S. (1993). Partial-order planning: Evaluating possible efficiency gains. Artificial Intelligence. To appear.

Bhatnagar, N., & Mostow, J. (1990). Adaptive search by explanation-based learning of heuristic censors. Proceedings of the Eighth National Conference on Artificial Intelligence (pp. 895-901). Boston, MA: AAAI Press.

Chapman, D. (1987). Planning for conjunctive goals. Artificial Intelligence, 32, 333-377.

Etzioni, O. (1990). Why Prodigy/EBL works. Proceedings of the Eighth National Conference on Artificial Intelligence (pp. 916-922). Boston, MA: AAAI Press.

Gratch, J. M., & DeJong, G. F. (1990). A framework for evaluating search control strategies. Proceedings of the Workshop on Innovative Approaches to Planning, Scheduling, and Control (pp. 337-347). San Diego, CA: Morgan Kaufmann.

Korf, R. E. (1985). Depth-first iterative-deepening: An optimal admissible tree search. Artificial Intelligence, 27, 97-109.

Laird, J. E., Newell, A., & Rosenbloom, P. S. (1987). Soar: An architecture for general intelligence. Artificial Intelligence, 33, 1-64.

Lee, S., & Rosenbloom, P. S. (1992). Creating and coordinating multiple planning methods. Proceedings of the Second Pacific Rim International Conference on Artificial Intelligence (pp. 89-95).

McAllester, D., & Rosenblitt, D. (1991). Systematic nonlinear planning. Proceedings of the Ninth National Conference on Artificial Intelligence (pp. 634-639). Anaheim, CA: AAAI Press.

Minton, S. (1988).
Learning effective search control knowledge: An explanation-based approach. Ph.D. thesis, Department of Computer Science, Carnegie Mellon University.

Rosenbloom, P. S., Lee, S., & Unruh, A. (1993). Bias in planning and explanation-based learning. In S. Chipman & A. L. Meyrowitz (Eds.), Foundations of Knowledge Acquisition: Cognitive Models of Complex Learning (pp. 269-307). Hingham, MA: Kluwer Academic Publishers. (Also available in S. Minton (Ed.), Machine Learning Methods for Planning and Scheduling. San Mateo, CA: Morgan Kaufmann. In press.)

Rosenbloom, P. S., Lee, S., & Unruh, A. (1990). Responding to impasses in memory-driven behavior: A framework for planning. Proceedings of the Workshop on Innovative Approaches to Planning, Scheduling, and Control (pp. 181-191). San Diego, CA: Morgan Kaufmann.

Ruby, D., & Kibler, D. (1991). Steppingstone: An empirical and analytical evaluation. Proceedings of the Ninth National Conference on Artificial Intelligence (pp. 527-532). Anaheim, CA: AAAI Press.

Russell, S. J., & Grosof, B. N. (1987). A declarative approach to bias in concept learning. Proceedings of the Sixth National Conference on Artificial Intelligence (pp. 505-510). Seattle, WA: Morgan Kaufmann.

Utgoff, P. E. (1986). Shift of bias for inductive concept learning. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine Learning: An Artificial Intelligence Approach, Vol. II. Los Altos, CA: Morgan Kaufmann.
Threat-Removal Strategies for Partial-Order Planning

Mark A. Peot
Department of Engineering-Economic Systems
Stanford University
Stanford, California 94305
peot@rpal.rockwell.com

David E. Smith
Rockwell International
444 High St.
Palo Alto, California 94301
de2smith@rpal.rockwell.com

Abstract

McAllester and Rosenblitt's (1991) systematic nonlinear planner (SNLP) removes threats as they are discovered. In other planners such as SIPE (Wilkins, 1988) and NOAH (Sacerdoti, 1977), threat resolution is partially or completely delayed. In this paper, we demonstrate that planner efficiency may be vastly improved by the use of alternatives to these threat removal strategies. We discuss five threat removal strategies and prove that two of these strategies dominate the other three, resulting in a provably smaller search space. Furthermore, the systematicity of the planning algorithm is preserved for each of the threat removal strategies. Finally, we confirm our results experimentally using a large number of planning examples, including examples from the literature.

1 Introduction

McAllester and Rosenblitt (1991) present a simple elegant algorithm for systematic nonlinear planning (SNLP). Much recent planning work (Barrett & Weld, 1993; Collins & Pryor, 1992; Harvey, 1993; Kambhampati, 1993a; Penberthy & Weld, 1992; Peot & Smith, 1992) has been based upon this algorithm (or the Barrett & Weld (1993) implementation of it).

In the SNLP algorithm, when threats arise between steps and causal links in a partial plan, those threats are resolved before attempting to satisfy any remaining open conditions in the partial plan. From a practical standpoint, we know that this is not always the most efficient course. When there are only a few loosely coupled threats in a problem, it is generally more efficient to delay resolving those threats until the end of the planning process.¹ However, if there are many tightly-coupled threats (causing most partial plans to fail), those threats should be resolved early in the planning process to avoid extensive backtracking. These two options, resolve threats immediately and resolve threats at the end, represent two extreme positions. There are several other options, such as waiting to resolve a threat until it is no longer separable, or waiting until there is only one way of resolving the threat. Given a reasonable mix of problems, what is the best strategy or strategies? In this paper, we introduce four alternative threat removal strategies and show that some are strictly better than others. In particular, we show that delaying separable threats generates a smaller search space of possible plans than the SNLP algorithm.

In Section 2 we give preliminary definitions and a version of the SNLP algorithm. In Section 3 we introduce four different threat removal strategies and investigate the theoretical relationships between them. In Section 4 we give empirical results that confirm the analysis of Section 3. In Section 5, we discuss work related to the work in this paper.

1. Hacker (Sussman, 1973), Noah (Sacerdoti, 1977), and (to a certain extent) Sipe (Wilkins, 1988) delay the resolution of threats until the end of the planning process. Yang (1993) has also explored this strategy.

2 Preliminaries

Following (Kambhampati, 1992), (Barrett & Weld, 1993), (Collins & Pryor, 1992) and (McAllester & Rosenblitt, 1991), we define causal links, threats and plans as follows:

Definition 1: An open condition, g, is a precondition of an operator in the plan that has no corresponding causal link.

Definition 2: A causal link, L: se →g sc, protects effect g of the establishing plan step, se, so that it can be used to satisfy a precondition of the consuming plan step, sc.

Definition 3: An ordering constraint, O: s1 > s2, restricts step s1 to occur after step s2.
Definition 4: A plan is a tuple (S, L, O, G, B) where S denotes the set of steps in the plan, L denotes the set of causal links, O denotes the set of ordering constraints, G denotes the outstanding open conditions, and B denotes the set of equality and inequality constraints on variables contained in the plan.

Definition 5: A threat, T: st ⊗b (se →g sc), represents a potential conflict between an effect of a step in the plan, st, and a causal link se →g sc. st threatens the causal link se →g sc if st can occur between se and sc and an effect, e, of st possibly unifies with either g or ¬g when the additional binding constraints b are added to the plan bindings. Unification of two literals, e and g, under bindings b is denoted by e ≈b g. If e ≈b g, we refer to T as a positive threat. If e ≈b ¬g, we refer to T as a negative threat.

To make our analysis easier, we will work with the modified version of the SNLP algorithm given below. The primary difference between this algorithm and the algorithms given in (Barrett, Soderland & Weld, 1991; Collins & Pryor, 1992; and Kambhampati, 1993a) is that threat resolution takes place immediately after Add-Link and Add-Step. As a result, the set of partial plans being considered never contains any plans with unresolved threats.

Plan(initial-conditions, goal):
1. Initialization: Let Finish be a plan step having preconditions equal to the goal conditions and let Start be a plan step having effects equal to the initial conditions. Let Q be the set consisting of the single partial plan ({Start, Finish}, ∅, {Start < Finish}, G, ∅), where G is the set of open conditions corresponding to the goals.
2. Expansion: While Q is not empty, select a partial plan p = (S, L, O, G, B) and remove it from Q.
   A. Termination: If the open conditions G are empty, return a topological sort of S instantiated with the bindings B.
   B. Open Condition: Select some g ∈ G (a precondition of step sc) and do the following:
      i. Add Link: For each s ∈ S with an effect e such that e ≈b g and s is possibly prior to sc, call Resolve-Threats(S, L + s →g sc, O + (s < sc), G - g, B + b).
      ii. Add Step: For each a ∈ A with an effect e such that e ≈b g, call Resolve-Threats(S + s, L + s →g sc, O + (s < sc), G - g + Gs, B + b), where A denotes the set of all action descriptions, s denotes the new plan step constructed by copying a with a fresh set of variables, and Gs is the set of preconditions of s.

Resolve-Threats(S, L, O, G, B):
1. Let T be the set of threats between steps in S and causal links in L.
2. If T is empty, add (S, L, O, G, B) to Q.
3. If T is not empty, select some threat st ⊗b (se →g sc) ∈ T and do the following:
   A. Demotion: If st < se is consistent with O, call Resolve-Threats(S, L, O + st < se, G, B + b).²
   B. Promotion: If sc < st is consistent with O, call Resolve-Threats(S, L, O + sc < st, G, B + b).
   C. Separation: For each (xi = yi) ∈ b, call Resolve-Threats(S, L, O, G, B + {xk = yk}k=1..i-1 + (xi ≠ yi)), where {xk = yk}k=1..i-1 denotes the set of the first i-1 equality constraints in b.³

2. The criteria for separation, promotion, and demotion used in (Barrett & Weld, 1993) and (Collins & Pryor, 1992) are not mutually exclusive. In order to preserve systematicity, one must either restrict separation so that the threatening step occurs between the producer and consumer steps, or restrict the variable bindings for promotion and demotion so that separation is not possible. For our purposes, the latter restriction is simpler.

3. The addition of equality constraints during separation is required to maintain systematicity [3].

3 Delay Strategies

In the SNLP algorithm, when threats arise in a partial plan, they are immediately resolved before attempting to satisfy any remaining open conditions. There are three ways that a threat can be resolved: separation, promotion, and demotion. Separation forces the variable bindings in the clobbering step to be different than those in the threatened causal link.
Demotion forces the clobbering step to come before the producing step in the causal link, and promotion forces the clobbering step to come after the consumer of the causal link. If all three of these are possible, there will be at least a three-way branch in the search space of partial plans. In fact, it can be worse than this, because there may be many alternative ways of doing separation, and all of them must be considered. In practice, however, threat removal is often deferred in order to improve planning performance. Harvey (1993) has shown that any threat removal order may be used in the SNLP algorithm without compromising the algorithm's systematicity. In this section, we describe four alternative threat deferral strategies and the effect of these strategies on the size of the planner search space.

3.1 Separable Delay

Many of the threats that occur during planning are ephemeral. As planning continues, variables in both the clobbering step and the causal link may get bound, causing the threat to go away. This causes the promotion and demotion branches for that partial plan to go away, and causes all but one of the separation branches for the plan to go away. Thus, it would seem to make heuristic sense to postpone resolving a threat until the threat becomes definite; that is, until the bindings of the clobbering step and the causal link are such that the threat is guaranteed to occur. Thus we could modify the SNLP algorithm in the following way:

Resolve-Threats(S, L, O, G, B):
1. Let T be the set of unseparable threats between steps s ∈ S and causal links l ∈ L. These threats are those that are guaranteed to occur regardless of the addition of further binding constraints.
2. If T is empty, add (S, L, O, G, B) to Q.
3. If T is not empty, select some threat st ⊗ (se →g sc) ∈ T and do the following:
   A. Demotion: If st < se is consistent with O, call Resolve-Threats(S, L, O + st < se, G, B).
   B.
Promotion: If sc < st is consistent with O, call Resolve-Threats(S, L, O + sc < st, G, B).

We refer to this threat resolution strategy as DSep. Note that DSep is a complete, systematic planning algorithm that does not require the use of separation.

Theorem 1: The space of partial plans generated by DSep is no larger than (and often smaller than) the space of partial plans generated by SNLP. (We are assuming that the two algorithms use the same strategy to decide which open conditions to work on.)

Sketch of Proof: The essence of the proof is to show that each partial plan generated by DSep has a unique corresponding partial plan generated by SNLP. Let p = (S, L, O, G, B) be a partial plan generated by DSep and let Z = z1, ..., zn be the sequence of planner operations (add-link, add-step, demote, and promote) used by DSep to construct p. For each threat t introduced by an operation zt in Z, there are three possibilities:
1. t became unseparable and was resolved using a later Promote or Demote operation zr.
2. t is still separable and hence is unresolved in p.
3. t was separable, but eventually disappeared because of binding or ordering constraints introduced by a later operation zr.
For each threat t, we perform the corresponding modification to the sequence Z indicated below:
1. move zr to immediately after zt.
2. add a Separate operation for t immediately after zt.
3. add the appropriate Promote, Demote, or Separate operation immediately after zt that mimics the way in which the threat is eventually resolved.
In this new sequence Z', all threats are resolved immediately after they are introduced. As a result, Z' is now the sequence of planning operations that would have been generated by SNLP.
The plan p' = (S, L, O', G, B') generated by this sequence differs from the original plan only in that (1) B' is augmented with separation constraints for each unresolved threat in p, and (2) O and O' may differ in redundant ordering constraints, but Closure(O) = Closure(O'). As a result, the mapping from DSep plans to SNLP plans is one to one, and the theorem follows.

Although the space of partial plans generated by DSep is smaller than that generated by SNLP, we cannot guarantee that DSep will always be faster than SNLP. There are two reasons for this:
1. There is overhead associated with delaying threats because the planner must continue to check separability.
2. When a threat becomes unseparable, DSep must check to see if demotion or promotion is possible. Because the space of partial plans may have grown considerably since the threat was introduced, there might be more of these checks than if resolution had taken place at the time the threat was introduced. (Of course, the reverse can also happen.)

3.2 Delay Unforced Threats

A natural extension of the DSep idea would be to delay resolving a threat until there is only one (or no) threat resolution option remaining. (This is the ultimate least-commitment strategy with regard to threats.) Thus, if demotion were the only possibility for resolving a threat, the appropriate ordering constraint would be added. Alternatively, if separation were the only option, and there was only one way of separating the variables, the appropriate not-equals constraint would be added to the plan. We refer to this threat resolution strategy as DUnf. The threat resolution procedure for DUnf is shown below:

Resolve-Threats(S, L, O, G, B):
1. Let T be the set of threats st ⊗b (se →g sc) between steps in S and causal links in L such that either:
   A. st < se is consistent with O, st < sc, and b = ∅;
   B. sc < st is consistent with O, st ≥ se, and b = ∅; or
   C. se < st < sc and b contains exactly one constraint.
2. If T is empty, add (S, L, O, G, B) to Q.
3.
If T is not empty, select some threat s_t ⊗ (s_e →_c s_c) ∈ T, with separation bindings b, and do the following:
   A. Demotion: If s_t < s_e is consistent with O, Resolve-Threats(S, L, O + s_t < s_e, G, B + b).
   B. Promotion: If s_c < s_t is consistent with O, Resolve-Threats(S, L, O + s_c < s_t, G, B + b).
   C. Separation: For the single binding constraint (x = y), Resolve-Threats(S, L, O, G, B + (x ≠ y)).

It might seem that this strategy would always expand fewer partial plans than DSep. Unfortunately, this is not the case, for at least two reasons. First of all, it is possible to construct examples where both promotion and demotion are possible for each individual threat, but the entire set of threats is unsatisfiable (a direct conclusion from the fact that the problem of determining whether a set of threats might be resolved by the addition of ordering constraints is NP-complete (Kautz, 1992)). For such a case, the DUnf planner would choose not to work on the threats and therefore would not recognize that the plan was impossible. In contrast, SNLP and DSep would commit to either promotion or demotion for each threat, and would therefore discover that the plan was impossible earlier than would the DUnf strategy. In addition, when the DUnf strategy postpones the addition of ordering constraints to a plan, it allows plan branches to be developed that contain add-link operations that might have been illegal if those ordering constraints had been added earlier in the planning process. On the other hand, it is also easy to construct domains where DSep is clearly inferior to DUnf. Therefore we can conclude:

Theorem 2: Neither DUnf nor DSep is guaranteed to generate a smaller search space than the other for all planning problems.

Another possible drawback to DUnf is that checking whether there is only one option for resolving a threat may be costly, whereas the separability criterion used in DSep is relatively easy to check.
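The DUnf "forced threat" test above can be sketched in code. This is our illustration, not the authors' implementation: the ordering O is a hypothetical set of (before, after) edges, and a threat is given by its step s_t, the link's producer s_e and consumer s_c, and its separation bindings b.

```python
# Sketch of DUnf's test for a "forced" threat: exactly one of demotion,
# promotion, or a unique separation remains.  (Hypothetical encoding.)

def reachable(edges, src, dst):
    """True if dst is (transitively) ordered after src."""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(b for a, b in edges if a == node)
    return False

def consistent(edges, before, after):
    """Adding before < after is consistent unless after already precedes it."""
    return not reachable(edges, after, before)

def forced_resolution(edges, s_t, s_e, s_c, bindings):
    """Return the single remaining option for threat s_t vs link s_e ->c s_c,
    or None if the threat is still unforced (DUnf keeps delaying it)."""
    demote = consistent(edges, s_t, s_e)      # s_t < s_e still addable
    promote = consistent(edges, s_c, s_t)     # s_c < s_t still addable
    if not bindings:
        if demote and not promote:
            return ('demote', (s_t, s_e))
        if promote and not demote:
            return ('promote', (s_c, s_t))
    elif not demote and not promote and len(bindings) == 1:
        return ('separate', bindings[0])
    return None
```

When both orderings remain possible (case discussed below), the function returns None and DUnf leaves the threat alone.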
3.3 Delay Unresolvable Threats

An alternative to DUnf would be to ignore a threat until it becomes impossible to resolve, and then simply discard the partial plan. We refer to this alternative as DRes:

1. Let T be the set of threats between steps in S and causal links in L.
2. For all s_t ⊗ (s_e →_c s_c) ∈ T, if either b is nonempty, s_t < s_e is consistent with O, or s_c < s_t is consistent with O, then add (S, L, O, G, B) to Q.

To see the difference between DUnf and DRes, consider a partial plan with a threat that can only be resolved by demotion. Using DUnf, we would generate a new partial plan with the appropriate ordering constraint. This ordering constraint could, in turn, prevent any number of possible add-link operations, and could reduce the possible ways of resolving other threats. If DRes were used, this additional ordering constraint would not be present. As a result, DUnf will consider fewer partial plans than DRes. It is relatively easy to show that:

Theorem 3: The space of partial plans generated by DRes is at least as large as the space generated by DUnf.

Since the cost of checking whether a threat is unresolvable is just as expensive as checking whether a threat has only one resolution, there should be no advantage to DRes over DUnf.

3.4 Delay Threats to the End

The final extreme approach is to delay resolving threats until all open conditions have been satisfied. We refer to this algorithm as DEnd. Threat resolution strategies similar to DEnd are used in (Sussman, 1973; Sacerdoti, 1977; Wilkins, 1988; and Yang, 1993). The primary advantage of this approach is that there is no cost associated with checking or even generating threats until the plan is otherwise complete. In problems where there are few threats, or the threats are easy to resolve by ordering constraints, this approach is a win. However, if most partial plans fail because of unresolvable threats, this technique will generate many partial plans that are effectively dead.
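The DRes filter of Section 3.3 keeps a partial plan alive unless some threat has become unresolvable. A small sketch (our encoding, not the authors' code; orderings are hypothetical (before, after) edges):

```python
# DRes discards a plan only when some threat has no resolution left: no
# separation bindings remain and neither demotion nor promotion is consistent.

def reachable(edges, src, dst):
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(b for a, b in edges if a == node)
    return False

def threat_unresolvable(edges, s_t, s_e, s_c, bindings):
    if bindings:                                   # separation still possible
        return False
    can_demote = not reachable(edges, s_e, s_t)    # s_t < s_e still addable
    can_promote = not reachable(edges, s_t, s_c)   # s_c < s_t still addable
    return not (can_demote or can_promote)

def plan_is_dead(edges, threats):
    """DRes keeps a plan on the queue unless some threat is unresolvable."""
    return any(threat_unresolvable(edges, *t) for t in threats)
```

As the text notes, this check costs as much as DUnf's single-option check, so it offers no advantage over DUnf.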
It is relatively easy to show that:

Theorem 4: The space of partial plans generated by DRes is no larger than (and is sometimes smaller than) the space generated by DEnd.

The search space relationships between the five threat removal strategies are summarized in Figure 1.

Figure 1: Search space relationships for the five threat removal strategies (SNLP, DSep, DUnf, DRes, DEnd).

We tested the five threat resolution strategies on several problems in each of several different domains: a discrete-time version of Minton's machine shop scheduling domain (1988), a route planning domain, Russell's tire changing domain, Barrett & Weld's (1993) artificial domains DmS1 and D1S1 (also called the ART-MD and ART-1D domains), and ART-1D-RD and ART-MD-RD, Kambhampati's (1993a) variations on these domains. Ordinarily, the performance of a planner depends heavily on, first, the order in which the partial plans are selected and, second, the order in which open conditions are selected. In order to try to filter out these effects, we tested each domain using several different strategies. We used the A* search algorithm in our testing. The 'g' function is the length of the partial plan, and 'h' is the number of open conditions. We did not include the number of threats in 'h' because this number varies across different threat resolution strategies. The search algorithm is engineered so that it always searches partial plans with equivalent causal structures (generated by the same chain of add-link and add-step operations) in the same order regardless of the threat resolution strategy selected. These tests demonstrate the relative search space theorems by showing that an inefficient threat resolution strategy (for example, SNLP) generates at least one (and often more than one) partial plan for each equivalent partial plan generated by one of the more efficient threat resolution strategies (for example, DSep).
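The search harness just described can be sketched as a best-first loop with f = g + h, where g is the number of steps in the partial plan and h is the number of open conditions. The plan representation and `expand` function below are our hypothetical stand-ins, not the authors' planner:

```python
# Sketch of the A*-style harness: best-first over partial plans with
# g = plan length, h = number of open conditions.  The tie-breaking counter
# keeps plans with equal f in a stable order, as the text requires.

import heapq
import itertools

def best_first_plan_search(initial_plan, expand, is_solution, limit=10000):
    counter = itertools.count()
    g = len(initial_plan['steps'])
    h = len(initial_plan['open'])
    queue = [(g + h, next(counter), initial_plan)]
    explored = 0
    while queue and explored < limit:
        _, _, plan = heapq.heappop(queue)
        explored += 1
        if is_solution(plan):
            return plan, explored
        for child in expand(plan):
            f = len(child['steps']) + len(child['open'])
            heapq.heappush(queue, (f, next(counter), child))
    return None, explored
```

The `explored` count is what the relative-search-space plots below normalize between threat resolution strategies.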
For selecting open conditions, we used a LIFO strategy, a FIFO strategy, and a more sophisticated least-commitment strategy. The LIFO and FIFO strategies refer to the order in which open conditions are attacked: oldest or youngest first. The least-commitment strategy selects the open condition to expand that would result in the fewest immediate children. For example, assume that two open conditions A and B are under consideration. Open condition A can be satisfied by adding either of two different operators to the plan. Condition B, on the other hand, can only be satisfied by linking to a unique initial condition. In this situation, the least-commitment strategy would favor working on B first. Note that the least-commitment strategy violates the assumption of Theorem 1; the action of the least-commitment strategy can depend on previous threat resolution actions.

The planner used for these demonstrations differs from the algorithm described in this paper in that it can ignore positive threats.⁴ For most of our testing, we turned off detection of positive threats in order to reduce the amount of time spent in planning. In the following plots, we have made no attempt to distinguish between individual planning problems. Instead, we have plotted the relative size of the search space of each problem in the sample domain. Each line corresponds to a single problem/conjunct-ordering-strategy pair. The relative size plotted in these figures is the quotient of the number of nodes explored by the planner when using a particular threat resolution strategy and the number of nodes explored when using a reference strategy. For example, in Figure 2, all of the search space sizes are normalized relative to the size of the most efficient threat resolution strategy for this domain, DSep. The shapes of these plots demonstrate the superiority of the DSep and DUnf threat resolution strategies over all of the other threat resolution strategies.
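The least-commitment conjunct-ordering strategy above amounts to a one-line selection rule. The achiever table here is a hypothetical illustration of the A/B example in the text:

```python
# Least-commitment open-condition selection: work first on the open condition
# with the fewest immediate children (operators or initial conditions that
# could satisfy it).

def select_open_condition(open_conditions, achievers):
    """Pick the open condition with the fewest ways of satisfying it."""
    return min(open_conditions, key=lambda cond: len(achievers.get(cond, ())))

achievers = {
    'A': ['Op1', 'Op2'],   # two different operators could achieve A
    'B': ['Start'],        # B only links to a unique initial condition
}
```

With this table, condition B is selected before A, matching the example.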
The pseudoconvex shape of each of these curves illustrates the dominance relationships illustrated in Figure 1. Figures 2 through 4 illustrate the relative search space size of several problems drawn from the tire changing, machine shop, and route planning domains, respectively. Figures 5 and 6 illustrate the relative search space sizes for a variety of problems drawn from the ART-MD-RD and ART-1D-RD domains. The choice of threat delay strategy has no effect on the DmS1 and D1S1 domains (that is, the relative search space sizes are identical).

Figure 2: Russell's Tire Changing Domain (3 Problems)
Figure 3: Machine Shop (3 Problems)
Figure 4: Route Planning Domain (5 Problems)
Figure 5: ART-MD-RD Planning Domain (29 Problems)
Figure 6: ART-1D-RD Domain (29 Problems)
Figure 7: CPU time required for ART-1D-RD Problems

Note that the artificial domains are propositional. Thus DSep has exactly the same effect as the default SNLP strategy because threats are never separable. In Figure 7, we plot the CPU time required for planning using three of the threat resolution strategies on an assortment of ART-MD-RD problems. On small problems, the additional computation required for the more complicated threat resolution strategies dominates. For larger problems, however, we expect that these factors will become unimportant and that we will realize a savings that is roughly exponential in the number of threats.

⁴ ...and is, therefore, nonsystematic.

Recently, Kambhampati (1993a) performed tests with several different planning algorithms, including SNLP, and argues that systematicity reduces the redundancy of the search space at the expense of increased commitment by the planner.
He shows data indicating that for some classes of problems SNLP performs more poorly than planners that are nonsystematic. Although we find these results interesting (and find Kambhampati's multi-contributor planners intriguing), we are left wondering to what extent his results would be affected by a more judicious selection of threat resolution strategies. In his tests, Kambhampati also considers a variant of SNLP where positive threats are ignored (NONLIN). It is possible to construct examples where ignoring a positive threat in SNLP results in an arbitrarily large increase in search. Conversely, we believe that the consideration of positive threats does not cost a great deal. In particular, if t is the potential number of positive threats in a partial plan, we conjecture that considering positive threats in the delayed separability algorithm will never result in more than a factor of 3t increase in the size of the search space of partial plans.

Kambhampati (1993b) also observes that the DSep threat resolution strategy is identical to SNLP if the definition of threats is modified. In particular, we would only recognize a threat when the postcondition of a potentially threatening operator unifies with a protected precondition, regardless of any bindings that might be added to the plan. Thus, with the appropriate threat definition, separation is not required for a complete, systematic variation of SNLP. In addition, Kambhampati claims that these threat resolution strategies can be applied to planners that use multi-contributor causal structures (Kambhampati, 1992).

Yang (1993) has investigated the use of constraint satisfaction methods for resolving sets of threats. In our experience, the time at which the planner resolves threats has far more impact on performance than the method used for resolving sets of threats. In (Smith & Peot, 1993), we have been investigating a more involved method of deciding when to work on threats.
In particular, we show that for certain kinds of threats it is possible to prove that the threats can always be resolved at the end, and can therefore be postponed until planning is otherwise complete. The work reported in this paper is complementary, since it suggests strategies for resolving threats that cannot be provably postponed.

Acknowledgments

This work was supported by DARPA contract F30602-91-C-0031 and an NSF Graduate Fellowship. Thanks to Will Harvey, Rao Kambhampati, and Dan Weld for their contributions to this paper.

References

Barrett, A., and Weld, D., Partial Order Planning: Evaluating Possible Efficiency Gains, to appear in Artificial Intelligence, 1993.
Collins, G., and Pryor, L., Representation and Performance in a Partial Order Planner, Technical Report 35, The Institute for the Learning Sciences, Northwestern University, 1992.
Harvey, W., Deferring Conflict-Resolution Retains Systematicity, submitted to AAAI, 1993.
Yang, Q., A Theory of Conflict Resolution in Planning, Artificial Intelligence, 58 (1992), pp. 361-392.
Kambhampati, S., Characterizing Multi-Contributor Causal Structures for Planning, Proceedings of the First International Conference on Artificial Intelligence Planning Systems, College Park, Maryland, 1992.
Kambhampati, S., On the Utility of Systematicity: Understanding Trade-offs between Redundancy and Commitment in Partial-Ordering Planning, submitted to IJCAI, 1993a.
Kambhampati, S., A Comparative Analysis of Search Space Size, Systematicity and Performance of Partial-Order Planners, CSE Technical Report, Arizona State University, 1993b.
Kautz, Henry, Personal communication, April 6, 1992.
McAllester, D., and Rosenblitt, D., Systematic Nonlinear Planning, in Proceedings of the Ninth National Conference on Artificial Intelligence, pages 634-639, Anaheim, CA, 1991.
Minton, S., Learning Effective Search Control Knowledge: An Explanation-Based Approach, Ph.D.
Thesis, Computer Science Department, Carnegie Mellon University, 1988.
Penberthy, J. S., and Weld, D., UCPOP: A Sound, Complete, Partial Order Planner for ADL, in Proceedings of the Third International Conference on Knowledge Representation and Reasoning, Cambridge, MA, 1992.
Peot, M., and Smith, D., Conditional Nonlinear Planning, in Proc. First International Conference on AI Planning Systems, College Park, Maryland, 1992.
Sacerdoti, E., A Structure for Plans and Behavior, Elsevier, North Holland, New York, 1977.
Smith, D., and Peot, M., Postponing Conflicts in Nonlinear Planning, AAAI Spring Symposium on Foundations of Planning, Stanford, CA, 1993, to appear.
Sussman, G., A Computational Model of Skill Acquisition, Report AI-TR-297, MIT AI Laboratory, 1973.
Wilkins, D., Practical Planning: Extending the Classical AI Planning Paradigm, Morgan Kaufmann, San Mateo, 1988.
David E. Smith
Rockwell International
444 High St.
Palo Alto, California 94301
de2smith@rpal.rockwell.com

Abstract

An important aspect of partial-order planning is the resolution of threats between actions and causal links in a plan. We present a technique for automatically deciding which threats should be resolved during planning, and which should be delayed until planning is otherwise complete. In particular, we show that many potential threats can be provably delayed until the end; that is, if the planner can find a plan for the goal while ignoring these threats, there is a guarantee that the partial ordering in the resulting plan can be extended to eliminate the threats. Our technique involves: 1) construction of an operator graph that captures the interaction between operators relevant to a given goal, 2) decomposition of this graph into groups of related threats, and 3) postponement of threats with certain properties.

1 Introduction

In (McAllester & Rosenblitt 1991), the authors present a simple, elegant algorithm for systematic partial-order planning (SNLP). Much recent planning work (Barrett & Weld 1993, Collins & Pryor 1992, Kambhampati 1992, Penberthy & Weld 1992, Peot & Smith 1992) has been based upon this algorithm. In the SNLP algorithm, when threats arise between steps and causal links in a partial plan, those threats are resolved before attempting to satisfy any remaining open conditions in the partial plan. In (Peot & Smith 1993) we investigate several other strategies for resolving threats. Although some of these strategies work much better than the SNLP strategy, they are all fixed, dumb strategies. In practice, we know that some threats that occur during planning are easy to resolve, while others are difficult to resolve. What we would like is a smarter threat-selection strategy that can recognize and delay resolution of the easy threats in order to concentrate effort on the difficult ones.
Department of Engineering Economic Systems
Stanford University
Stanford, California 94305
peot@rpal.rockwell.com

In this paper, we present techniques for automatically deciding whether threats should be resolved during partial-order planning, or delayed until planning is otherwise complete. In particular, we show that certain threats can be provably delayed until the end; that is, if the planner can find a plan for the goal while ignoring these threats, there is a guarantee that the partial ordering in the resulting plan can be extended to eliminate the threats.

In Section 2, we construct operator graphs that capture the interaction between operators relevant to a goal and set of initial conditions. In Section 3, we develop theorems and decomposition rules that use the operator graph to decide when threats can be postponed. In Section 4 we discuss our experience with these techniques and related work. For purposes of this paper, we adopt a simple STRIPS model of action, and assume the SNLP model of planning. Many of the results and ideas can be applied to other causal-link planners such as (Kambhampati 1993, Tate 1977). Full proofs of the theorems appear in (Smith & Peot 1993).

2 Operator Graphs

Following (McAllester & Rosenblitt 1991), we define special Start and Finish operators for a problem:

Definition 1: The Start operator for a problem is defined as the operator having no preconditions, and having all of the initial conditions as effects. The Finish operator for a problem is defined as the operator having no effects, and having all of the goal clauses as preconditions.

Given these operators we construct an operator graph for a problem recursively, according to the following rules:

Definition 2: An operator graph for a problem is a directed bipartite graph consisting of precondition nodes and operator nodes such that:
1. There is an operator node for the Finish operator.
2.
If an operator node is in the graph, there is a precondition node in the graph for each precondition of the operator and a directed edge from the precondition node to the operator node.
3. If a precondition node is in the graph, there is an operator node in the graph for every operator with an effect that unifies with the precondition, and there is a directed edge from the operator node to the precondition node.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

To illustrate, consider the simple set of operators below (relations, operator names, and constants are capitalized; variables are lower case):

Shape(x)
  Prec's: Object(x), ¬Fastened(x, z)
  Effects: Shaped(x), ¬Drilled(x)
Drill(x)
  Prec's: Object(x), ¬Fastened(x, z)
  Effects: Drilled(x)
Bolt(x, y)
  Prec's: Drilled(x), Drilled(y)
  Effects: Fastened(x, y)
Glue(x, y)
  Prec's: Object(x), ¬Fastened(x, z), Object(y), ¬Fastened(y, z)
  Effects: Fastened(x, y)

Suppose that the goal is Shaped(x) ∧ Shaped(y) ∧ Fastened(x, y), and the initial conditions are: Object(A), ¬Fastened(A, z), Object(B), ¬Fastened(B, z).

The operator graph for this problem is shown in Figure 1.

Figure 1: Operator graph for the simple machine shop problem. Fastened and Object have been abbreviated for clarity. Circled arcs represent a bundle of arcs.

Note that each operator appears at most once in the graph; but a clause such as Object(x) may appear more than once, if it appears more than once as a precondition. Note that the graph can also contain cycles. If the Bolt operator had an effect Drilled(z), there would be directed edges from Bolt(x, y) to both Drilled(x) and Drilled(y), forming loops with the directed edges from Drilled(x) and Drilled(y) to Bolt(x, y). In this paper we will only consider acyclic operator graphs. The basic results and techniques also apply to cyclic graphs, but the definitions and theorems are more complicated.
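Definition 2 translates directly into a worklist construction. The sketch below is ours and is propositional (the paper's example is first-order; unification is reduced to exact match here), with operators given as a map from name to (preconditions, effects):

```python
# Propositional sketch of Definition 2: build the bipartite operator graph
# backward from Finish, alternating operator nodes and precondition nodes.

def build_operator_graph(operators, initial, goals):
    ops = dict(operators)
    ops['Start'] = (frozenset(), frozenset(initial))
    ops['Finish'] = (frozenset(goals), frozenset())
    edges, agenda, seen = set(), ['Finish'], set()
    while agenda:
        op = agenda.pop()
        if op in seen:
            continue
        seen.add(op)
        for prec in ops[op][0]:
            edges.add((prec, op))              # precondition -> operator
            for other, (_, effects) in ops.items():
                if prec in effects:            # effect matches the precondition
                    edges.add((other, prec))   # operator -> precondition
                    agenda.append(other)
    return edges
```

For a one-operator domain (operator A achieves goal g from precondition p, with p initially true), the graph contains exactly the chain Start → p → A → g → Finish.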
The full theory is given in (Smith & Peot 1993). The operator graph tells us what operators are relevant to the goal, and also tells us something about the topology of partial plans that will be generated for the problem. In this example, it tells us that if the Bolt operation appears in the plan, at least one Drill operation will precede it in the plan. The operator graph has an implicit and/or structure to it. Predecessors of an operator node are ands, since all preconditions of the operator must be achieved. Predecessors of a precondition node are ors, since they correspond to different possible operators that could be used to achieve the precondition. In the example above, the Glue and Bolt operators are used for only one purpose, while the Drill and Shape operators are used for more than one purpose. This information is important for our analysis of operator threats. We therefore introduce the following notion:

Definition 3: The use count of a node in the graph is defined as the number of directed paths from the node to Finish.

The use count of an operator is an upper bound on the number of times the operator could appear in a partial plan. It can be infinite for graphs with cycles.

2.2 Threats

So far, operator graphs only tell us which subgoals and operators may be useful for a problem.

Definition 4: Let O be an operator node, and P be a precondition node in an operator graph. If some effect of O unifies with the negation of P, we say that O threatens P and denote this by O ⊗ P.

The threats for our example are shown in Figure 2.

Figure 2: Operator graph with threats (heavy lines).

2.3 Eliminating Threats

Not all threats in the operator graph are important. Some of them will never actually occur during planning. In particular, consider the threat from Start to Fastened(x, y). The initial conditions operator always precedes all other operators in a plan. As a result, there is no possibility that Start can ever clobber an effect produced by another operator.
Therefore we can eliminate these threats from the graph.

Theorem 1: Threats emanating from Start can be eliminated.

A related class of threats are those between operators and preconditions that are successors of each other. Consider the threat from Glue(x, y) to its precondition ¬Fastened(x, z). This says that gluing clobbers its precondition. This is not a problem, since the clobbering follows the consumption of the precondition. As a result, this threat can be eliminated. Similar arguments can be made for the threat from Bolt(x, y) to the ¬Fastened(x, z) precondition of Drill(x). Note that our arguments rely on the fact that Glue and Bolt will only appear once in the final plan. If Glue(x, y) appeared more than once, there is a distinct possibility that one gluing operation might clobber the precondition of another gluing operation. As a result, we can only eliminate such threats when the use count of the operator is 1.

Theorem 2: Threats from an operator to any predecessor or successor in the graph can be eliminated if the threatening operator has use count 1.

A third source of superfluous threats are disjunctive branches in the operator graph. In our example, there are two different ways of achieving the subgoal Fastened(x, y): bolting and gluing. Only one of these two alternatives will appear in any given plan. As a result, we can ignore threats that go from one branch to the other. This means that the edges from Bolt(x, y) to the ¬Fastened preconditions of Glue(x, y), and from Glue(x, y) to the ¬Fastened(x, z) precondition of Drill(x), can be eliminated. As with Theorem 2, we need to consider use count. Suppose that there was a second subgoal of the form Fastened(x, y). The planner might choose Bolt(x, y) for one of these subgoals, and Glue(x, y) for the other. In this case, a threat between Bolt(x, y) and the ¬Fastened(x, y) precondition of Glue(x, y) could occur.
Theorem 3: If a threat is between two disjunctive branches in the operator graph, and the threatening operator has use count 1, the threat can be eliminated.

To decide if a threat is between two disjunctive branches, we need to look at the nearest common ancestor of the threatening operator and precondition. For the threat between Bolt(x, y) and the ¬Fastened(x, y) precondition of Glue(x, y), the nearest common ancestor is Fastened(x, y). Since this is a precondition node, the threat is between disjunctive branches. After applying Theorems 1, 2, and 3, there are only four remaining threats, as shown in Figure 3. These are the only possible threats that can actually arise during planning.

Figure 3: Threats remaining after the elimination theorems have been applied.

In Figure 3, consider the threats Shape(x) ⊗ Drilled(x) and Shape(x) ⊗ Drilled(y). These threats tell us that allowing Shape operations to occur between Drill and Bolt operations may cause problems. However, in considering the graph, we can see that there is an easy solution. If we add the ordering constraint that Shape operations must occur before Drill operations, both threats are eliminated. We could have the planner automatically add these constraints every time Shape or Drill operations were added to a partial plan. Although this strategy would work, it is more restrictive than necessary. In our example, if two different objects, A and B, are used for x and y, there would be two different Shape operations and two different Drill operations in the final plan. To get rid of the threats it would only be necessary that Shape(A) precede Drill(A) and Shape(B) precede Drill(B). The other potential threats go away by virtue of the different variable bindings for x and y. To avoid this over-commitment, it is better to postpone the threats between Shape(x) and Drilled(x).
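The use count of Definition 3 and the three elimination theorems can be sketched together. This is our illustration over a propositional graph of successor edges; the disjunctive-branch test of Theorem 3 (which needs the nearest common ancestor) is supplied by the caller:

```python
# Use count (number of directed paths to Finish) plus a filter applying
# Theorems 1-3 to a list of (operator, precondition-node) threats.

def reachable(edges, src, dst):
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(b for a, b in edges if a == node)
    return False

def make_use_count(edges):
    succ = {}
    for a, b in edges:
        succ.setdefault(a, []).append(b)
    memo = {}
    def use_count(node):
        if node == 'Finish':
            return 1
        if node not in memo:
            memo[node] = sum(use_count(n) for n in succ.get(node, []))
        return memo[node]
    return use_count

def eliminate_threats(threats, edges, is_disjunctive):
    use_count = make_use_count(edges)
    kept = []
    for op, prec in threats:
        if op == 'Start':
            continue                                  # Theorem 1
        if use_count(op) == 1 and (reachable(edges, op, prec)
                                   or reachable(edges, prec, op)):
            continue                                  # Theorem 2
        if use_count(op) == 1 and is_disjunctive(op, prec):
            continue                                  # Theorem 3
        kept.append((op, prec))
    return kept
```

In a toy graph where Drill feeds two Drilled preconditions of Bolt, Drill's use count is 2, so its threats survive the filter even against its own ancestors.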
No matter what plan is generated, we can always eliminate this threat later by imposing the necessary ordering constraints between Shape and Drill operations. The argument made above relies on two things:
1. There are ordering constraints that will resolve the threats.
2. The other potential threats do not interfere with these ordering constraints.
The first part of this argument is straightforward; in our case, demoting Shape before Drill did the trick, since it prevents Shape from occurring between Drill and Bolt. The second part of the argument is tougher. It requires showing that none of the possible resolutions of the remaining threats will prevent the ordering of Shape operations before Drill operations. To show this, we need to consider all possible ways that the planner might choose to resolve the remaining set of threats. First consider the threat Bolt(x, y) ⊗ ¬Fastened(x, y). Since Bolt cannot come before Start, demotion is not possible for this threat. However, promotion is possible, since Shape < Bolt is consistent with the operator graph. We therefore need to consider the possibility that this constraint might be added to the operator graph. (Separation is not possible in this case. Even if it were, separation adds no ordering constraints to the graph, and therefore does not concern us.) Next consider the threat Glue(x, y) ⊗ ¬Fastened(x, y). As before, demotion is not possible since Glue cannot come before Start. However, promotion is possible, since Shape < Glue is consistent with both the operator graph and the constraint Shape < Bolt. Since the addition of Shape < Glue and Shape < Bolt do not interfere with Shape < Drill, condition (2) is also satisfied. The general form of this argument is summarized in the following theorem:

Theorem 4: Let T be the set of threats in an operator graph, and let P be a subset of those threats.
The threats in P can be postponed if there is a set of ordering constraints that resolves the threats in P for every possible resolution of the remaining threats in T − P.

Proof Sketch: Suppose that SNLP ignores all threats corresponding to the threats P in the operator graph. Consider a final plan F produced by SNLP. Let R be the set of threats that were resolved in the construction of F. The threats in R are instances of the threats T − P in the operator graph. Thus there is some resolution of threats in T − P that corresponds to the resolution of R in the plan. By our hypothesis, there is some set of ordering constraints in the operator graph that resolves the threats in P, for each possible resolution of T − P. These ordering constraints will therefore resolve any instances of P ignored during the construction of F. As a result, there is an extension of the partial ordering of F that will resolve all of the postponed threats. Since F was an arbitrary plan, the theorem follows.

Corollary 5: Let T be the set of threats in an operator graph. The (entire) set of threats T can be postponed if there is a set of ordering constraints that resolves the threats in T.

In the machine shop example, we could use Corollary 5 to postpone the entire set of threats at once, since the three ordering constraints Shape < Drill, Shape < Glue, and Shape < Bolt resolve all four of the threats. In general, however, this corollary cannot be applied as frequently as Theorem 4. The reason is that this corollary requires the resolution of all threats by ordering constraints. There may be some threats that can only be resolved by separation during planning, or that cannot be resolved. In these cases Corollary 5 cannot be applied, but Theorem 4 may still allow us to postpone some subset of the threats. As an example, consider the operator graph shown in Figure 4.

Figure 4: Operator graph where only two of the three threats can be postponed.
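The Theorem 4 condition can be tested by brute force: enumerate the resolutions of T − P and look for one set of ordering constraints for P that stays consistent with all of them. This sketch is ours (operator names echo the machine-shop example; each threat is given as a list of candidate ordering edges), and, as Theorem 6 below warns, it is exponential in the number of threats:

```python
# Brute-force Theorem 4 check: one choice of ordering edges for the postponed
# threats P must keep the graph acyclic under every consistent resolution of
# the remaining threats.

from itertools import product

def acyclic(edges):
    succ = {}
    for a, b in edges:
        succ.setdefault(a, set()).add(b)
    WHITE, GREY, BLACK = 0, 1, 2
    color = {}
    def visit(n):
        color[n] = GREY
        for m in succ.get(n, ()):
            c = color.get(m, WHITE)
            if c == GREY or (c == WHITE and not visit(m)):
                return False
        color[n] = BLACK
        return True
    return all(visit(n) for n in succ if color.get(n, WHITE) == WHITE)

def can_postpone(base, p_options, rest_options):
    """p_options / rest_options: one list of candidate ordering edges per threat."""
    rest_resolutions = [r for r in product(*rest_options)
                        if acyclic(base | set(r))]
    for p_choice in product(*p_options):
        if all(acyclic(base | set(r) | set(p_choice)) for r in rest_resolutions):
            return True
    return False
```

With the example's constraints, postponing the Shape-vs-Drilled threat via Shape < Drill succeeds, but it fails if some remaining threat could force Drill < Shape.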
In this example, the two threats 2 ⊗ E and 3 ⊗ C can only be resolved by separation. However, the threat 2 ⊗ B can always be resolved by imposing the ordering constraint 2 < 1 (demotion). Since demotion is consistent with the only possible resolution of the remaining threats, the threat 2 ⊗ B can be postponed according to Theorem 4.

3.1 Over-constraining

The primary difficulty with applying Theorem 4 and Corollary 5 is that they both take time that is exponential in the number of threats being considered. In fact it can be shown that:

Theorem 6: Given a partial ordering and a set of threats, it is NP-complete to determine whether there exists an extension to the partial ordering that will resolve the threats.

The proof of this theorem (Kautz 1992) involves a reduction of 3-SAT clauses to a partial ordering and set of threats. The complete proof can be found in (Smith & Peot 1993).

Although the general problem of postponing threats is computationally hard, there are some special cases that are more tractable. The first technique that we consider involves over-constraining the operator graph. In particular, we simultaneously impose both promotion and demotion ordering constraints on the operator graph for all threats in the graph but one. We then check to see if there is an ordering constraint on the remaining threat (demotion or promotion) that is still possible in the over-constrained graph. If so, we know that the ordering constraint will work for all possible resolutions of the remaining threats, and we can therefore postpone the threat. More precisely:

Theorem 7: Let t be a threat in which operator O_t threatens a precondition produced by operator O_p and consumed by operator O_c, and augment the operator graph with the promotion and demotion ordering constraints that resolve every threat other than t. The threat t can be postponed if either:
1. O_t < O_p is consistent with the operator graph, and O_p ∉ Predecessors(O_t) in the augmented operator graph; or
2. O_c < O_t is consistent with the operator graph, and O_c ∉ Successors(O_t) in the augmented operator graph.
Proof Sketch: Let A be the set of augmentation edges added to the graph. Every consistent way of resolving the set of remaining threats corresponds to some subset of these constraints. Suppose that case 1 holds for the above theorem. Since O_p ∉ Predecessors(O_t) in the augmented graph, we know that it will also hold for every subset of the augmentation edges. As a result, we know that this condition holds for every possible way of resolving the remaining threats in the graph. Furthermore, in a consistent graph, O_p ∉ Predecessors(O_t) implies that t can be resolved by demotion. As a result, Theorem 4 says that t can be postponed. The argument for case 2 is analogous.

To see how this theorem applies, consider the threat Glue(x, y) ⊗ ¬Fastened(x, z) in Figure 3. To see if this threat can be postponed, we need to augment the operator graph with all ordering constraints that resolve the remaining three threats. For the threat Bolt(x, y) ⊗ ¬Fastened(x, z) we need to add the edge Shape(x) → Bolt(x, y) to the graph, since it is the only way of resolving the threat. For the other two threats, Shape(x) ⊗ Drilled(x) and Shape(x) ⊗ Drilled(y), we need to add the two edges Shape(x) → Drill(x) and Bolt(x, y) → Shape(x). The resulting graph is shown in Figure 5.

Figure 5: Machine shop example with over-constrained threats. Partial-ordering constraints are shown as grey arrows.

Now consider the two possibilities for resolving Glue(x, y) ⊗ ¬Fastened(x, z). Glue cannot be ordered before Start in the operator graph, so case 1 is out. Glue can be ordered after Shape, however, so we need to consider case 2. In the augmented graph, the only successor of Glue is Finish. Since Shape is not in this set, the second condition is satisfied. Therefore we can postpone the threat Glue(x, y) ⊗ ¬Fastened(x, z). Theorem 7 can also be used to show that each of the remaining threats in the machine shop problem can be postponed.
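The two conditions of the over-constraining test reduce to reachability queries on the augmented graph, which is what makes the check fast. A sketch (our encoding; node names follow the machine-shop example):

```python
# Over-constraining check: augment with both promotion and demotion edges for
# every threat except t, then test the two conditions by graph reachability.

def reachable(edges, src, dst):
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(b for a, b in edges if a == node)
    return False

def overconstraining_postponable(base, augmentation, o_t, o_p, o_c):
    aug = base | augmentation
    demotion_ok = (not reachable(base, o_p, o_t)      # O_t < O_p consistent
                   and not reachable(aug, o_p, o_t))  # O_p not a predecessor of O_t
    promotion_ok = (not reachable(base, o_t, o_c)     # O_c < O_t consistent
                    and not reachable(aug, o_t, o_c)) # O_c not a successor of O_t
    return demotion_ok or promotion_ok
```

Encoding the worked example (Glue threatened link produced by Start, consumed by Shape; augmentation edges Shape → Bolt, Shape → Drill, Bolt → Shape), the Glue threat is found postponable via the promotion case, just as in the text.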
More generally, Theorem 7 can be applied in a serial fashion: after one threat is postponed, it does not need to be considered in the analysis of subsequent threats.

3.2 Threat Blocks

Although Theorem 7 is considerably weaker than Theorem 4, it can be applied in time that is linear in the size of the operator graph. As a result, it can often be used to quickly eliminate many of the easiest threats from consideration. Unfortunately, there are some sets of threats where the full power of Theorem 4 is still needed. Consider the graph shown in Figure 6.

Figure 6: Threat graph with difficult threats.

The four threats shown in the top half of the graph can be resolved and postponed using Theorem 4 but not Theorem 7. The set of threats in the bottom half of the graph cannot be resolved using only ordering constraints, and therefore cannot be postponed. In a case like this, the top and bottom halves of the graph are independent, and we should be able to examine the threats in the two halves separately. To do this we first need some definitions. We define a block as a subset of an operator graph having a common beginning and ending. More precisely:

Definition 5: Let Begin and End be two operators such that Begin is a predecessor of End in the operator graph. A block is a subset of the operator graph (ignoring threats) such that, for each node N in the block:
1. Begin occurs on all paths from Start to N.
2. End occurs on all paths from N to Finish.
3. Every node and edge on a path from Begin to N is in the block.
4. Every node and edge on a path from N to End is in the block.

In the graph above, each of the four branches constitutes a block. Any two or three of these branches also constitute a block.

Definition 6: A threat block is a block where all threats that touch any node in the block are contained completely within the block.
A threat block is minimal if no subset of the block is a threat block. According to this definition, there are two minimal threat blocks in the above graph, one containing the top two branches and one containing the bottom two branches. Theorems 4 and 7 can now be extended to threat blocks. We restate Theorem 4 for threat blocks.

Theorem 8: Let T be the set of threats in a minimal threat block and let P be a subset of those threats. The threats in P can be postponed if there is a set of ordering constraints that resolves the threats in P for every possible resolution of the remaining threats in T - P.

Proof Sketch: Consider the set of threats not in the threat block. If we consider every possible way of resolving these outside threats, it is easy to see that the resulting ordering constraints can have no impact on any ordering decisions within the block. Thus, if the conditions of Theorem 8 hold, we can expand the set T to include the threats outside the block and Theorem 4 will apply.

Using this theorem, we could examine and postpone the threats in the top half of the graph of Figure 6.

It is relatively easy to find minimal threat blocks. We start with one threat, and find the common descendents and ancestors of both ends of the threat. If other threats are encountered in the process, we include the endpoints of these new threats in our search for a common ancestor and descendent. With pre-numbering of the graph, this process can be done in time linear in the size of the graph.

4.1 Implementation

In our current implementation, we first attempt to eliminate as many individual threats as possible using Theorem 7. After this, we construct minimal threat blocks for the remaining threats. We then use Corollary 5 on each individual threat block. We have not yet implemented the more powerful Theorems 4 or 8, but expect to apply them only after other more tractable alternatives have failed. Our preliminary testing indicates several things:

1. The number of threats that can be postponed varies widely across problems and domains. As we would expect, many more threats can be postponed in domains with loosely-coupled operators. The techniques do little to help highly recursive sets of operators.

2. The time taken to build an operator graph and analyze threats is computationally insignificant in comparison to the time required to do planning. For non-trivial planning problems, this time is less than 10% of planning time, and is often much smaller than that.

Our experience suggests that the speed of these procedures is not a concern and that even the use of Theorem 8 on threat blocks will probably not cause serious computational problems.

4.2 Related Work

Both (Etzioni 1993) and (Knoblock 1990, 1991) have proposed goal ordering mechanisms to reduce the number of threats that arise during planning. In particular, Etzioni and Knoblock construct and analyze graphs similar to the operator graphs developed here. Etzioni derives goal-ordering rules from this graph, while Knoblock constructs predicate hierarchies to guide a hierarchical planner. Unfortunately, both of these systems were developed for a total-order planner. In a total-order planner the order in which goals are processed affects the ordering of actions in the plan. This, in turn, determines the presence or absence of threats in the plan. In contrast, for partial-order planning, the order in which goals are processed does not determine the ordering of actions within the plan. As a consequence, goal ordering does not affect the presence or absence of threats in the plan, and cannot be used to help reduce threats. Although goal ordering can be used to reduce search in partial-order planning (Smith 1988, Smith & Peot 1992), it cannot be used to reduce the number of threats. A more detailed critique of Knoblock's technique can be found in (Smith & Peot 1992).

4.3 Extensions

Originally, we thought it was possible to use local analysis techniques to postpone many threats. However, all of our conjectures in this area have proven false. The one area that we think still holds promise is division into threat blocks. We think that there may be criteria that will allow threats to be broken up into smaller blocks.

Another approach that we think holds promise is variable analysis in the operator graph (Etzioni 1993). By a careful analysis of variable bindings in the operator graph, it is often possible to eliminate many phantom threats from the graph. This, in turn, makes it more likely that other threats can be postponed. There are other possibilities for analysis of the operator graph, including analysis of potential loops. Here, the recognition and elimination of unnecessary loops among the operators can allow the postponement of additional threats. Some of these possibilities are discussed in (Smith & Peot 1993).

4.4 Final Remarks

The techniques developed in this paper have a direct impact on the efficiency of the planning process. Whenever possible, they separate the task of selecting actions from the task of ordering or scheduling those actions. This is a natural extension of the least-commitment strategy inherent to partial-order planning. But perhaps as important as threat postponement is the ability to recognize threats that are difficult to resolve. If a block of threats cannot be postponed, the planner should pay attention to those threats early. This information could be used to help the planner avoid partial plans with difficult threat blocks. It could also be used to help determine the order in which to work on open conditions. In particular, if the planner is faced with a difficult threat block it should probably generate and resolve that portion of the plan early. In our experience, both the choice of partial plan and the choice of open condition can dramatically influence the performance of a planner. For this reason, information about difficult threat blocks could make a significant difference.
Acknowledgments

The idea of analyzing threats in the operator graph was motivated by the work of Craig Knoblock (Knoblock 1990, 1991). Thanks to Mark Drummond, Steve Minton, Craig Knoblock, and Oren Etzioni for comments and discussion. Thanks to Henry Kautz and David McAllester for help with the NP-completeness result. This work is supported by DARPA contract F30602-91-C-0031.

References

Barrett, A., and Weld, D. 1993. Partial-Order Planning: Evaluating Possible Efficiency Gains, Technical Report 92-05-01, Dept. of Computer Science, University of Washington.
Collins, G., and Pryor, L. 1992. Representation and Performance in a Partial Order Planner, Technical Report 35, The Institute for the Learning Sciences, Northwestern University.
Etzioni, O. 1993. Acquiring Search-Control Knowledge via Static Analysis, Artificial Intelligence, to appear.
Harvey, W. 1993. Deferring Threat Resolution Retains Systematicity, Technical Note, Department of Computer Science, Stanford University.
Kambhampati, S. 1992. Characterizing Multi-Contributor Causal Structures for Planning, In Proceedings of the First International Conference on AI Planning Systems, College Park, Maryland, 116-125.
Kambhampati, S. 1993. On the Utility of Systematicity: Understanding Tradeoffs between Redundancy and Commitment in Partial-Ordering Planning, In Proceedings of the Thirteenth International Joint Conference on AI, Chambery, France.
Kautz, H. 1992. Personal communication.
Knoblock, C. 1990. Learning Abstraction Hierarchies for Problem Solving, In Proceedings of the Eighth National Conference on AI, Boston, MA, 923-928.
Knoblock, C. 1991. Automatically Generating Abstractions for Problem Solving, Technical Report CMU-CS-91-120, Dept. of Computer Science, Carnegie Mellon University.
McAllester, D., and Rosenblitt, D. 1991. Systematic Nonlinear Planning, In Proceedings of the Ninth National Conference on AI, Anaheim, CA, 634-639.
Penberthy, J. S., and Weld, D. 1992.
UCPOP: A Sound, Complete, Partial Order Planner for ADL, In Proceedings of the Third International Conference on Knowledge Representation and Reasoning, Cambridge, MA.
Peot, M., and Smith, D. 1992. Conditional Nonlinear Planning, In Proceedings of the First International Conference on AI Planning Systems, College Park, MD, 189-197.
Peot, M., and Smith, D. 1993. Threat-Removal Strategies for Partial-Order Planning, In Proceedings of the Eleventh National Conference on AI, Washington, D.C.
Smith, D. 1988. A Decision Theoretic Approach to the Control of Planning Search, Technical Report 87-11, Stanford Logic Group, Department of Computer Science, Stanford University.
Smith, D., and Peot, M. 1992. A Critical Look at Knoblock's Hierarchy Mechanism, In Proceedings of the First International Conference on AI Planning Systems, College Park, Maryland, 307-308.
Smith, D., and Peot, M. 1993. Threat Analysis in Partial-Order Planning. Forthcoming.
Tate, A. 1977. Generating Project Networks, In Proceedings of the Fifth International Joint Conference on AI, Boston, MA, 888-893.
Yang, Q. 1992. A Theory of Conflict Resolution in Planning, Artificial Intelligence, 58:361-392.
PERMISSIVE PLANNING: A MACHINE LEARNING APPROACH TO LINKING INTERNAL AND EXTERNAL WORLDS

Gerald DeJong
dejong@cs.uiuc.edu
Computer Science / Beckman Institute
University of Illinois at Urbana/Champaign
405 N. Mathews, Urbana IL 61801

Abstract

Because complex real-world domains defy perfect formalization, real-world planners must be able to cope with incorrect domain knowledge. This paper offers a theoretical framework for permissive planning, a machine learning method for improving the real-world behavior of planners. Permissive planning aims to acquire techniques that tolerate the inevitable mismatch between the planner's internal beliefs and the external world. Unlike the reactive approach to this mismatch, permissive planning embraces projection. The method is both problem-independent and domain-independent. Unlike classical planning, permissive planning does not exclude real-world performance from the formal definition of planning.

Introduction

An important facet of AI planning is projection, the process by which a system anticipates attributes of a future world state from knowledge of an initial state and the intervening actions. A planner's projection ability is often flawed. A classical planner can prove goal achievement only to be thwarted by reality. The reactive approach, which has received much attention, avoids these problems by reducing reliance on projection or disallowing it altogether. For all its stimulating effect on the field, however, reactivity is only one path around projection problems. It is important to continue searching for and researching alternatives. In this paper we advance one such alternative, termed permissive planning. In some ways it is the dual of the reactive approach, relying heavily on a goal projection ability enhanced by machine learning. From a broader perspective, permissive planning embodies an approach to inference which integrates empirical observations into a traditional a priori domain axiomatization.
The research reported in this paper was carried out at the University of Illinois and was supported by the Office of Naval Research under grant N00014-91-J-1563. The authors also wish to thank Renee Baillargeon, Pat Hayes, Jon Gratch, and the anonymous reviewers for helpful comments.

508 DeJong

Scott Bennett
bennett@sra.com
Systems Research and Applications Corporation
2000 15th St. North, Arlington VA 22201

The real world is captured within an inference system by some description in a formal language such as the predicate calculus. The system's internal description may only approximate the external real world, giving rise to discrepancies between inferred and observed world behavior. In classical planning, the difficulties with projection can be traced to such a discrepancy, in this case a discrepancy between an action's internal definition (the one represented and reasoned about) and its external definition (the mapping enforced by the real world). To concentrate on the discrepancies of action definitions, we will assume in this paper that no difficulty is introduced by the system's sensing or state representation abilities. Then, for simplicity in our figures, we can employ a single universe of states to denote both internal and external sets of states. Figure 1 illustrates a difference between a plan's projected and actual mappings from an initial state.

Figure 1: An Action Sequence, Projected and Actual Mappings

The dot labeled initial state represents both a particular configuration of the world (which we call the external state) and the system's formal description of it (the internal state). According to the system's internal model, the plan's action sequence transforms the initial state into a goal state. In the real world, however, the actual final state falls well outside the goal region. One might employ machine learning to improve the system's operator definitions.
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

The result would be a more faithful representation of their real-world effects. This would yield a more accurate projection ability, as illustrated in Figure 2.

Figure 2: Conventional Learning Adjusts Projected Mapping towards Actual Mapping
Figure 3: Permissive Planning Adjusts Actual Mapping towards Projected Mapping

Unfortunately, a trend toward increasingly correct operator definitions is necessarily also a trend towards more complex operator definitions. Increased complexity results in more work for the planner, which must rule out concomitantly more negative interactions. Indeed, operator complexity can grow unboundedly, as dictated by the qualification problem [McCarthy80]. With sufficient experience, this use of machine learning could paralyze any planner.

There is an alternative machine learning approach. When an apparently sound plan fails to achieve its goal in the real world, it may be possible to find a small alteration of the plan which tends not to influence the projected final state but moves the actual final state closer to the projected one. Instead of altering the system's representations to better fit the observed world (Figure 2), this approach alters the actions selected so the real-world effects better match the projected effects. This is illustrated in Figure 3. If such a plan can be found, the planner itself might be altered so that when presented with similar future problems, the planner will tend to produce a solution similar to the improved plan rather than repeating the mistakes of the original plan. Our approach is to alter systematically the planner in response to execution failures so the planner prefers not to employ its domain knowledge in ways that seem to lead to failure.
We call the approach "permissive" because it allows the planner to construct plans that work in the real world in spite of flaws in its domain knowledge.

Permissive Planning Principle: Blame the plan and adjust the planner in response to execution failures even though the implementor-supplied domain theory is known to be at fault.

In response to the need for breakfast, a planner may be able to formulate several acceptable action sequences. One results in pancakes, another in cold raisin bran cereal, still another in hot oatmeal, etc. Of course, most planners will not produce all possible plans. Planning activity typically ceases when the first acceptable solution is found. After constructing an acceptable plan for a hot oatmeal breakfast, the system should not waste effort in working out the details for cooking pancakes.

We call the set of all possible plans that a particular classical planner could produce in principle for a problem the competence set of the planner for that planning problem. We call the particular element of this set which is in fact constructed in response to the problem the performance of the planner for that planning problem. We use the term planner bias to refer to the preference, no matter how it is realized, for the particular performance plan from the planner's competence set. By systematically altering a planner's bias in response to observed plan execution failures, the same planning competence can yield quite different planning performance, resulting in improved real-world behavior. The permissive adjustment of a planner's bias so as to improve the real-world success of its performance behavior can be seen as applying the permissive planning principle above: the planner is blamed and its bias adjusted even though the offending projection failures are due to inaccurate operator definitions.

In this view of planning, actions may have different internal and external effects.
We will say that an action sequence ISolves a planning problem if the goal holds in the projection of the initial state through the sequence. The sequence ESolves the problem if the goal is achieved in the real world by executing the sequence from the initial state.

In the planning literature, it is not uncommon to view a plan as a collection of constraints [Chapman87]. We subscribe to this notion but carry it a bit further. For us, a plan for a planning problem is any set of constraints which individuates an action sequence such that the action sequence ISolves the planning problem. The accepted nonlinear view (e.g., [Chapman87, Sacerdoti75, Wilkins88]) is similar but does not require the individuation of an action sequence. A typical non-linear planner imposes constraints only until all action sequences consistent with the constraints are guaranteed to reach a goal state (i.e., each ISolves the planning problem). Stopping here allows a notion of minimal planning commitment. Remaining ambiguities are left for plan execution, when they are resolved in the most propitious manner available in the execution environment.

Our definition of a plan is more restrictive. We allow no ambiguities for later resolution. This requirement follows from our desire to adjust the planner's bias. We wish to incorporate the entire decision procedure resulting in an action sequence within the planner proper. Only then can the permissive adjustment procedure have access to the full bias reflected in the executable actions. We use the informal term partial plan to refer to a set of constraints which is intended to solve a particular planning problem but does not yet individuate an action sequence.

A planner (including its bias) is adequate if 1) whenever a plan is produced that ISolves a problem it also ESolves the problem, and 2) whenever the planner fails to find a plan, its competence set contains no plan whose action sequence ESolves the problem.
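The ISolves/ESolves distinction can be made concrete with a toy one-dimensional state. The sketch below is illustrative only: the "state" is just a gripper height, the internal model assumes exact motion, and the 10% undershoot in the external model is an invented stand-in for the imprecision the paper describes (the numbers 1.2, 1.4, and 2.5 echo the gantry example given later).

```python
def isolves(plan, initial, goal, project):
    """ISolves: the goal holds in the state *projected* through the plan."""
    state = initial
    for action in plan:
        state = project(action, state)   # planner's internal model
    return goal(state)

def esolves(plan, initial, goal, execute):
    """ESolves: the goal holds after *really executing* the plan."""
    state = initial
    for action in plan:
        state = execute(action, state)   # the external world
    return goal(state)

def project(action, height):
    # Internal model: motions are exact.
    return height + action[1] if action[0] == "up" else height - action[1]

def execute(action, height):
    # Assumed external world: motions undershoot by 10%.
    return height + 0.9 * action[1] if action[0] == "up" else height - 0.9 * action[1]

clears_obstacle = lambda height: height > 2.5
plan = [("up", 1.4)]
# Starting at height 1.2: the projection reaches 2.6 (ISolves),
# but real execution reaches only 2.46 (does not ESolve).
```

A planner that is adequate in the paper's sense would never emit this plan, since it ISolves but does not ESolve the problem.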
Finally, a planning computation is a finite sequence of decisions, D_i (i = 1, ..., n). Each decision selects a constraint, c_i, to be added to the partial plan from a (possibly infinite) set of alternatives {a_i,1, a_i,2, a_i,3, ...} entertained by the planner for that decision, so that c_i ∈ {a_i,1, a_i,2, a_i,3, ...}. The partial plan (which is initially the empty set of constraints) is augmented with the new decision's constraint, resulting in a (possibly) more restrictive constraint set. A planning computation is successful if, at its termination, there is exactly one distinct action sequence consistent with the set of constraints and that action sequence ISolves the planning problem.

Planning, in this framework, is repeatedly entertaining alternative constraint sets and, for each, selecting one constraint to be imposed on the partial plan. This is not to say that every planner must explicitly represent the alternative constraint sets. But every planner's behavior can be construed in this way. From this perspective, a planner's competence is determined by the sets of alternatives that the planner can entertain. A (possibly empty) subset of alternatives from each constraint set supports successful plan completion. The planner's competence is precisely the set of all possible successful plans given the entertained alternatives. A planner's performance, on the other hand, is determined by its particular choice at each point from among the subset of alternatives which support a successful computation continuation.

The Permissive Planning Algorithm

A planning bias is any strategy for designating a particular element from among sets of valid continuation alternatives. Permissive adjustment of a planner is an empirically-driven search through the space of possible biases. Searching for an alternative bias is evoked whenever inadequate real-world planning behavior is observed.
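This empirically-driven search over biases can be sketched as a simple loop. Everything below is an assumption-laden toy: the `Bias` record, its knowledge-derived "promise" score, and the three hooks are hypothetical names, and returning a bias once it survives a fixed set of test problems simplifies the paper's step 3, which keeps planning with the current bias until a failure is observed.

```python
from collections import namedtuple

# Hypothetical bias representation: a name plus a knowledge-derived
# "promise" score used to pick the next candidate.
Bias = namedtuple("Bias", ["name", "promise"])

def permissive_adjustment(bias_space, solve_with, executes_ok, prune):
    """Sketch of the bias-space search. Assumed hooks:
    solve_with(bias) yields the plans produced under a bias,
    executes_ok(plan) executes one plan in the world and reports success,
    prune(candidates, bias) removes biases that domain knowledge proves
    inadequate after a failure (at minimum, `bias` itself)."""
    candidates = set(bias_space)                         # initialize candidates
    while candidates:
        bias = max(candidates, key=lambda b: b.promise)  # knowledge-guided choice
        if all(executes_ok(plan) for plan in solve_with(bias)):
            return bias                                  # survived its test problems
        candidates = prune(candidates, bias)             # drop provably bad biases
    raise RuntimeError("no adequate bias exists")        # hard failure
```

The pruning hook is where the qualitative domain knowledge discussed below earns its keep: the more inadequate biases it can rule out per observed failure, the shorter the search.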
In practice, the bias space is extremely large and the permissive planning search must be strongly guided by domain knowledge. If such domain knowledge is unavailable, the algorithm continues to have its formal properties. However, we believe that the practical ease with which suitable domain knowledge can be formulated will largely govern when the permissive planning approach will be useful. The permissive planning algorithm:

1. Initialize Candidate-Bias-Set to the space of all biases.
2. Using domain knowledge, select an element from the Candidate-Bias-Set; call it Current-Bias.
3. Solve planning problems using Current-Bias. If an executed plan fails to achieve its goal, go to 4.
4. Delete all biases from Candidate-Bias-Set that are provably inadequate using domain knowledge (including at least Current-Bias).
5. If Candidate-Bias-Set is not empty, go to 2.
6. FAIL; no adequate bias exists.

As will become clear in the example, domain knowledge, in the form of qualitative information relating operators' effects to their arguments, can substantially increase the efficiency of permissive planning by guiding the selection of a promising bias in step 2 and increasing the number of untried but guaranteed inadequate biases rejected in step 4. It can be easily proven that when the algorithm terminates, an adequate planner has been produced. Further, it can be proven that the algorithm will terminate so long as the bias space possesses some modest properties. On advice from an anonymous reviewer these proofs have been deleted in favor of more substantial discussions of other issues.

An Example of Permissive Planning

Suppose we have a two-dimensional gantry-type robot arm whose task is to move past an obstacle to position itself above a target (see Figure 4).

Figure 4: Goal to Move Past an Obstacle
Figure 5: Collision Observed During Execution

The operators are LEFT, RIGHT, OPEN, CLOSE, UP, and DOWN, which each take an amount as an argument.
The obstacle is 2.5 units high, the tip of the hand is 1.2 units above the table, and the front face of the target is 4.201 units from the left finger. The planner's native bias results in a plan individuating the action sequence: UP(1.4) RIGHT(4.2). Figure 6 shows the alternatives entertained. Shaded alternatives indicate the competence selections given the constraints already adopted. The performance selection for each decision is outlined in a solid white box. The first decision selects UP. The second decision selects a value for the parameter to UP. Any value greater than 1.3 up to the ceiling is possible according to the system's internal world. The value 1.4 is selected. Finally, RIGHT is selected with an argument of 4.2.

During execution a collision is detected. Permissive processing is invoked, which explains the most likely collision in qualitative terms. The problem deemed most likely is the one requiring the least distortion of the internal projection. In this case, the most likely collision is with the obstacle block; the height of the gripper fingers is judged too low for the height of the obstacle, as shown in Figure 5. The planning decisions are examined in turn for alternative competence set elements which can qualitatively reduce the diagnosed error. In this case it amounts to looking for alternatives that, according to the operators' qualitative descriptions, increase the distance between the finger tip and the top of the obstacle. Decision 2 is a candidate for adjustment. Higher values for decision 2 do not influence the projected goal achievement in the internal world but appear qualitatively to improve avoidance of the observed collision. The resulting plan is generalized in standard EBL fashion [DeJong86], resulting in a new schema which might be called REACH-OVER-OBSTACLE.
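The effect of adjusting decision 2 can be sketched numerically. The obstacle height (2.5), fingertip height (1.2), and native choice of 1.4 come from the example; the ceiling value, the tenth-of-a-unit discretization, and the 15% undershoot in the external model are invented for illustration, and `choose_up` is a hypothetical name rather than anything in the paper's system.

```python
# Constants: obstacle and fingertip heights are from the example;
# the ceiling and the 15% undershoot are assumptions.
OBSTACLE, FINGERTIP, CEILING = 2.5, 1.2, 5.0

def internal_ok(up_amount):
    """Planner's projection: motion is exact, so any UP amount
    greater than 1.3 clears the obstacle."""
    return FINGERTIP + up_amount > OBSTACLE

def external_ok(up_amount):
    """Assumed external world: the arm undershoots by 15%."""
    return FINGERTIP + 0.85 * up_amount > OBSTACLE

def choose_up(bias):
    """Decision 2: pick the UP argument from the internally feasible
    values. The native bias takes the smallest sufficient lift; the
    permissively adjusted bias (as in REACH-OVER-OBSTACLE) takes the
    highest value consistent with the internal model."""
    candidates = [i / 10 for i in range(1, int(CEILING * 10) + 1)]
    feasible = [a for a in candidates if internal_ok(a)]
    return min(feasible) if bias == "native" else max(feasible)
```

Under the native bias the chosen lift of 1.4 passes the internal check but collides in the external model; the adjusted bias retreats to the ceiling, which clears the obstacle in both models without affecting projected goal achievement.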
It embodies specialized bias knowledge that, in the context of this schema, the highest possible value consistent with the internal world model should be selected as the parameter for the UP action. The new performance choice is illustrated in Figure 6 by a dashed white box.

Figure 6: Plan Computation. The four decisions are 1) First Operator, 2) Argument for UP, 3) Second Operator, 4) Argument for RIGHT. Marked are the competence selections (given prior constraints), the performance selection before permissive adjustment, and the performance selection after permissive adjustment.

From now on, the robot will retreat to the ceiling when reaching over an obstacle. If other failures are encountered, permissive planning would once again be invoked, resulting in additional or alternative refinements. If no further refinement can be constructed, the schema forces a hard planning failure; none of the elements of the system's performance set is empirically adequate. Although the internal model supports solutions, this class of planning problems cannot be reliably solved in the external world. Any adequate planner must fail to offer a plan for such a class of planning problems.

Bias Space

What constitutes a bias space and how does one go about selecting a particular bias? These are important practical questions. If there is no convenient way to construct a searchable bias space, then permissive planning is of little consequence. The required theoretical properties are modest and do not significantly restrict what can and cannot serve as a bias space. In fact, it is quite easy to construct an effectively searchable bias space. We employ such a method in the example above and in the GRASPER system, based on problem-solving schemata (generalized macro-operators). Each schema represents a parameterized solution to a class of planning problems (like REACH-OVER-OBSTACLE). When the planner is given a new planning problem, the schema library is examined first. If no schema is relevant, a standard searching planner is applied to the problem. If a schema is found, the schema specifies how the problem is to be dealt with, and the native searching planner is not invoked. Thus, the schema library acts as a variable sieve, intercepting some planning problems while letting the native planner deal with the rest.

One practical difficulty with this method of bias adjustment is the utility problem [Minton88]. However, this is a separate issue from planner adequacy. Furthermore, recent research [Gratch92, Greiner92] has shown effective methods for attacking the utility problem that are consistent with this view of permissive planning.

Empirical Evidence

We have implemented a permissive planner, called GRASPER, and tested its planning on two domains using a real-world robot manipulator. Here we summarize the results of two experiments. Readers are referred to [Bennett93] for details of the system and the experimental domains.

Experiment 1. The task is to grasp and lift designated objects from the table with the gripper even though the gripper's actual movements are only imprecisely captured by the system's operator knowledge. Objects are known only by their silhouette as sensed by an overhead television camera. This experiment consists of an experimental and a control condition. Twelve plastic pieces of a children's puzzle were each assigned a random position and orientation within the robot's working envelope. Pieces were successively positioned as assigned on the table. For each, the robot performed a single grasp attempt. In the experimental condition permissive planning was employed; in the control condition permissive planning was turned off. The results are summarized in Figure 7A. In the control condition only two of the twelve attempted grasps succeeded. In the experimental condition ten of the twelve attempts succeeded. Failures due to three-dimensional motion of the target, which cannot be correctly sensed by the robot, were excluded.
One bias adjustment was sufficient to preclude recurrences of each of the two observed types of grasping failure. The two bias adjustments can be interpreted as 1) preferring to open the gripper as wide as possible prior to approaching the target, and 2) selecting opposing sides for grasp points that are maximally parallel. Other permissive adjustments that have been exhibited by the system in this domain include closing the gripper more tightly than is deemed necessary and selecting grasp points as close as possible to the target's center of geometry.

Experiment 2. Details of this task domain are borrowed from Christiansen [Christiansen90]. It is a laboratory approximation to orienting parts for manufacturing: a tray is tilted to orient a rectangular object free to slide between the tray's sides. Christiansen employed the domain to investigate stochastic planning, to which we compare permissive planning. The tray is divided into nine identical regions. The task is to achieve a desired orientation (either vertical or horizontal) of the rectangular object in a specified region.

Figure 7: Effect of Permissive Adjustment on Grasp Success (by trial; control condition: no permissive adjustment; experimental condition: permissive adjustment enabled)

We compared 1-step permissive plans to 1- and 3-step optimal stochastic plans. The optimal stochastic plans were generated using the technique described in [Christiansen90]. In the experiment, a sequence of 52 block orientation problems was repeatedly given to the permissive planner 20 times (1040 planning problems in all). Figure 8 shows the improvement in average success rate in the course of the 20 repetitions.

Figure 8: Average Success Rates over 20 Repetitions for 1-Step Permissive vs. 1- and 3-Step Stochastic Plans

Each data point is the average success over the 52 problems. Success rate increased from about 40% to approximately 80%.
The final 1-step permissive performance approaches the 3-step stochastic performance, but requires fewer training examples.

In this paper we have described our efforts toward formalizing permissive planning. Prior descriptions [Bennett91] have discussed our motivations and how these motivations have guided the implementation of experimental systems. We feel that we now understand the reasons behind the success of our implementations sufficiently to offer a beginning formal account of why our experimental systems work. Our notion of bias was inspired by the use of the same term in machine learning [Utgoff86]. However, the bias referred to in permissive planning is quite separate from the bias of its underlying machine learning system. Likewise, our competence/performance distinction for planners is borrowed from a similar but fundamentally different notion in linguistics [Chomsky65]. In the current research literature the methods most similar to permissive planning trace their roots to dynamic programming [Bellman57], variously called reinforcement learning, temporal difference methods, and Q learning [92]. Like permissive planning, the motivation is to improve real-world goal achievement. Some approaches utilize prior operator knowledge. However, like dynamic programming, the goal specification is folded into the acquired knowledge; the improved system is not applicable to other goals without redoing the machine learning. Furthermore, like stochastic planning (but unlike permissive planning), a coarse discretization of continuous and fine-grained domain attributes is required. One interesting consequence of permissive planning concerns minimal commitment planning. Permissive planning rejects this apparently attractive and generally accepted planner design goal. The desire to make the fewest necessary planning commitments is motivated by theoretical elegance, planning efficiency, and discrepancies with the real world.
However, it denies access to a significant portion of the planning bias. The primary significance of this work, we believe, lies in combining the internal constraints of a planner's a priori domain axiomatization (i.e., its definition of operators) with the external constraints obtained from examining real-world outcomes of action sequences. The machine learning formalism for combining these internal and external world models is problem-independent and domain-independent, although some amount of domain training is introduced. The approach offers real-world robustness previously associated only with reactive interleaving of sensing and action decisions. The examples described here are modest. For each failure that initiates permissive planning, one could, after the fact, recraft the operator definitions. But this misses the point. After all of the axiom tuning a human implementor can endure, inaccuracies will still exist in a planner's world knowledge. As long as there are multiple ways of solving a problem, permissive planning can be employed to preferentially generate plans that work in the external world.

References

[Chomsky65] N. Chomsky, Aspects of the Theory of Syntax.
[Minton88] S. Minton, Learning Search Control Knowledge: An Explanation-Based Approach.
[Wilkins88] D. Wilkins, Practical Planning: Extending the Classical AI Planning Paradigm.
Relative Utility of EBG based Plan Reuse in Partial Ordering vs. Total Ordering Planning

Subbarao Kambhampati* and Jengchiu Chen
Department of Computer Science and Engineering
Arizona State University, Tempe, AZ 85287-3406
Email: rao@asuvax.asu.edu

Abstract

This paper provides a systematic analysis of the relative utility of basing EBG based plan reuse techniques in partial ordering vs. total ordering planning frameworks. We separate the potential advantages into those related to storage compaction, and those related to the ability to exploit stored plans. We observe that the storage compactions provided by partially ordered partially instantiated plans can, to a large extent, be exploited regardless of the underlying planner. We argue that it is in the ability to exploit stored plans during planning that partial ordering planners have some distinct advantages. In particular, to be able to flexibly reuse and extend the retrieved plans, a planner needs the ability to arbitrarily and efficiently "splice in" new steps and sub-plans into the retrieved plan. This is where partial ordering planners, with their least-commitment strategy and flexible plan representations, score significantly over state-based planners as well as planners that search in the space of totally ordered plans. We will clarify and support this hypothesis through an empirical study of three planners and two reuse strategies.

1. Introduction

Most work in learning to improve planning performance through EBG (explanation based generalization) based plan reuse has concentrated almost exclusively on state-based planners (i.e., planners which search in the space of world states, and produce totally ordered plans; [3, 14, 11, 20]). In contrast, the common wisdom in the planning community (vindicated to a large extent by the recent formal and empirical evaluations [1, 12, 15]) has held that search in the space of plans, especially in the space of partially ordered plans, provides a more flexible and efficient means of plan generation.
It is natural to enquire, therefore, whether partial order (PO) planning retains its advantages in the context of EBG based plan reuse. In our previous work [8], we have shown that explanation-based generalization techniques can indeed be extended in a systematic fashion to partially ordered partially instantiated plans, to give rise to a spectrum of generalization strategies. In this paper, we will address the issue of the comparative advantages of doing EBG based plan reuse in a partial order planning framework. We will do this by separating two related but distinct considerations: the advantages of storing plans as partially ordered and partially instantiated generalizations, and the advantages of using the stored generalizations in the context of a PO planning framework. Storing plans in a partially ordered and partially instantiated form allows for compactness of storage, as well as more flexible editing operations at retrieval time. We will point out, however, that these advantages can be exploited whether the underlying planner is a PO planner or a total ordering planner. We will argue that it is in the ability to use the generalized plans during planning that partial ordering planners have some distinct advantages over total ordering planners. In particular, to be able to flexibly reuse and extend the retrieved plans (when they are only partially relevant in the new problem situation), the planner needs to be able to arbitrarily and efficiently "splice in" new steps and sub-plans into the retrieved macro (and vice versa). Partial ordering planners, with their least-commitment strategy and flexible plan representations, are more efficient than state-based planners as well as planners that search in the space of totally ordered plans, in doing this splicing.

*This research is supported by the National Science Foundation under grant IRI-9210997.
We argue that in many plan reuse situations, this capability significantly enhances their ability to exploit stored plans to improve planning performance. We will support our arguments through focused experimentation with three different planners and two different reuse strategies. The rest of the paper is organized as follows: the next section provides a brief characterization of different planners in terms of how they refine plans during search. Section 3. uses this background to characterize the advantages of partial order planners in exploiting stored plan generalizations to improve performance. Section 4.1. describes an empirical study to evaluate the hypotheses regarding the comparative advantages of PO planning. Section 4.2. presents and analyzes the results of the study. Section 5. argues in favor of basing plan reuse and other speedup learning research in partial order planning, and clears up some misconceptions regarding PO planning which seem to have inhibited this in the past. Section 6. concludes with a summary of contributions. All through this paper, we shall refer to stored plan generalizations as macros, regardless of whether they get reused as macro-operators, or serve as a basis for library-based (case-based) plan reuse. We also use the terms "efficiency" and "performance" interchangeably to refer to the speed with which a planner solves a problem.

2. A Characterization of Planners in terms of allowable plan refinements

Whatever the exact nature of the planner, the ultimate aim of planning is to find a ground operator sequence which is a solution to the given problem (i.e., when executed in a given initial state, will take the agent to a desired goal state). From a first principles perspective, the objective of planning is to navigate this space, armed with the problem specification, and find the operator sequences that are solutions for the given problem. Suppose the domain contains three ground actions a1, a2 and a3.
The regular expression {a1|a2|a3}* describes the potential solution space for this domain. If we are interested in refinement planners (i.e., planners which add but do not retract operators and constraints from a partial plan during planning), which most planners are, then the planner's navigation through the space of potential solutions can be enumerated as a directed acyclic graph (DAG), as illustrated in Figure 1. When a refinement planner reaches an operator sequence that is not a solution, it will try to refine the sequence further (by adding more operators) in the hopes of making it a solution. Different types of planners allow different types of transitions in the DAG. For example, planners such as STRIPS and PRODIGY that do forward search in the space of world states will only add operators to the end of the partial solution during refinement, and thus only transition via the solid lines in Figure 1.¹ Most planners used in learning research to date fall in this category. On the other hand, planners which do backward search in the space of world states will only add new operators to the beginning of the partial solution during refinement, and thus allow the transitions shown in dashed lines.

¹Notice that the so-called linearity assumption, which specifies whether the planner manages its list of outstanding goals as a stack or an arbitrary list, has no effect on this. In particular, both PRODIGY, which makes the linearity assumption, and its extension NOLIMIT [19], which doesn't (and thus allows interleaving of subgoals), are capable only of refining a partial plan by adding operators to the end of the current plan.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Figure 1: Characterization of refinements allowed by various planning strategies (see text)
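The three transition regimes can be made concrete with a small sketch (a toy representation of our own, not code from the paper), where a partial solution is simply a sequence of operator names:

```python
# Sketch of the three refinement regimes discussed above (illustrative only;
# function names and the list-of-names representation are our own).

def forward_refinements(prefix, operators):
    """State-space forward search: new operators go only at the end."""
    return [prefix + [op] for op in operators]

def backward_refinements(suffix, operators):
    """State-space backward search: new operators go only at the beginning."""
    return [[op] + suffix for op in operators]

def plan_space_refinements(plan, operators):
    """Plan-space search: a new operator may be inserted anywhere,
    including in the middle of the existing partial solution."""
    return [plan[:i] + [op] + plan[i:]
            for op in operators
            for i in range(len(plan) + 1)]

ops = ["a1", "a2", "a3"]
print(len(forward_refinements(["a1"], ops)))     # 3 successors
print(len(plan_space_refinements(["a1"], ops)))  # 6: 3 operators x 2 positions
```

Every forward (or backward) refinement is also a plan-space refinement, which is why the plan-space DAG subsumes the solid and dashed transitions of Figure 1.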
Finally, planners which search in the space of plans allow new operators to be added anywhere in the partial solution, including in the middle of the existing plan, and thus allow all of the refinements shown in the figure. All the planners we discussed above can be called total ordering planners, in that the partial plan they maintain during their search is always a totally ordered sequence of operators. However, planners searching in the space of plans have the option to either search in the space of totally ordered plans, or in the space of partially ordered (PO) plans. Many current-day planners such as NOAH, NONLIN and SIPE belong to the latter class, called partial order or PO planners.² PO planners are generally more efficient, as they avoid premature commitment to inter-operator orderings, thereby improving efficiency over corresponding planners that search in the space of totally ordered plans [1, 15].

3. Advantages of Partial Order Planning in Plan Reuse

3.1. Storage Compaction

A PO plan provides a compact representation for the possibly exponential number of its underlying linearizations by specifying just the steps, the partial ordering between steps, and the codesignation and non-codesignation constraints between the variables. This flexible plan representation allows for a spectrum of order, precondition and structure generalizations. Our previous work [8] provides a systematic basis for generating these generalizations. Storing plans in PO form also allows for more sophisticated editing operations at retrieval time, when the macro is only partly applicable. Specifically, any irrelevant steps and constraints of the plan can be edited out by retracting the corresponding planning decisions. The retraction itself can be facilitated by justifying individual planning decisions in terms of the plan causal structures. Once such a justification framework is in place, the retraction of irrelevant constraints can be accomplished with the help of a polynomial time greedy algorithm (cf.
[5, 8]). However, all the advantages of storage compaction and plan editing will hold whether the underlying planner generates totally ordered or partially ordered (PO) plans. For example, the generalization techniques described in our previous work on EBG for PO plans [8] can be used whether the plan was initially produced by a partial ordering or a total ordering planner. Similarly, even in reuse frameworks based on total ordering planners (e.g. [20, 18]), order generalization has been used as a way to separate independent parts of the plan and store them separately, thereby containing the proliferation of macros by reducing the redundancy among them. In other words, although storage considerations motivate the use of PO plan representation during plan reuse, they do not necessarily argue for the use of PO planning.

²Partial order planners have also been called nonlinear planners. We prefer the former term since the latter gives the misleading impression that partial order planning is related to the linearity assumption. In fact, as we mentioned earlier, the linearity assumption is concerned with the order in which different goals are attacked, and can be used in any planner. The linearity assumption causes incompleteness in planners that search in the space of world states (such as STRIPS and PRODIGY), but does not affect completeness in any way in planners that search in the space of plans.

ART-IND:    (A_i   prec: I_i   add: G_i)
ART-MD:     (A_i   prec: I_i   add: G_i   del: {I_j | j < i})
ART-MD-NS:  (A1_i  prec: I_i   add: P_i   del: {I_j | j < i})
            (A2_i  prec: P_i   add: G_i   del: {I_j | all j} ∪ {P_j | j < i})

Figure 2: The specification of Weld et al.'s synthetic domains

3.2.
Ability to exploit stored plans during plan reuse

In this section, we argue that the real utility of using partial order planning when doing EBG based plan reuse is that it provides a flexible and efficient ability to interleave the stored plans with new operators, thereby significantly increasing the planner's ability to exploit stored plans. To understand this, we need to look at the various possible ways in which a stored plan can be extended during planning. When a macro is retrieved to be reused in a new problem situation, it will only be a partial match for the problem under consideration: (i) the macro may contain extraneous goals/constraints that are not relevant to the problem at hand; (ii) there may be some outstanding goals of the problem that the retrieved macro does not match. The first situation can be handled largely through the editing operations described earlier. In the second case, the planner may have to do some further planning work even after the macro is incorporated into the current plan. The way a planner extends the macro during planning critically affects its ability to reuse stored plans in new situations. This, in turn, depends upon whether the planner searches in the space of world states or plans (Section 2.). Suppose a planner is solving a problem involving a set G of goals, and retrieves a macro M which promises to achieve a subset G' of these goals. Let g ∈ (G - G') be an outstanding goal of the problem. We will say that M is sequenceable with respect to the outstanding goal g if and only if there exists a subplan P for achieving g such that M.P or P.M (where "." is the sequencing operator) will be a correct plan for achieving the set of goals G ∪ {g}. M is said to be interleavable³ with respect to g if and only if there exists a subplan P for achieving g such that P can be merged with M without retracting any steps, ordering constraints or binding constraints in M or P.
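These two definitions can be operationalized directly. The sketch below (our own STRIPS-style encoding; all names are ours) checks sequenceability by trying M.P and P.M, and interleavability by brute-force enumeration of all order-preserving merges of M and P:

```python
from itertools import combinations

# An operator is a triple (preconditions, adds, deletes) of sets of ground atoms.

def execute(plan, state):
    """Return the final state if every step's preconditions hold, else None."""
    for pre, add, dele in plan:
        if not pre <= state:
            return None
        state = (state - dele) | add
    return state

def valid(plan, init, goals):
    final = execute(plan, set(init))
    return final is not None and goals <= final

def merges(m, p):
    """All interleavings of m and p that preserve the internal order of each."""
    n = len(m) + len(p)
    for pos in combinations(range(n), len(m)):
        out, mi, pi = [], 0, 0
        for i in range(n):
            if i in pos:
                out.append(m[mi]); mi += 1
            else:
                out.append(p[pi]); pi += 1
        yield out

def sequenceable(m, p, init, goals):
    return valid(m + p, init, goals) or valid(p + m, init, goals)

def interleavable(m, p, init, goals):
    return any(valid(mp, init, goals) for mp in merges(m, p))
```

Since M.P and P.M are themselves merges of M and P, sequenceability implies interleavability, but not the reverse, as the synthetic domains discussed in the text illustrate.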
In particular, if M corresponds to a plan (T_M, O_M, B_M) (where T_M is the set of steps, O_M is the partial ordering on the steps, and B_M is the set of binding constraints on the variables), and P corresponds to a plan (T_P, O_P, B_P), then there exists a plan P': (T_M ∪ T_P, O_M ∪ O_P ∪ O', B_M ∪ B_P ∪ B') which achieves the set of goals G ∪ {g}.⁴ Clearly, interleavability is more general than sequenceability. There are many situations when the macros are not sequenceable but only interleavable with respect to the outstanding goals of the planner. Consider the simple artificial domains ART-IND, ART-MD and ART-MD-NS (originally described in [1]) shown in Figure 2. These domains differ in terms of the serializability [10] of the goals in the domain. ART-IND contains only independent goals (notice that none of the actions have delete lists). The goals in ART-MD are interacting but serializable, while those in ART-MD-NS are non-serializable.⁵ In particular, in the latter domain, macros will be

³Note that interleavability here refers to the ability to interleave plan steps. This is distinct from the ability to interleave subgoals. In particular, state-based planners that don't use the linearity assumption can interleave subgoals, but cannot interleave plan steps.

⁴Interleavability of macros, as defined here, differs from modifiability (cf. [5, 6]) in that the latter also allows retraction of steps and/or constraints from the macro, once it is introduced into the plan. While modifiability is not the focus of the current work, in our previous work [5] we have shown that PO planners do provide a flexible framework for plan modification. More recently, we have been investigating the utility tradeoffs involved in incorporating a flexible modification capability in plan reuse [4].
⁵From the domain descriptions, it can be seen that a conjunctive goal

Table 1: Performance statistics in the ART-MD and ART-MD-NS domains.

                         SNLP                     TOCL                  TOPI
                 scratch  +SEBG  +IEBG   scratch  +SEBG  +IEBG   scratch  +SEBG
ART-MD
  % Solved         100%    100%   100%     100%    100%   100%     100%    100%
  Cum. time          80     306    136       92     177   2250     1843    3281
  % Macro usage       -     20%   100%        -     20%   100%        -      6%
ART-MD-NS
  % Solved          40%     40%   100%      30%     26%    60%      40%     40%
  Cum. time       19228   21030   4942    22892   23853  14243    20975   23342
  % Macro usage       -      0%   100%        -      0%   100%        -      0%

interleavable, but not sequenceable with respect to any outstanding goals of the planner. To illustrate, consider the macro for solving a problem with the conjunctive goal G1 ∧ G2 in ART-MD-NS, which will be: A1_1 → A1_2 → A2_1 → A2_2. Now, if we add G3 to the goal list, the plan for solving the new conjunctive goal G1 ∧ G2 ∧ G3 will be A1_1 → A1_2 → A1_3 → A2_1 → A2_2 → A2_3 (where A1_3 and A2_3 are the new actions added to the plan to achieve G3). Clearly, the only way a macro can be reused in this domain is by interleaving it with new operators (unless of course it is an exact match for the problem). Even when the goals are serializable, as is the case in ART-MD, the distribution of stored macros may be such that the retrieved macro is not sequenceable with respect to the outstanding goals. For example, suppose the planner is trying to solve a problem with goals G1 ∧ G2 ∧ G3 from the ART-MD domain, and retrieves a macro which solves the goals G1 ∧ G3: A1 → A3. Clearly, the outstanding goal G2 is not sequenceable with respect to this macro, since the only way of achieving G1 ∧ G2 ∧ G3 will be by the plan A1 → A2 → A3, which involves interleaving a new step into the retrieved macro. The foregoing shows that any planner that is capable of using macros only when they are sequenceable with respect to the outstanding goals is less capable of exploiting its stored plans than a planner that can use macros also in situations where they are only interleavable.
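The ART-MD-NS claim above can be verified mechanically. The following sketch uses our own encoding of the Figure 2 operators and confirms that the two-goal macro extends to a third goal only by splicing the new steps into its middle:

```python
# Our encoding of the ART-MD-NS operators (Figure 2), for goal indices 1..3:
#   A1_i: prec I_i, add P_i, del {I_j | j < i}
#   A2_i: prec P_i, add G_i, del {I_j | all j} and {P_j | j < i}
N = 3

def A1(i):
    return ({f"I{i}"}, {f"P{i}"}, {f"I{j}" for j in range(1, i)})

def A2(i):
    return ({f"P{i}"}, {f"G{i}"},
            {f"I{j}" for j in range(1, N + 1)} | {f"P{j}" for j in range(1, i)})

def run(plan, state):
    """Execute a STRIPS plan; return the final state, or None on failure."""
    for pre, add, dele in plan:
        if not pre <= state:
            return None
        state = (state - dele) | add
    return state

init = {"I1", "I2", "I3"}
macro = [A1(1), A1(2), A2(1), A2(2)]   # stored plan for G1 ^ G2
new = [A1(3), A2(3)]                   # subplan for the added goal G3

# Sequencing the subplan before or after the macro fails ...
assert run(macro + new, set(init)) is None
assert run(new + macro, set(init)) is None

# ... but interleaving the new steps into the macro's middle succeeds:
final = run([A1(1), A1(2), A1(3), A2(1), A2(2), A2(3)], set(init))
assert final is not None and {"G1", "G2", "G3"} <= final
print("interleaving required and sufficient")
```

The sequenced attempts fail because each A2_i deletes every I_j, clobbering the preconditions of any A1 step that has not yet run.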
From our discussion in Section 2., it should be clear that planners that search in the space of world states, such as STRIPS, PRODIGY, and NOLIMIT [19], which refine plans only by adding steps to the beginning (or end, in the case of backward search in the space of states) of the plan, can reuse macros only when they are sequenceable with respect to the outstanding goals. In contrast, planners that search in the space of plans can refine partial plans by introducing steps anywhere in the middle of the plan, and thus can reuse macros even when they are only interleavable with respect to the outstanding goals. Of these latter, partial order planners, which eliminate premature commitment to step ordering through a more flexible plan representation, can be more efficient than total order planners. Based on this, we hypothesize that partial order planners not only will be able to reuse both sequenceable and interleavable macros, but will also be able to do so more efficiently. This, we believe, is the most important advantage of partial ordering planning during reuse.

4. Empirical Evaluation

The discussion in the previous section leads to two plausible hypotheses regarding the utility of PO planning in plan reuse frameworks: (i) PO planners are more efficient in exploiting interleavable macros than planners that search in the space of totally ordered plans, as well as state-based planners; and (ii) this capability significantly enhances their ability to exploit stored macros to improve performance in many situations, especially in domains containing non-serializable

⁵(continued) G_i ∧ G_j (where i < j) can be achieved in the ART-IND domain by achieving the two goals in any order, giving rise to two plans A_i → A_j and A_j → A_i. Only the first of these two plans will be a correct plan in the ART-MD domain, since the delete literals in the actions demand that G_i be achieved before G_j.
Finally, in the ART-MD-NS domain, the subplans for G_i and G_j have to be interleaved to give the plan A1_i → A1_j → A2_i → A2_j.

sub-goals. We have tested these hypotheses by comparing the performance of three planners -- a partial ordering planner and a total ordering planner, both of which search in the space of plans, and a state-based planner -- in conjunction with two different reuse strategies.⁶ In the following two subsections, we describe our experimental setup and discuss the results of the empirical study.

4.1. Experimental Setup

Performance Systems: Our performance systems consisted of three simple planners implemented by Barrett and Weld [1]: SNLP, TOCL and TOPI. SNLP is a causal-link based partial ordering planner, which can arbitrarily interleave subplans. TOCL is a causal-link based total ordering planner, which like SNLP can insert a new step anywhere in the plan, but unlike SNLP, searches in the space of totally ordered plans.⁷ SNLP, by virtue of its least commitment strategy, is more flexible in its ability to interleave operators than is TOCL. The third planner, TOPI, carries out a backward-chaining world-state search. TOPI only adds steps to the beginning of the plan. Thus, unlike SNLP and TOCL, but like planners doing search in the space of world states, such as STRIPS and PRODIGY, TOPI is unable to interleave new steps into the existing plan. All three planners are sound and complete. The three planners share many key routines (such as unification, operator selection, and search routines), making it possible to do a fair empirical comparison between them.

Reuse modes: To compare the ability of each planner to exploit the stored plan generalizations in solving new problems, the planners were run in three different modes in the testing phase: Scratch mode, SEBG (or sequenceable EBG) mode, and IEBG (or interleavable EBG) mode.
In the scratch mode, the planner starts with a null plan and refines it by adding steps, orderings and bindings until it becomes a complete plan. In the SEBG mode, the planner first retrieves a stored plan generalization that best matches the new problem (see below for the details of the retrieval strategy). The retrieved plan is treated as an opaque macro operator, and is added to the list of operator templates available to the planner. The planner is then called to solve the new problem with this augmented set of operators. The IEBG mode is similar to the SEBG mode, except that it allows new steps and constraints to be introduced between the constituents of the instantiated macro and the rest of the plan as the planning progresses. To facilitate this, whenever the planner selects a macro to establish a precondition, it considers the macro as a transparent plan fragment, and adds it to the existing partial plan. This operation involves updating the steps, orderings and causal links of the current partial plan with those of the macro. In the case of SNLP, the exact ordering of the steps of the macro with respect to the current plan can be left partially specified (e.g., by specifying the predecessor of the first step of the macro, and the successor of the last step of the macro), while in the case of TOCL, partial plans need to be generated for each possible totally ordered interleaving of the steps of the macro with respect to the steps of the current partial

⁶Code and test data for replicating our experiments can be acquired by sending mail to rao@asuvax.asu.edu.

⁷Each partially ordered plan produced by SNLP corresponds to a set of totally ordered plans. TOCL generates these totally ordered plans whenever SNLP generates the corresponding partially ordered plans.
Figure 3: Performance across the three reuse modes in the ART-MD-NS domain (left: cumulative statistics; right: CPU time as a function of plan length)

plan. SNLP is thus more efficient and less committed than TOCL in interleaving macros. It is easy to see that the SEBG strategy can reuse a macro if and only if it is sequenceable with the other outstanding goals of the plan, while the more general IEBG strategy can also reuse a macro whenever it is interleavable with the other outstanding goals of the plan. From the description of the three planners above, it should also be clear that only SNLP and TOCL can support the IEBG mode. TOPI, like other state-based planners such as STRIPS, PRODIGY and NOLIMIT, cannot support the IEBG mode. Our SEBG and IEBG strategies differ from usual macro operator based reuse strategies in that rather than use the entire plan library as macro-operators, they first retrieve the best matching plan from the library and use that as the macro-operator. The primary motivation for this difference is to avoid the high cost of going through the entire plan library during each operator selection cycle. (This cost increase is due both to the cost of operator matching and instantiation, and to the increased branching factor.) Instead of a single best matching plan, the strategies can very easily be extended to start with two or more best matching plans, which between them cover complementary goal sets of the new problem. This would thus allow for transfer from multiple plans. Here again, the ability to interleave plans will be crucial to exploit multiple macros.
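The contrast between the two splicing regimes can be sketched concretely (the representation and all names below are our own): an SNLP-style splice commits only to two new ordering constraints, leaving the macro free to float, whereas a TOCL-style planner must branch on every consistent total order.

```python
from itertools import permutations

# A partial plan is (steps, orderings); orderings is a set of (before, after) pairs.

def splice_macro(plan, macro, pred, succ):
    """Add the macro's steps and internal orderings to the plan, committing
    only to: pred precedes the macro's first step, and the macro's last
    step precedes succ.  Everything else stays partially ordered."""
    steps, orders = plan
    msteps, morders = macro
    new_orders = orders | morders | {(pred, msteps[0]), (msteps[-1], succ)}
    return (steps + list(msteps), new_orders)

def linearizations(plan):
    """All total orders consistent with the partial order; a total ordering
    planner like TOCL would have to generate and search these explicitly."""
    steps, orders = plan
    return [seq for seq in permutations(steps)
            if all(seq.index(a) < seq.index(b) for a, b in orders)]

base = (["start", "s1", "end"], {("start", "s1"), ("s1", "end")})
macro = (["m1", "m2"], {("m1", "m2")})
spliced = splice_macro(base, macro, "start", "end")
# The macro floats relative to s1: several interleavings remain open.
print(len(linearizations(spliced)))  # 3
```

The single partially ordered plan returned by the splice stands for all three linearizations at once, which is exactly the commitment TOCL cannot defer.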
Storage and Retrieval Strategies: To control for the factors of storage compaction and flexible plan editing, no specialized storage or editing strategies are employed in either of the planners. The retrieval itself is done by a simple (if unsophisticated) strategy involving matching of the goals of the new problem with those of the macros, and selecting the one matching the maximum number of goals. Although there is much scope for improvement in these phases (for example, retrieval could have been done with a more sophisticated causal-structure based similarity metric, such as the one described in [6]), our choices do ensure a fair comparison between the various planners in terms of their ability to exploit stored plans.

Evaluation strategy: As noted earlier, the sequenceability and interleavability of the stored macros with respect to the goals of the encountered problems can be varied by varying the ratio of independent vs. serializable vs. non-serializable goals in the problems. The artificial domains described in Figure 2 and Section 3.2. make ideal testbeds for varying the latter parameter, and were thus used as the test domains in our study. Our experimental strategy involved training all three planners on a set of 50 randomly generated problems from each of these domains. The training problems all have between 0 and 3 goals. During the training phase, each planner generalizes the learned plans using EBG techniques and stores them. In the testing phase, a set of 30 randomly generated problems that have between 4 and 7 goals (and thus are larger than those used in the training phase) are used to test the extent to which the planners are able to exploit the stored plans in the three different modes. A limit of 1000 cpu sec. per problem is enforced on all the planners, and any problem not solved in this time is considered unsolved (this limit includes both the time taken for retrieval and the time taken for planning). To eliminate any bias introduced by the time bound (cf. [16]), we used the maximally conservative statistical tests for censored data, described by Etzioni and Etzioni in [2], to assess the significance of all speedups. All experiments were performed in interpreted Lucid Common Lisp running on a Sun Sparc-II.

4.2. Experimental Results

Table 1 shows the cumulative statistics for solving the 30 test problems from each domain for all three planners and all three reuse modes. For each domain, the first row shows the percentage of the domain test problems that were correctly solved by each strategy within the 1000 CPU sec. time bound. The second row shows the cumulative CPU time for running through all the test problems (as mentioned, if a problem is not solved in 1000 CPU sec., we consider it unsolved and add 1000 CPU sec. to the cumulative time). The third row shows the percentage of the solved problems whose solutions incorporated retrieved library macros.

In the ART-MD domain, which has subgoals that are easily serializable, none of the planners show much improvement through reuse (although SNLP does perform significantly faster than TOCL in the interleaving EBG mode). All three planners are able to solve all the test problems from scratch within the 1000 cpu sec. time limit. The addition of the SEBG and IEBG strategies does not change the solvability horizon. More importantly, the cumulative time taken is slightly worse in both the SEBG and IEBG strategies as compared to from-scratch planning. This is understandable given that the problems in this domain are easy to solve to begin with, and any improvements provided by the reuse strategy are offset by the retrieval costs.

The situation in the ART-MD-NS domain is quite different. We see that none of the planners are able to solve more than 40% of the problems in the from-scratch mode.
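The censored-data bookkeeping behind these numbers can be sketched as follows (a simplified illustration of our own; the actual conservative tests are those of [2]): unsolved runs are charged the full 1000-second bound, and a win for planner A over planner B is credited only when it holds even under the worst-case reading of A's censored times.

```python
# Simplified illustration (ours) of scoring runs under a 1000-sec time bound.
BOUND = 1000.0

def cumulative_time(times):
    """times: list of (solved, cpu_seconds); censored runs count as BOUND."""
    return sum(t if solved else BOUND for solved, t in times)

def conservative_wins(a_times, b_times):
    """Count problems where A beats B even pessimistically: a censored A time
    could be arbitrarily large, a censored B time could be as small as BOUND."""
    wins = 0
    for (sa, ta), (sb, tb) in zip(a_times, b_times):
        ta = ta if sa else float("inf")   # pessimistic for A
        tb = tb if sb else BOUND          # optimistic for B
        if ta < tb:
            wins += 1
    return wins

a = [(True, 12.0), (True, 900.0), (False, BOUND)]
b = [(True, 40.0), (False, BOUND), (True, 5.0)]
print(cumulative_time(a), conservative_wins(a, b))  # 1912.0 2
```

Counting wins this way only understates a speedup, which is what makes a significant result under the bound trustworthy.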
Of the two reuse modes, SEBG remains at the same level of correctness as from-scratch. This is not surprising since, as discussed in Section 3.2., in the ART-MD-NS domain the stored plans are not sequenceable with respect to any remaining outstanding goals of the planner. Thus, unless the macro is an exact match (i.e., solves the full problem), it cannot be reused by an SEBG strategy. The IEBG strategy, on the other hand, dramatically improves the correctness rates of TOCL and SNLP from 40% to 60% and 100% respectively, while TOPI, which cannot support IEBG, stays unaffected. Moreover, as hypothesized, SNLP's improvement is more significant than that of TOCL.⁸

Figure 4: Cumulative performance as a function of % of non-serializable sub-goals (left: cumulative time; right: % problems solved)

The plots in Figure 3 compare the performance of TOCL and SNLP for all three reuse strategies in this domain. The left one plots the cumulative planning time as a function of the problem number (with the problems sorted in increasing order of difficulty). The right plot shows the average cpu time taken by each planner as a function of the plan length. We see that in IEBG mode SNLP significantly outperforms TOCL in the ability to exploit stored plans, both in terms of cumulative time and in terms of solvability horizon.

Experiments in Mixed Domains: The test domains ART-MD and ART-MD-NS above were somewhat extreme, in the sense that the former only has serializable goals while the latter only has non-serializable goals. More typically, we would expect to see a mixture of independent, serializable and non-serializable goals in a problem distribution.
To understand how the effectiveness of the various reuse strategies varies for such mixed problem distributions, we experimented with a mixed domain obtained by combining the actions of ART-IND (the domain with independent subgoals) and ART-MD-NS (the domain with non-serializable subgoals) (shown in Figure 2). Five different training and testing suites, each containing a different (pre-specified) percentage of non-serializable goals in the problem distribution, were generated. We experimented with problem sets containing 0, 25, 50, 75 and 100% non-serializable goals (where 0% corresponds to the problem set having goals drawn solely from ART-IND, and 100% corresponds to the problem set with goals drawn solely from ART-MD-NS). For each mixture, 50 training problems and 30 testing problems were generated randomly, as discussed before. Given the inability of TOPI to support IEBG, we concentrated on comparisons between SNLP and TOCL.

The plots in Figure 4 summarize the performance on each problem set as a function of the percentage of non-serializable goals in the problem set. The plot on the left compares the cumulative time taken by each strategy for solving all the problems in the test suite of each of the 5 problem sets. The plot on the right shows the percentage of problems successfully solved within the time bound by each strategy for each problem set. Once again, we note that SNLP using IEBG shows the best performance in terms of both the cumulative time and the percentage of problems solved. IEBG is also the best strategy for TOCL, but turns out to be considerably less effective than the IEBG strategy for SNLP. More interestingly, we see that the

8Using the statistical tests for censored data advocated by Etzioni in [2], we find that the hypothesis that SNLP+IEBG is faster than SNLP, as well as the hypothesis that SNLP+IEBG is faster than TOCL+IEBG, are both supported with very high significance levels by our experimental data.
The p-value is bounded above by 0.000 for both the sign test and the more conservative censored signed-rank test. The hypothesis that TOCL+IEBG is faster than TOCL is, however, supported with a much lower significance level (with a p-value of .13 for the sign test and .89 for the censored signed-rank test).

518 Kambhampati

performance of the IEBG strategy compared to the base-level planner improves as the percentage of non-serializable goals in the problem set increases, for both SNLP and TOCL. By the same token, we also note that the relative performance of the SEBG strategy worsens with an increased percentage of non-serializable goals, for both SNLP and TOCL.

Summary: The results of our empirical studies are consistent with our hypothesis regarding the superiority of PO planners in exploiting stored macros. First, we showed that TOPI fails to improve its performance when faced with interleavable macros, while SNLP and TOCL can both exploit them. Next we showed that SNLP is more efficient in exploiting stored macros than TOCL. In particular, the strategy of using the SNLP planner with the IEBG reuse strategy significantly outperforms all the other strategies, including TOCL+IEBG, in most cases. The higher cost of the TOCL+IEBG strategy can itself be explained by the fact that TOCL generates partial plans corresponding to each possible interleaving of the macro with the new steps, while SNLP can maintain a partially ordered plan and interleave the steps as necessary.

5. Related Work

Starting with STRIPS, stored plans have traditionally been reused as opaque macro operators that cannot be interleaved during planning. We believe that this was largely due to the limitations imposed by the underlying state-based planners. It is of course possible to get the effect of reuse of interleavable macros within state-based planning indirectly, through the use of single operator macros (aka preference search control rules [14]).
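The cost asymmetry noted in the summary above (TOCL branches over every interleaving of a retrieved macro with the new steps, while SNLP keeps a single partially ordered plan) can be made concrete: a k-step macro and m new steps admit C(k+m, k) total orders. A small sketch, with invented step names:

```python
from math import comb

def interleavings(xs, ys):
    """Enumerate every merge of xs and ys that preserves the internal
    order of each sequence; these are the alternatives a total-order
    planner must branch over."""
    if not xs:
        return [list(ys)]
    if not ys:
        return [list(xs)]
    return ([[xs[0]] + rest for rest in interleavings(xs[1:], ys)] +
            [[ys[0]] + rest for rest in interleavings(xs, ys[1:])])

macro = ["m1", "m2", "m3"]      # steps of a retrieved 3-step macro
new_steps = ["s1", "s2", "s3"]  # new steps added for outstanding goals

orders = interleavings(macro, new_steps)
print(len(orders), comb(6, 3))  # → 20 20
# A partial-order planner instead keeps one plan containing the two
# ordering chains m1<m2<m3 and s1<s2<s3, committing to a particular
# interleaving only when an interaction forces an ordering.
```

The count grows combinatorially: ten macro steps against ten new steps already give C(20, 10) = 184756 total orders for a single partially ordered plan.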
However, it is not clear what the advantages are of starting with a simplistic planner and getting interleavability indirectly, when more sophisticated planners, such as PO planners, allow interleavability naturally. More generally, we believe that eager compilation strategies such as search control rules are complementary to, rather than competing with, lazier learning strategies such as plan reuse. In some cases, the planner is better served by the lazy strategy of retrieving and modifying a large plan rather than the eager strategy of compiling each incoming plan into search control rules. In the former, the ability to interleave is essential, and planners like STRIPS would be at a natural disadvantage. Interestingly enough, this was one of the original reasons for the shift from the state-based planning of STRIPS to the plan-space based partial-order planning of NOAH within the planning community. As McDermott [13, p. 413] remarks, if you want the ability to improve performance by piecing large canned plans together, postponing decisions about how these plans will interact, then partial order planning is in some sense the inevitable choice.

In [19, 20], Veloso et al. advocate basing learning techniques within state-based (total ordering) planning without the linearity assumption, rather than within partial order planning. They justify this by arguing that the former has all the advantages of PO planners, and also scales up better to more expressive action representations, because checking the necessary truth of a proposition becomes NP-hard for partially ordered plans containing such actions. To begin with, as we discussed in Section 2, the inability to interleave macros is due to the limited refinement strategies allowed by a state-based planner, and has little to do with whether or not the planner makes the linearity assumption.
Secondly, the argument regarding the relative difficulty of scaling up partial ordering planners to more expressive action representations is based on the (mistaken) premise that a PO planner has to interpret the full (necessary and sufficient) modal truth criterion for PO plans during each search iteration. Recent work [7, 12, 15] amply demonstrates that sound and complete partial ordering planners do not necessarily have to interpret the full-blown modal truth criterion at each iteration (since they only need completeness in the space of ground operator sequences rather than in the space of PO plans), and thus can retain their relative advantages over total ordering planners even with more expressive action representations. This, coupled with our experiments showing the superiority of PO planners in exploiting stored plans, makes PO planning an attractive framework for doing plan reuse.

The concentration on state-based planning has also been true of much of the speedup learning work within planning, including search control rules and precondition abstraction. In [14, p. 83], Minton et al. seem to justify this by the observation: "[...] the more clever the underlying problem solver is, the more difficult the job will be for the learning component". Preferring a state-based planning strategy only to make learning easier seems somewhat unsatisfactory, especially given that there exist more sophisticated planning strategies that avoid many of the inefficiencies of state-based planners. Moreover, we believe that the shift to more sophisticated planning strategies is not just an implementation issue; it may also lead to qualitatively different priorities in techniques, as well as different target concepts worth learning.
As an example, one source of the inefficiency of STRIPS and other state-based planners stems from the fact that they confound the execution order (i.e., the order in which the goals are achieved during execution) with the planning order (i.e., the order in which the goals are attacked during planning). As we shift to planners that search in the space of plans, such as PO planners, planning order is cleanly separated from execution order, and many of the inefficiencies of state-based planners are naturally avoided. Thus, learning strategies and target concepts that are designed with state-based planners in mind may be of limited utility when we shift to more flexible planners. As an example, "goal order preference rules" of the type "work on on(y, z) before on(z, y)", learned by EBL strategies in the blocks world, are not that useful for partial order planners, which avoid premature step orderings to begin with. Similarly, in [17] Smith and Peot argue that many of the static abstractions generated with state-based planners in mind (e.g., [9]) impose unnecessary and sometimes detrimental ordering constraints when used in conjunction with the more flexible and efficient partial order planning strategies. All of this, in our view, argues in favor of situating research on learning to improve planning performance within the context of more flexible and efficient planning paradigms such as partial order planning.

6. Concluding Remarks

In this paper, we have addressed the issue of the relative utility of basing EBG based plan reuse strategies in partial ordering vs. total ordering planning. We observed that while the storage compactions resulting from the use of PO plans can be exploited irrespective of whether the underlying planner is a PO planner or a total ordering planner, PO planners do have distinct advantages when it comes to effectively reusing the stored plans.
In particular, we showed that PO planners are significantly more efficient in exploiting interleavable macros than state-based planners, as well as planners that search in the space of totally ordered plans. We also showed that this capability substantially enhances the ability to exploit stored macros to improve performance in many situations, where the domain goals and problem distributions are such that a significant percentage of stored macros are only interleavable with respect to the outstanding goals of the planner. Although this can happen both in domains with serializable subgoals and in domains with non-serializable subgoals, our experiments show that the effect is particularly strong in the latter.

When taken in the context of recent work on the comparative advantages of PO planners in plan generation [1, 12, 15], our study strongly argues for situating EBG based plan reuse strategies within the PO planning framework. We believe that such a move would also benefit other learning strategies such as search control rules and abstractions [4], and we are currently working towards verifying these intuitions.

Acknowledgements: We wish to thank Tony Barrett and Dan Weld for distributing the code for the SNLP, TOCL and TOPI planners. We also thank Oren Etzioni, Laurie Ihrig, Smadar Kedar, Suresh Katukam, Prasad Tadepalli and the (anonymous) reviewers of AAAI-93 and ML-93 for helpful comments on a previous draft.

References

[1] A. Barrett and D. Weld. Partial order planning: Evaluating possible efficiency gains. Technical Report 92-05-01, Department of Computer Science and Engineering, University of Washington, June 1992.
[2] O. Etzioni and R. Etzioni. Statistical methods for analyzing speedup learning experiments. Machine Learning. (To appear.)
[3] R. Fikes, P. Hart, and N. Nilsson. Learning and executing generalized robot plans. Artificial Intelligence, 3(4):251--288, 1972.
[4] S.
Kambhampati. Utility tradeoffs in incremental plan modification and reuse. In Proc. AAAI Spring Symp. on Computational Considerations in Supporting Incremental Modification and Reuse, 1992.
[5] S. Kambhampati and J.A. Hendler. A validation structure based theory of plan modification and reuse. Artificial Intelligence, 55(2-3), June 1992.
[6] S. Kambhampati. Exploiting causal structure to control retrieval and refitting during plan reuse. Computational Intelligence. (To appear.)
[7] S. Kambhampati and D.S. Nau. On the nature and role of modal truth criteria in planning. University of Maryland Inst. for Systems Res. Tech. Rep. ISR-TR-93-30, 1993.
[8] S. Kambhampati and S. Kedar. A unified framework for explanation-based generalization of partially ordered and partially instantiated plans. Technical Report ASU-CS-TR 92-008, Arizona State University, 1992. (A preliminary version appears in Proc. 9th AAAI, 1991.)
[9] C. Knoblock. Learning abstraction hierarchies for problem solving. In Proc. 8th AAAI, pages 923--928, 1990.
[10] R. Korf. Planning as search: A quantitative approach. Artificial Intelligence, 33(1), 1987.
[11] J. Allen, P. Langley and S. Matwin. Knowledge and regularity in planning. In Proc. AAAI Symp. on Computational Consideration in Supporting Incremental Modification and Reuse, 1992.
[12] D. McAllester and D. Rosenblitt. Systematic nonlinear planning. In Proc. 9th AAAI, 1991.
[13] D. McDermott. Regression planning. Intl. Jour. Intell. Systems, 6:357--416, 1991.
[14] S. Minton, J.G. Carbonell, C.A. Knoblock, D.R. Kuokka, O. Etzioni and Y. Gil. Explanation-based learning: A problem solving perspective. Artificial Intelligence, vol. 40, 1989.
[15] S. Minton, M. Drummond, J. Bresina, and A. Philips. Total order vs. partial order planning: Factors influencing performance. In Proc. KR-92.
[16] A. Segre, C. Elkan, and A. Russell. A critical look at experimental evaluation of EBL. Machine Learning, 6(2), 1991.
[17] D.E. Smith and M.A. Peot. A critical look at Knoblock's hierarchy mechanism. In Proc. 1st Intl. Conf.
on AI Planning Systems, 1992.
[18] P. Tadepalli and R. Isukapalli. Learning plan knowledge from simulators. In Proc. Workshop on Knowledge Compilation and Speedup Learning.
[19] M.M. Veloso, M.A. Perez, and J.G. Carbonell. Nonlinear planning with parallel resource allocation. In Proc. DARPA Planning Workshop, pages 207--212, November 1990.
[20] M.M. Veloso. Learning by Analogical Reasoning in General Problem Solving. PhD thesis, Carnegie-Mellon University, 1992.
Plan Transformation from Self-Questions: A Memory-Based Approach

R. Oehlmann, D. Sleeman and P. Edwards
University of Aberdeen, King's College, Aberdeen AB9 2UE, Scotland, UK
{oehlmann, sleeman, pedwards}@csd.abdn.ac.uk

Abstract

Recent work in planning has focussed on the re-use of previous plans. In order to re-use a plan in a novel situation, the plan has to be transformed into an applicable plan. We describe an approach to plan transformation which utilises reasoning experience as well as planning experience. Some of the additional information is generated by a series of self-generated questions and answers, as well as appropriate experiments. Furthermore, we show how transformation strategies can be learned.

Planning, Execution, and Transformation

Over the past few years a new view of planning has emerged. This view is based on the re-use of pre-stored plans rather than on building new plans from first principles.

Debugging of Interaction Problems uses pattern based retrieval of pre-stored programs and subroutines. Programs containing bugs are repaired by patching faulty steps (Sussman 1974).

Adaptive Planning employs abstraction and specialisation of plans as transformation strategies (Alterman 1988).

Case-Based Planning is an extension of Sussman's ideas of retrieval and repair, and the application of these ideas to planning. Hammond's CHEF system modifies a retrieved plan to satisfy those goals which are not already achieved, executes the plan, repairs it when it fails, and stores the planning problems related to the failure in the plan. This approach is characterised as memory-based, because the organisation of the memory of previous plans is changed during the process (Hammond 1989).

These approaches stress the necessity to modify previous plans in order to obtain plans which can be used in a new situation. In particular, the memory-based approach to plan transformation attempts to integrate the phases of building a plan and learning from planning failures.
In addition to the standard features of plan modification, execution, and repair, a planning system should be able to transform plans based on knowledge about actions, with the objective being to improve the system's reasoning about actions as well as plan execution. Furthermore, it should be able to learn the transformation strategy as a higher level plan, in order to apply similar methods of modification and repair to the strategy as to other plans. Oehlmann, Sleeman, & Edwards (1992) argue that learning a more appropriate plan can be supported by the generation of appropriate questions and answers, which we refer to as self-questions and answers; this is done using case-based planning. If an answer can not be generated, a reasoning component plans and executes an experiment in an attempt to acquire the missing knowledge.

In particular, we address the following planning issues: reasoning about plans before plan execution, reasoning about all known plans, generating self-questions to build a plan transformation, and learning transformation strategies.

We have implemented our approach to plan transformation in an exploratory discovery system, IULIAN1 (Oehlmann, Sleeman, & Edwards 1992), and have tested the system in various domains. In the remainder of this paper, we present a top level view of the IULIAN system, followed by an example demonstrating the interaction between case-based question and experimentation planning. We then describe our approach to plan transformation. Finally, we evaluate the approach and indicate various options for future work.

1IULIAN is the acronym for Interrogative Understanding and Learning In AberdeeN.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Reasoning about Actions from Self-Questions

IULIAN uses the planning of self-questions, answers and experiments to model reasoning about plans and actions.
The main stages of IULIAN's top level algorithm are given in Figure 1.

1. Question Stage. Input a description of a new experimental setting (problem); execute a pre-stored question plan about the expected experimental result.
2. Answer Stage. Try to generate an answer for the question.
3. Experimentation Stage. If the answer can not be generated, conduct an experiment to obtain the answer.
   a) Generate a hypothesis about the experimental result.
   b) Retrieve and execute an appropriate pre-stored experimental plan (interacting with a simulator).
   c) Compare the experimental result with the hypothesis and determine the expectation failure.
   d) Store the experimental setting and experimental result as a new case.
4. Question and Answer Loop. If an expectation failure is found, generate a why question about the expectation failure. If the why question can be answered, store the answer as a new explanation; otherwise generate a sequence of questions, answers, and experiments in order to answer the why question and to obtain an explanation. If during the generation of this sequence a case is retrieved which is not sufficiently similar to the input problem, transform the plan associated with the retrieved case into a new plan and generate a new case by executing that plan.
5. Model Revision Stage. Use the final explanation to modify the causal model.

Figure 1: IULIAN's Top Level Algorithm

The main task of the system is the discovery of new explanations to revise an initial theory. The basic data structures of the IULIAN system are cases, causal models, and plans (see Figure 2). A case comprises two components: an experimental setting (e.g. a description of an electric circuit with battery, lamp, and switch) and the result of an experiment, such as the statement that the lamp is on when the battery is switched on.

Cases are represented by objects and relations between objects. Causal models have a similar representation.
However, they are stored in a separate library and their objects are viewed as abstract concepts, e.g., the concept "lamp" rather than an actual lamp used in a given experiment (Oehlmann 1992). In addition, causal models use particular relations between concepts to represent causal links. A causal model is linked to the cases used to generate that model. Experimental plans describe the steps which have to be executed in order to produce an experimental setting and result, i.e. a case. Each case has a link to the experimental plan which generated it. In addition, each plan contains a set of descriptors (indexes), such as goals and planning failures, to enable plan retrieval. Plans are retrieved by matching their index values with the characteristic values of a given situation. The effects of plan execution are simulated by a rule-based simulator. The same basic plan structure is employed for question and answer plans, although the index vocabulary differs. Executing these plans allows the system to connect small parts of questions and answers in order to generate complete questions and answers. Question strategies are higher level plans2 which organise the execution of single question and answer plans, when it is important to generate questions in a particular sequence.

Before we describe the details of the transformation process, we will discuss the core of an example focusing on the question and answer loop (stage 4 of the top level algorithm). The entire example and the questions used during the reasoning process are described in (Oehlmann, Sleeman, & Edwards 1992).

We assume that the IULIAN system receives as input the description of an electric circuit with lamp and closed switch in parallel (target domain). Associated with the description of the circuit (target case) is an experimental (target) plan to build the circuit by connecting the various components and to observe the status of the lamp.
The lamp is reported as being off, as a result of this experiment. This result is inconsistent with the system's expectation, which is based on a previous case of a serial circuit in which the lamp was on. The situation characterised by this expectation failure (partially) matches the index of a question plan for generating a why question which would identify an explanation of the expectation failure (stage 4 of the top level algorithm). IULIAN is able to retrieve the question plan but not an appropriate answer plan, because the explanation has not been stored. This new situation determines an index for retrieving the question strategy CROSS-DOMAIN-REMINDING, which supports analogical case retrieval between domains. Executing the first step of the question strategy initialises the retrieval and execution of a question plan to generate an additional top level question. The answer plan to this question can be executed, and the generated answer comprises a (source) case in the domain of water pipes (source domain). The interplay between the execution of question and answer plans has actually led to a case retrieval (reminding) across domain boundaries.

2Note that all planning components in the IULIAN system are case-based planners; see (Oehlmann, Sleeman, & Edwards 1993).
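The first three stages of the top-level algorithm in Figure 1 can be sketched roughly as follows. The function and data-structure names are invented for illustration, and the dictionary lookup drastically simplifies IULIAN's plan-based generation of questions and answers:

```python
def question_answer_experiment(problem, answer_library, expect, experiment):
    """Stages 1-3 of Figure 1, radically simplified.

    problem        -- description of a new experimental setting
    answer_library -- stored answers, indexed by question
    expect         -- generates a hypothesis from prior cases
    experiment     -- runs the experiment (stands in for plan
                      execution against the rule-based simulator)"""
    question = ("expected-result", problem)       # Stage 1: pose question
    if question in answer_library:                # Stage 2: try to answer
        return answer_library[question], None
    hypothesis = expect(problem)                  # Stage 3a: hypothesis
    result = experiment(problem)                  # Stage 3b: run the plan
    failure = None if result == hypothesis else (hypothesis, result)
    answer_library[question] = result             # Stage 3d: store new case
    return result, failure                        # Stage 3c: compare

# Toy run: a prior serial-circuit case predicts the lamp is on, but the
# simulated parallel circuit reports it off. The resulting expectation
# failure is what would trigger the why-question loop of Stage 4.
library = {}
result, failure = question_answer_experiment(
    "parallel#switch-bulb", library,
    expect=lambda p: "lamp-on", experiment=lambda p: "lamp-off")
print(result, failure)  # → lamp-off ('lamp-on', 'lamp-off')
```

On a second call with the same problem, the stored case answers the question directly and no experiment is needed.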
Figure 2: Cross-domain plan retrieval. The figure shows two experimentation plans and cases: the source case, parallel water pipes with plain pipe and paddle wheel (plan parallel#plainpipe-paddlewheel in the water-pipes domain; result: paddle wheel W1 is not turning), and the target case, a parallel circuit with switch and bulb (plan parallel#switch-bulb in the electric-circuits domain; result: lamp B1 is off). Each plan lists its bindings, its goals (parallel%obj2-obj3, obj2-state) and its steps, such as connect-pump/connect-battery, connect-plainpipe/connect-switch, connect-paddlewheel/connect-bulb and a final check-state action.

In the source case a plain pipe and a paddle wheel are in parallel. Additionally, the observation that the paddle wheel does not turn is stored in the source case, which is associated with both a causal model and the source plan. A summary of the causal model is given below:

In order for the paddle wheel to turn, water must flow over it. There is a plain pipe in parallel with the paddle wheel. The smaller the resistance in a given pipe, the greater is the water flow in this pipe. If one of two parallel pipes has a very low resistance and the other one has a very high resistance, most of the water flows through the pipe with low resistance. Since the paddle wheel offers resistance to the flow and the plain pipe does not, all of the water flow goes through the plain pipe. Since there is no water flow over the paddle wheel, the paddle wheel does not move.

The source model can not be applied to the original electric circuit, because the switch and the plain pipe are not sufficiently similar.3 Therefore, the source plan is transformed into a new source plan able to generate a new source case which is sufficiently similar to the target case. The transformation process is described in the following section. During plan transformation, IULIAN replaces the step which refers to the insertion of a plain pipe by a step which refers to the insertion of a valve. The valve is more similar to the switch in the target domain, because both components are used to pursue the goal "select:flow-interruption/flow-support". In addition, the system inserts a step which opens the valve.4 Once the transformation process is finished, two additional reasoning stages are needed.
First, the reasoner has to collect evidence that the causal model associated with the source case is valid for the transformed source case. Second, the reasoner has to modify the causal model to make it applicable to the target case, and it then has to ensure that the modified causal model is valid for this case.

3For a discussion of how similarity is measured see (Oehlmann, Sleeman, & Edwards 1992).
4The inserted steps are marked in Figure 3 in italics.

Figure 3: Cross-domain plan transformation. The figure shows the transformed and modified source plan, parallel water pipes with valve and paddle wheel (plan parallel#valve-paddlewheel in the water-pipes domain, with the inserted steps connect-valve and set-valve-to-open-state), and the modified target plan, a parallel electric circuit with switch, lamp and test lamp (plan parallel#switch-bulb-test in the electric-circuits domain, with the inserted steps connect-test-lamp and check-test-lamp-state). Both experimentation plans were generated by IULIAN.

The first stage can be achieved by modifying the transformed plan (Hammond 1989). This stage has the following effect: in executing the modified plan, the water pipe circuit is built by connecting the various components, and a small test paddle wheel is placed after the valve. The test paddle wheel allows the planner to test whether the water runs through the valve. In the second stage, the table of similar objects (see the following section) enables the system to replace all objects from the water pipe domain which appear in the causal model with similar objects from the electric circuit domain. The validity of the modified causal model can be tested by modifying the target plan in the same way as the transformed source plan. A step is added to the plan, allowing the planner to insert a test lamp after the switch in the electric circuit in order to test whether the current flows through the path with the switch. The result of plan execution shows that the current flows through the switch rather than through the main lamp, and confirms the new causal model.
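The causal model discussed above (flow divides between parallel branches in inverse proportion to their resistance, so a near-zero-resistance plain pipe starves the paddle wheel, just as a closed switch starves the lamp) can be checked with a simple divider computation; the resistance values below are invented for illustration:

```python
def branch_fractions(resistances):
    """Fraction of total flow through each parallel branch:
    proportional to conductance 1/R. The same rule covers water
    flow through pipes and electric current through a circuit."""
    conductances = [1.0 / r for r in resistances]
    total = sum(conductances)
    return [g / total for g in conductances]

# Plain pipe (or closed switch) in parallel with a paddle wheel (or lamp):
plain_pipe, paddle_wheel = 0.001, 10.0
frac_pipe, frac_wheel = branch_fractions([plain_pipe, paddle_wheel])
print(round(frac_wheel, 6))  # → 0.0001
# Virtually all of the flow takes the low-resistance branch, so the
# paddle wheel does not turn; analogously, the lamp stays off.
```

Replacing the plain pipe with a closed valve (infinite resistance branch removed) sends the entire flow over the paddle wheel, which is exactly the contrast the transformed plan is built to test.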
Plan Transformation from Self-Questions

In this section, we describe our approach to plan transformation by extending the example in the previous section. We will focus in particular on the interaction between self-questions and transformation steps.

The system has to accept that the plan is appropriate for mapping the relevant parts of the knowledge from the water pipe domain to the domain of electrical circuits. Therefore it retrieves a question plan to generate an evaluation question.5

Question 1.2: Is the retrieved experimental plan (source) appropriate?
Answer 1.2: No, there are steps in the target plan without equivalents in the source plan.

5We assume that the appropriate question and answer plans are available, rather than a higher level question strategy representing the transformation process. This strategy will be learned during the process of self-questioning and answering.

The answer reveals that the plan is not completely appropriate; however, it is the best plan IULIAN can obtain. Therefore, the strategy CROSS-DOMAIN-REMINDING is suspended and the plan is transformed into a more appropriate one.6 It is important to note that the strategy goals are not abandoned; the planner still has the goal of performing a mapping between the water pipe domain and the electric circuit domain. IULIAN will achieve this goal only if it has a plan available to build the appropriate links between source and target domains. It is therefore desirable that the system is able to reason about all the plans it knows (Pollack 1992). The stages of plan transformation are summarised in Figure 4:

1. Match objects and relations in the target and source plan with respect to the abstract descriptor values goal, task, and belief. If a match succeeds, store the matching object pairs in the object-transformation-table. If one of the matches does not succeed, perform the following stages.
2.
Instantiate steps in target and source plan using the information from the object-transformation-table, and match the steps in target and source plan according to identical goals (omitted in Figures 2 and 3) and actions. Build the step-transformation-table of matching steps.
3. Identify the non-matching steps in the target plan and search the plan library for matching steps in plans from the source domain. Store these pairs in the retrieved-step-table.
4. Build a new source plan by inspecting every step in the target plan. If the step-transformation-table contains a matching source step, copy this step into the new plan. If the retrieved-step-table contains a matching retrieved step, copy this step into the new plan.
5. Collect the abstract indices used to retrieve the appropriate questions and answers during the plan transformation process and build a new question strategy.

Figure 4: The Transformation Steps

In order to reason about plans in different domains, a similarity measure has to be established, i.e. the planner has to know which steps in the plan are similar and which steps are different. In a second stage, the planner uses the table of matching objects and the two binding lists to build a table of equivalent steps. The system is able to generate Answer 1.2 because there are steps which do not appear in the table of equivalent steps. In order to identify these differences, IULIAN retrieves a question plan whose execution results in Question 1.2.1.

6If the transformation fails, the problem is presented to the user who can add additional knowledge and re-start the process.

Question 1.2.1: Which steps in the target plan have no equivalent in the source plan and which steps in the source plan have no equivalent in the target plan?
Answer 1.2.1: The step CONNECT-SWITCH in the target plan has no equivalent in the source plan and the step CONNECT-PLAINPIPE in the source plan has no equivalent in the target plan.
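The transformation stages of Figure 4 can be sketched in code. The following is a minimal, hypothetical Python rendering: plans are plain dictionaries, all names and key choices are illustrative assumptions rather than IULIAN's actual representation, and stage 5 (building the new question strategy) is omitted.

```python
def transform(target, source, library):
    """Build a new source plan whose steps mirror the target plan."""
    # Stage 1: match objects on the abstract descriptors goal, task, belief.
    object_table = {
        t["name"]: s["name"]
        for t in target["objects"] for s in source["objects"]
        if all(t[k] == s[k] for k in ("goal", "task", "belief"))
    }
    # Stage 2: match steps in target and source by identical goal and action.
    step_table = {
        t["name"]: s
        for t in target["steps"] for s in source["steps"]
        if t["goal"] == s["goal"] and t["action"] == s["action"]
    }
    # Stage 3: for unmatched target steps, retrieve matching steps from
    # plans of the source domain held in the plan library.
    retrieved_table = {}
    for t in target["steps"]:
        if t["name"] in step_table:
            continue
        for plan in library:
            if plan["domain"] != source["domain"]:
                continue
            for s in plan["steps"]:
                if s["goal"] == t["goal"] and s["action"] == t["action"]:
                    retrieved_table[t["name"]] = s
    # Stage 4: build the new source plan by inspecting every target step.
    new_plan = [
        step_table.get(t["name"]) or retrieved_table.get(t["name"])
        for t in target["steps"]
    ]
    return object_table, new_plan
```

On the running example, a target step with no source equivalent (CONNECT-SWITCH) would be filled from a library plan via stage 3, while matched steps are copied via stage 2.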
After establishing the exact differences between both plans, the system has a choice between modifying the source plan or the target plan. In our example, IULIAN searches in the water pipe domain for the equivalent to the step CONNECT-SWITCH and uses this equivalence to modify the source plan. The system retrieves and executes a question plan focusing on the step CONNECT-SWITCH.

Question 1.2.2: What is the equivalent for the step CONNECT-SWITCH in the WATER-PIPE domain?
Answer 1.2.2: The step CONNECT-VALVE in the plan SERIAL#VALVE-PADDLEWHEEL.

The generation of the answer involves the search for an equivalent step in the remaining experimentation plans. The system is looking for a step which supports the same goals it pursued in the circuit domain using the step CONNECT-SWITCH. However, the new step has to be found in the domain of water pipes, because the plan to be transformed describes an experiment in this domain. In addition, the new step and the step CONNECT-SWITCH should perform actions with the same name. IULIAN might have identified additional steps in the source plan without equivalents in the target plan. If this happened, the system would then attempt to find appropriate steps in the plan SERIAL#VALVE-PADDLEWHEEL. If this limited search is not successful, it would start a new general search through the entire plan library. However, it would use similar goal, domain and action constraints as described above. After retrieving the step CONNECT-VALVE, the system has the necessary information to build a new plan. The following question focuses on the plan transformation goal and attempts to combine all the single pieces of information obtained.

Question 1.2.3: How can I make the source plan more similar to the target plan?
Answer 1.2.3: I can make the source plan more similar to the target plan by removing the step CONNECT-PLAINPIPE and replacing it with the step CONNECT-VALVE.
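The two-phase search behind Answer 1.2.2 (first the remaining source-domain experimentation plans, then the entire plan library, under the same goal, domain, and action constraints) might be sketched as follows. This is a speculative reconstruction; the dictionary layout and function name are assumptions, not IULIAN's code.

```python
def find_equivalent(step, source_domain, domain_plans, full_library):
    """Find a step with the same goal and action name, preferring the
    limited set of source-domain experimentation plans."""
    def matches(candidate):
        return (candidate["goal"] == step["goal"]
                and candidate["action"] == step["action"])

    # Limited search: the remaining experimentation plans first.
    for plan in domain_plans:
        for candidate in plan["steps"]:
            if matches(candidate):
                return plan["name"], candidate
    # General search: the entire plan library, same constraints,
    # still restricted to plans from the source domain.
    for plan in full_library:
        if plan["domain"] != source_domain:
            continue
        for candidate in plan["steps"]:
            if matches(candidate):
                return plan["name"], candidate
    return None
```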
The new experimental plan PARALLEL#VALVE-PADDLEWHEEL is then generated (see Figure 3). We take the memory-based view that the organisation of memory should reflect its function (Hammond 1989), and so memory supporting learning should reflect this process by changing its organisation. The learning task described here involves the transformation of plans. Therefore, the overall transformation process should be reflected by changing the indices of plans. IULIAN improves plan indexing by storing information about equivalent objects or equivalent steps and information about the problem the system had with the source plan. After successful generation of the new plan, the question and answer goals used to retrieve the appropriate questions and answers are packaged to form a strategy which can be executed when a similar problem arises. In this way, the planner learns a new plan and, additionally, a new and more efficient way to cope with the problem of an inappropriate plan in the context of cross-domain reminding. With the newly generated plan, IULIAN continues the execution of the suspended question strategy CROSS-DOMAIN-REMINDING, which finally leads to the explanation given in the example.

Evaluation and Discussion

The three main goals of our transformation approach are: improving the execution of question strategies, improving plan execution, and learning new transformation strategies. In this section, we discuss the achievements and limitations of our approach in the light of these goals and compare the approach with previous systems discussed in Section 1. Execution of the question strategy is improved because the transformation process generates new plans. These plans enable the system to execute the strategy. In previous approaches (Alterman 1988; Hammond 1989; Hanks & Weld 1992), plans are transformed to facilitate plan execution.
Similarly, approaches to cross-domain analogy rely on existing knowledge structures rather than on knowledge newly generated by experimentation (Vosniadou & Ortony 1989). IULIAN additionally views plans as knowledge about actions, which facilitates the reasoning process. Plan execution is improved in that the transformed plan addresses additional planning situations. In contrast to previous systems, the close integration of reasoning and planning enables IULIAN to transform plans based on its knowledge about the reasoning process, rather than only using its experience with plan execution. A new transformation strategy is learned by storing the components of the transformation process in a question strategy. Learning plan transformations in this way is a novel contribution. In previous approaches, e.g. the CHEF system (Hammond 1989), plan transformation is implemented as a set of fixed, pre-stored rules. The current scope of our plan transformation approach is limited to the cross-domain reminding strategy. However, we expect that new strategies can be easily incorporated, because the structure of our transformation approach is highly modular and each component can be replaced with a new component.

Future Work

Our system evaluation indicates that the current transformation mechanism is restricted to reasoning in the context of cross-domain reminding. Although the reasoner is able to apply the approach to a large variety of different situations, we intend to extend our concept of plan transformation to additional reasoning strategies. An important advantage of our approach is the application of case-based planning to the transformation process itself. We will continue to address this question by investigating the modifications needed to adapt a transformation strategy learned in the context of a given reasoning strategy to a second reasoning strategy.
Acknowledgements

We are grateful to Jeff Berger, University of Chicago, for his helpful comments on a previous version of this paper. This research is partially funded by a University of Aberdeen studentship.

References

Alterman, R. 1988. Adaptive Planning. Cognitive Science 12:393-421.
Hammond, K. 1989. Case-Based Planning: Viewing Planning as a Memory Task. New York: Academic Press.
Hanks, S., and Weld, D. 1992. Systematic Adaptation for Case-Based Planning. In Proceedings of the First International Conference on Artificial Intelligence Planning Systems, 96-105.
Oehlmann, R. 1992. Learning Causal Models by Self-Questioning and Experimentation. In Workshop Notes of the AAAI-92 Workshop on Communicating Scientific and Technical Knowledge, 73-80.
Oehlmann, R., Sleeman, D., and Edwards, P. 1992. Self-Questioning and Experimentation in an Exploratory Discovery System. In Proceedings of the ML-92 Workshop on Machine Discovery, 41-50.
Oehlmann, R., Sleeman, D., and Edwards, P. 1993. Case-Based Planning in an Exploratory Discovery System. In Working Notes of the IEE/BCS Symposium on Case-Based Reasoning, 1/1-1/3.
Pollack, M. 1992. The Uses of Plans. Artificial Intelligence 57:43-68.
Sussman, G. 1974. The Virtuous Nature of Bugs. In Proceedings of the First Conference of the Society for the Study of AI and the Simulation of Behaviour.
Vosniadou, S., and Ortony, A. 1989. Similarity and Analogical Reasoning. Cambridge, UK: Cambridge University Press.
Milind Tambe
School of Computer Science
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15213
e-mail: tambe@cs.cmu.edu

Abstract

Machine learning approaches to knowledge compilation seek to improve the performance of problem-solvers by storing solutions to previously solved problems in an efficient, generalized form. The problem-solver retrieves these learned solutions in appropriate later situations to obtain results more efficiently. However, by relying on its learned knowledge to provide a solution, the problem-solver may miss an alternative solution of higher quality - one that could have been generated using the original (non-learned) problem-solving knowledge. This phenomenon is referred to as the masking effect of learning. In this paper, we examine a sequence of possible solutions for the masking effect. Each solution refines and builds on the previous one. The final solution is based on cascaded filters. When learned knowledge is retrieved, these filters alert the system about the inappropriateness of this knowledge so that the system can then derive a better alternative solution. We analyze conditions under which this solution will perform better than the others, and present experimental data supportive of the analysis. This investigation is based on a simulated robot domain called Groundworld.1

1. Introduction

Knowledge-compilation techniques in the field of machine learning seek to improve the performance of problem-solvers/planners by utilizing their past experiences. Some examples of these knowledge-compilation techniques are explanation-based generalization (EBG/EBL) (DeJong and Mooney, 1986; Mitchell, Keller, and Kedar-Cabelli, 1986), chunking (Laird, Rosenbloom, and Newell, 1986a), production composition (Anderson, 1983; Lewis, 1978), macro-operator learning (Fikes, Hart, and Nilsson, 1972; Shell and Carbonell, 1989), and analogical and case-based reasoning (Carbonell, 1986; Hammond, 1986).
Paul S. Rosenbloom
Information Sciences Institute & Computer Science Department
University of Southern California
4676 Admiralty Way
Marina del Rey, CA 90292
e-mail: rosenbloom@isi.edu

These techniques store experiences from previously solved problems in an efficient, generalized form. The problem-solver then retrieves these learned experiences in appropriate later situations so as to obtain results more efficiently, and thus improve its performance. However, by relying on its learned knowledge to provide a solution, the problem-solver may miss an alternative solution of higher quality - one that could have been generated using the original (non-learned) problem-solving knowledge. For instance, in a planning domain, the problem-solver may miss the derivation of a higher-quality plan if a lower-quality plan has been learned earlier. The following example from the Groundworld domain (Stobie et al., 1992) illustrates this phenomenon.

1This research was supported under subcontract to Carnegie Mellon University and the University of Southern California from the University of Michigan as part of contract N00014-92-K-2015 from the Defense Advanced Research Projects Agency (DARPA) and the Naval Research Laboratory (NRL). The Groundworld simulator used in this paper was developed by Charles Dolan of Hughes AI Center. The simulated robots in Groundworld were developed in collaboration with Iain Stobie of the University of Southern California.

Groundworld is a two-dimensional, multi-agent simulation domain in which both space and time are represented as continuous quantities. The principal features in this world are walls, which block both movement and vision. Currently, our task in Groundworld involves two agents: an evasion agent and a pursuit agent. The evasion agent's task is to reach its destination from its starting point, without getting caught by the pursuit agent, and to do so as quickly as possible. The pursuit agent's task is to catch the evasion agent.
Both agents have a limited range of vision. When the two agents are in visual range, the pursuit agent starts chasing, while the evasion agent attempts to escape by hiding behind some wall, from where it replans to reach its destination. Figure 1-1-a shows part of an example scenario from Groundworld. The thick straight lines indicate walls. Here, the two agents are within visual range. To avoid capture, the evasion agent uses a map to create a plan (shown by dashed lines) to hide behind a wall. The plan is stored in learned rules, to be retrieved and reused in similar later situations. The situation in Figure 1-1-b is similar, and the learned rules directly provide a plan to the hiding spot. However, by relying on these learned rules, the evasion agent misses a closer hiding spot (denoted by X). If the evasion agent had confronted the problem in Figure 1-1-b without its previously learned rules, it would have planned a path to the closer hiding spot. However, due to its learned rules, the evasion agent follows a low-quality plan. While the lower-quality plan allows it to hide successfully, there is a significant delay in its hiding, which in turn delays it in reaching its real destination. This effect, of using a low-quality learned solution, has been observed for some time in humans, where it is referred to as Einstellung (Luchins, 1942). Modeling Einstellung in computer simulations is an important aspect of capturing human skill acquisition (Lewis, 1978).

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Figure 1-1: The masking problem when hiding; approx. 15% of the Groundworld scenario is shown. (a) Learned hiding plan. (b) The masking effect.

More recently, Clark and Holte (Clark and Holte, 1992) report this effect in the context of a Prolog/EBG system, where they call it the masking effect, because the learned
knowledge masks the original problem-solving knowledge. However, in contrast to the psychological work, Clark and Holte's goal is not to produce this effect, but to eliminate it. The hypothesis underlying the work described here is part way between these two perspectives; in particular, the assumption is that masking (Einstellung) is in its essence unavoidable, but that there are effective strategies that an intelligent system can use to minimize its negative consequences. Note that a low-quality solution produced due to masking is not always problematical. For instance, in real-time situations, a low-quality solution may be acceptable, as long as it is produced in bounded time (Korf, 1990). However, in other situations, a good-quality solution has a much higher priority, and hence avoiding the masking effect assumes importance. We start by motivating the overall hypothesis by examining the relationship of masking to overgenerality, looking at some of the existing approaches for dealing with this overgenerality and discussing the problems these approaches have. We then propose a sequence of three new approaches to coping with masking, based on the concepts of approximations, filters, and cascades of filters. This is all then wrapped up with some analysis and backup experiments comparing these approaches.

2. Masking and Overgenerality

The masking effect arises because, while generating a new learned rule (i.e., at generation time), the system may fail to capture all of the knowledge that was relevant in deriving a high-quality solution. This may occur, for example, because the requisite knowledge is only implicit in the problem solving, or because it is intractable to capture the knowledge. Either way, the learned rule may be missing some knowledge about the exact situations where its application will lead to a high-quality solution. Thus, when the learned rule is retrieved (i.e., at retrieval time), it may apply even though it leads to a low-quality solution.
This is clearly an instance of overgenerality (Laird, Rosenbloom, and Newell, 1986b); however, this overgenerality is with respect to producing a high-quality solution, not with respect to producing a correct solution. That is, these learned rules do not lead to a failure in task performance. For instance, in Figure 1-1-b, the learned rules that lead to masking do not result in a failure in hiding, even though a closer hiding place is missed. The two classical types of solutions to overgenerality are: (1) avoid it by learning correct rules, or (2) recover from it by detecting the performance failures they engender and then learning patches for those situations. Clark and Holte's approach is of the first type. In their Prolog-based system, knowledge about solution quality is implicit in the ordering of the Prolog rules. Their EBG implementation fails to capture this knowledge while learning new rules, and leads to the masking effect. The key feature of their solution is, at generation time, to order the learned and non-learned rules according to solution quality. This ordering is guaranteed to remain valid at retrieval time, so the highest-quality solution can be retrieved simply by selecting the topmost applicable rule from the ordering. In general, solutions of type 1 - which we shall henceforth refer to as generation-time exact or GT-exact solutions - require capturing all of the relevant knowledge into the learned rules at generation time. However, in complex domains, it can be extraordinarily difficult to do this; that is, tractability problems result. Consider a second example from the Groundworld domain (Figure 2-1). In Figure 2-1-a, the evasion agent attempts to reach its destination, using a map to plan a path through a set of regions (Mitchell, 1988).
The path (shown by a dashed line) is chosen so as to be the shortest one that avoids traveling close to the ends of walls - these are potential ambush points that may not allow the evasion agent sufficient maneuvering space to reach a hiding place before it is caught. In this particular instance, the evasion agent has no information about the pursuit agent's position, and hence cannot take that into account while planning the path; however, the pursuit agent is far enough away that it cannot intercept the evasion agent anyway.

Figure 2-1: Masking when trying to reach destination. (a) Learned plan. (b) The masking effect.

The rule learned from the path-planning process in Figure 2-1-a captures a plan - a generalized sequence of regions through which the agent must traverse - that transfers to the situation in Figure 2-1-b. In this situation, the plan leads to interception by the pursuit agent. Such interceptions occur in this world, and are by themselves a non-issue - interceptions do not lead to failure (capture) as long as there is enough maneuvering space for successful hiding. However, in this case masking occurs because the evasion agent has knowledge about the location of the pursuit agent - from an earlier encounter with it - so it should have been possible to avoid this interception, and the resultant time lost from hiding and replanning. Without the learned rule, the evasion agent would have formed a different plan in Figure 2-1-b, one that would have avoided the area around the pursuit agent, allowing it to reach its destination quickly. To apply the GT-exact solution to this problem, the correct learned rule would need to capture exactly the circumstances under which the path is of low quality; that is, those circumstances in which the pursuit agent is in a known location from which it can intercept the evasion agent's path.
For example, the overgeneral rule could be augmented with explicit disabling conditions of the form: (know pursuit agent in region-X), (know pursuit agent in region-Y), and so on. These disabling conditions avert the retrieval of the learned path if the pursuit agent is known to be in any of the regions from which it could intercept the path traversed. While this approach seems plausible here, there are two problems which tend to make it intractable. First, locating all possible disabling conditions, i.e., positions of the pursuit agent for which the plan is of low quality, involves a large amount of processing effort. This is a long path, and there are a variety of positions of the pursuit agent which threaten the path. Second, a large number of disabling conditions can severely increase the match cost of the learned rule, causing an actual slowdown with learning (Tambe, et al., 1990). The problems become even more severe in intractable domains. For example, in the chess end-game domain, it is effectively impossible to correctly condition a learned plan at generation time so as to ensure its exact retrieval (Tadepalli, 1989). As a result, at retrieval time, the learned plan may apply, but it does not always lead to a successful solution. And further, in incomplete domains, the relevant knowledge may not even be available at generation time. Together these problems limit the feasibility of the GT-exact approach to relatively simple domains. The second general class of existing solutions to overgenerality are the refinement (or recovery) strategies (Gil, 1992; Huffman, Pearson and Laird, 1991; Chien, 1989; Tadepalli, 1989). However, these solutions all depend on explicit detection of failures at planning or execution time (e.g., failure in forming or executing a plan) to indicate the incorrectness of a rule, and thus to trigger the refinement process (Huffman, Pearson and Laird, 1991).
While this works for overgeneral solutions that produce incorrect behavior, with the masking effect the learned solutions are only of low quality, and do not lead to explicit failure. Without an explicit failure, the refinement process simply cannot be invoked. (Furthermore, failure-driven learning may not always be the right strategy; e.g., in Groundworld, failure is extremely expensive - it leads to capture by the pursuit agent!) Thus, this class of solutions does not look feasible for masking problems.

3. New Approaches to Masking

The previous section ruled out refinement strategies and raised tractability issues with respect to GT-exact. This section introduces a sequence of three new approaches: (1) GT-approximate takes the obvious step of avoiding the intractability of GT-exact by approximating the disabling conditions; (2) RT-approximate improves on GT-approximate's real-time characteristics by using the approximations as retrieval-time filters; and (3) RT-cascade refines RT-approximate by reducing the amount of replanning.

3.1. Approximating Disabling Conditions

GT-approximate overcomes the intractability issues faced by GT-exact by using overgeneral approximations (simplifying assumptions) about the exact situations for which the learned rules lead to low-quality solutions. In the path-planning example, this involves replacing the set of exact disabling conditions by a single, more general, approximate condition - (know pursuit agent's position) - thus disabling the learned rule if any knowledge about the pursuit agent's position is available. Inclusion of only a single disabling condition also alleviates the problem of high match cost. For this solution to be effective in general, the system must be able to derive good approximations. Fortunately, there is already a considerable amount of work on this topic that could provide such approximations, e.g., (Elkan, 1990; Ellman, 1988; Feldman and Rich, 1986).
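The contrast between exact and approximate disabling conditions can be made concrete with a small sketch. The region names and the knowledge-base format below are hypothetical stand-ins, not the actual Groundworld representation.

```python
# GT-exact: enumerate every region from which a known pursuit agent could
# intercept the learned path (expensive to compute at generation time,
# and expensive to match at retrieval time).
EXACT_DISABLING_REGIONS = {"region-3", "region-7", "region-12"}  # hypothetical

def gt_exact_enabled(knowledge):
    """Retrieve the learned path unless the pursuit agent is known to be
    in one of the enumerated interception regions."""
    return knowledge.get("pursuit-agent-region") not in EXACT_DISABLING_REGIONS

def gt_approximate_enabled(knowledge):
    """One overgeneral condition: disable the learned path whenever *any*
    position knowledge is available, even if the agent is far away."""
    return "pursuit-agent-region" not in knowledge
```

The single approximate condition is cheap to match, but it also disables the rule when the pursuit agent is known to be in a harmless region, which is exactly the overspecialization discussed next.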
However, there are still two other problems with GT-approximate. First, due to the overgeneral approximations, it may overspecialize a learned rule, disabling it from applying even in situations where it leads to a high-quality solution. For instance, suppose the rule learned in Figure 2-1-a is to be reused in Figure 2-1-a, and (know pursuit agent's position) is true. In this situation, GT-approximate will disable the learned rule, even though the pursuit agent is far away, and the learned rule is thus appropriate. Second, GT-approximate does not facilitate the speed-quality tradeoff that is essential for real-time performance (Boddy and Dean, 1989). In particular, the disabling conditions used here simply disable learned rules in situations where they lead to low-quality solutions, forcing the system to derive a new solution from scratch. However, in some real-time situations, a low-quality response is perfectly acceptable (Korf, 1990); e.g., in the hiding situation, the evasion agent may find a low-quality plan acceptable if the pursuit agent is close and there is no time to generate a better plan.

3.2. Approximations as Retrieval-Time Filters

RT-approximate alleviates the real-time performance problem faced by GT-approximate by converting the (approximate) disabling conditions into (approximate) retrieval-time filters.2 These filters quickly check if a learned solution is of low quality after its retrieval. For instance, (know pursuit agent's position) can be used as an approximate filter for the path-planning example. If this filter is true at retrieval time, then the retrieved plan is marked as being one of possibly low quality.

2Filtering strategies have also been used in other agent architectures. For example, in IRMA (Bratman, et al., 1988), filters decide if an external event/opportunity is compatible or incompatible with the plan the system has committed to.

In a time-critical situation, such as the hiding situation, the system
can simply ignore this mark and use its learned solution. Where do these filters come from? One "top-down" possibility is that they arise from explicit generation-time assumptions, much as in GT-approximate. For example, if it is known that the planning proceeded under the assumption that no knowledge is available about the location of the pursuit agent, then this assumption could be captured as a filter and associated with the learned rule. Though the existence of such location knowledge at retrieval time does not necessarily mean that the plan will be of low quality, the filter does at least ensure that the plan will not suffer from the masking effect because of this location information. A second "data-driven" possibility is to use "significant" external events as the basis for filters. Essentially, the system notices some external object/event which may suggest to it that a retrieved solution is inappropriate. For instance, in the hiding example, if the system notices a closer, larger wall in front, then this may suggest to it that its retrieved hiding plan is inappropriate. This strategy is related to the reference features proposed in (Pryor and Collins, 1992), which are tags that the system associates with potentially problematical elements in its environment. Later activation of a reference feature alerts the system to a potential negative (positive) interaction of that element with its current plan. The biggest problem with RT-approximate is that it suffers from the same overspecialization problem that dogs GT-approximate; that is, the filters are overgeneral, and can eliminate plans even when they would yield high-quality solutions.

3.3. Cascading Filters

RT-cascade overcomes the overspecialization problem of RT-approximate by cascading a more exact filter after the approximate filter. It first applies the approximate filter to the retrieved solution.
If this indicates that the solution may be of low quality, then the exact filter is applied to verify the solution. If the exact filter also indicates that the retrieved solution is inappropriate, then the system replans from scratch. (Sometimes, a better alternative may be to modify and re-use the existing plan (Kambhampati, 1990).) If the exact filter indicates that the solution is appropriate, then the original solution is used, thus overcoming the overspecialization introduced by the approximate filter. As an example, consider what happens when RT-cascade is applied to the two Groundworld scenarios introduced earlier. In the path-planning scenario, the approximate filter is (know pursuit agent's position) and the exact filter is a simulation of the plan that verifies whether the pursuit agent's position will lead to an interception. In the hiding scenario, the approximate filter is (wall in front of evasion agent). Here, the exact filter verifies that the wall actually is a hiding place (e.g., it will not be so if the pursuit agent is located between the wall and the evasion agent), and that the wall is close. In both the scenario in Figure 2-1-b and the one in Figure 1-1-b, the approximate filters detect possibly low-quality plans. The exact filters are then run, and since they concur, replanning occurs, yielding the plans in Figure 3-1.

Figure 3-1: Overcoming the masking effect. (a) Overcoming the masking effect in path-planning. (b) Overcoming the masking effect in hiding.

In both of these cases RT-cascade yields the same qualitative behavior as would RT-approximate; however, in other circumstances RT-cascade would have stayed with the original plan while RT-approximate replanned. In either event, this experience can be learned so as not to repeat the exact verification (and replanning) on a similar future problem.
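The cascade just described can be summarized in a few lines. In this hypothetical sketch, the approximate filter returns True when it suspects a low-quality solution, the exact filter returns True when the retrieved solution is verified to be acceptable, and all callables are assumptions standing in for the Groundworld machinery.

```python
def rt_cascade(retrieved_plan, approx_filter, exact_filter, replan):
    """Apply cascaded filters to a retrieved plan."""
    if not approx_filter(retrieved_plan):
        return retrieved_plan          # no quality concern raised
    if exact_filter(retrieved_plan):
        return retrieved_plan          # approximate filter was inaccurate
    return replan()                    # confirmed low quality: replan

# Path-planning example: the approximate filter fires whenever the pursuit
# agent's position is known; the exact filter simulates the plan.
plan = {"regions": ["r1", "r2", "r5"]}
kept = rt_cascade(
    plan,
    approx_filter=lambda p: True,      # (know pursuit agent's position)
    exact_filter=lambda p: True,       # simulation finds no interception
    replan=lambda: {"regions": ["r1", "r3", "r5"]},
)
```

In this run the exact verification overrides the inaccurate approximate filter, so the original plan is kept and no rederivation cost is paid; had the simulation found an interception, replan() would have been invoked instead.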
The exact verification in RT-cascade may appear similar to GT-exact, but there is a big difference in their computational costs. In the exact verification process, the system reasons only about the single situation that exists at retrieval time. In contrast, GT-exact reasons about all possible potentially problematical situations that may arise at retrieval time. For instance, in the path-planning example, GT-exact requires reasoning about all possible positions of the pursuit agent that can lead to an interception, as opposed to a single position of the pursuit agent. This difference in reasoning costs at generation time and retrieval time has also been observed (and exploited) in some other systems (Huffman, Pearson and Laird, 1991). Note that this high cost of GT-exact also rules out a GT-cascade solution, which would combine the exact and approximate disabling conditions at generation time. We have focused on applying the cascaded filters after the retrieval of a learned solution, but before its execution/application. However, the cascaded filters could be employed during or after execution as well. For instance, in the path-planning example, the cascaded filters could be invoked only if the pursuit agent actually intercepts the path. Here, this interception itself acts as a bottom-up approximate filter. The exact filter then verifies if the path is a low-quality one (e.g., this path could be the best the system could plan if it had no prior knowledge about the pursuit agent, or this was the only path possible, etc.). This experience can be learned, and retrieved in future instances. However, one key problem with this strategy is that it disallows any preventive action on the problem at hand.
The key remaining question about RT-cascade is how well it performs in comparison to RT-approximate; that is, whether the extra cost of performing the exact verifications in RT-cascade is offset by the replanning effort that would otherwise be necessary in RT-approximate. This is a sufficiently complicated question to be the topic of the next section.

4. RT-approximate vs RT-cascade

Two factors determine whether RT-cascade outperforms RT-approximate. The first is the amount of overspecialization/inaccuracy in the approximate filter. Without such inaccuracy, the exact filter is simply unnecessary. The second factor relates to the cost of (re)derivation. Since the exact filter is intended to avoid the (re)derivation of a solution, it must cost less than the rederivation to generate savings. Winslett (1987) shows that while, in the worst case, derivation and verification processes are of the same complexity, in general, verification may be cheaper. Let us consider two systems. The first, S-approximate, uses the RT-approximate approach; the second, S-cascade, uses the RT-cascade approach. Now, consider a problem instance where the approximate filter is inaccurate, i.e., it indicates that a solution is of low quality, but it is actually not of low quality. Since S-approximate depends on only the approximate filter, it will derive a new solution from scratch, and it will incur the cost C_derive. In contrast, with S-cascade, the exact filter will be used to verify the quality of the solution. This verification will succeed, indicating that the solution is actually not of low quality. Therefore, S-cascade will not derive a new solution, and will only incur the cost C_vsucc for successful verification. Assuming C_vsucc is less than C_derive (as discussed above), this situation favors S-cascade. It will obtain a speedup over S-approximate of C_derive/C_vsucc. 530 Tambe Thus, a cascaded filter can lead to performance improvements.
However, now consider a second problem instance, where the approximate filter is accurate, i.e., it indicates that a solution is of low quality, and it actually is of low quality. S-approximate will again derive a new solution from scratch, at a cost of C_derive. S-cascade will again use the exact filter to verify the quality of the solution. However, since the solution now is of low quality, the verification will fail, at a cost of C_vfail. S-cascade will then derive a new solution, at a cost of C_derive, so that the total cost for S-cascade will be C_derive + C_vfail. This situation favors S-approximate. It will obtain a speedup over S-cascade of (C_derive + C_vfail)/C_derive. Thus, if the approximate filter functions inaccurately for a problem instance, S-cascade outperforms S-approximate; otherwise, S-approximate performs better. In general, there will be a mix of these two types of instances. Let N_acc be the number of instances where the approximate filter performs accurately, and N_inacc be the number of instances where it performs inaccurately. Simple algebra reveals that if S-cascade is to outperform S-approximate, then the accuracy of the approximate filter, N_acc/(N_acc + N_inacc), must be bounded above by (C_derive - C_vsucc)/(C_vfail + (C_derive - C_vsucc)). If the approximate filter is any more accurate, S-approximate will outperform S-cascade. (It may be possible to improve this bound further for S-cascade by applying the exact filter selectively; that is, skipping it when C_derive is estimated to be cheaper than C_vfail.) We do not yet know of any general procedures for predicting a priori how accurate an approximate filter will be, nor how low this accuracy must be for RT-cascade to outperform RT-approximate. So we have instead investigated this question experimentally in the Groundworld domain.
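The break-even analysis above can be made concrete with a small calculation. The following sketch is our own, and the numeric cost values in it are hypothetical examples, not the paper's measurements:

```python
# Expected-cost comparison between S-approximate and S-cascade, following
# the analysis above. The numeric costs below are hypothetical.

def accuracy_bound(c_derive, c_vsucc, c_vfail):
    """Break-even accuracy of the approximate filter:
    (C_derive - C_vsucc) / (C_vfail + (C_derive - C_vsucc)).
    S-cascade wins when the actual accuracy is below this bound."""
    return (c_derive - c_vsucc) / (c_vfail + (c_derive - c_vsucc))

def expected_costs(n_acc, n_inacc, c_derive, c_vsucc, c_vfail):
    """Total costs over n_acc accurate and n_inacc inaccurate instances."""
    approx = (n_acc + n_inacc) * c_derive            # always rederives
    cascade = n_acc * (c_vfail + c_derive) + n_inacc * c_vsucc
    return approx, cascade

bound = accuracy_bound(100, 10, 5)               # ~0.947 with these costs
costs_low = expected_costs(40, 60, 100, 10, 5)   # 40% accuracy
costs_high = expected_costs(96, 4, 100, 10, 5)   # 96% accuracy
```

With these hypothetical costs, at 40% accuracy (below the 94.7% bound) the cascade's total cost comes out lower than S-approximate's, and at 96% accuracy it comes out higher, matching the algebra in the text.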
Our methodology has been to implement RT-cascade as part of an evasion agent constructed in Soar (Rosenbloom et al., 1991), an integrated problem-solving and learning architecture that uses chunking (a variant of EBL) to acquire rules generalizing its experience in solving problems, and then to use this implementation to gather data that lets us approximate the parameters of the upper-bound equation, at least for this domain. Table 4-1 presents the experimental results. Since the values for C_derive, C_vsucc and C_vfail vary for different start and destination points, five different sets of values were obtained. The first, second and third columns give C_derive, C_vsucc and C_vfail respectively (measured in number of simulation cycles). The fourth column gives the speedup, C_derive/C_vsucc, obtained by the system due to the cascaded filter when the approximate filter is inaccurate. The fifth column gives the slowdown, (C_derive + C_vfail)/C_derive, observed by the system due to the cascaded filter when the approximate filter is accurate. The last column in the table is the computed bound on the accuracy of the approximate filter. Table 4-1: Experimental results for the path-planning example. The first row in the table shows the data for the start and destination points as shown in Figure 2-1-a. Here, the value of 30 for C_vsucc represents a case where the pursuit agent is located far to the north-east of the evasion agent, so that it will not intercept the planned path. The value of 11 for C_vfail was obtained for the case where the pursuit agent is located as shown in Figure 2-1-b. The other four rows represent four other problems, with decreasing path lengths. The table shows that in cases where the approximate filter is inaccurate, the system derives good speedups due to the cascaded filter. In cases where the approximate filter is accurate, the system encounters very small slowdowns due to the cascaded filter.
The last column in the table shows that even if the accuracy of the approximate filter is as high as 95-97%, the cascaded filter will continue to provide the system with some performance benefits. The approximate filter that we have used, (know pursuit agent's position), is not this accurate. For the five problems above, its actual accuracy varied from about 44% for the first problem to 28% for the last problem. We could employ an alternative filter, but its accuracy would need to be more than 95-97% before the cascaded filters become unuseful for this problem. The last row in Table 4-1 shows a low C_derive, for source and destination points that are close. Here, the speedup due to the cascaded filters has decreased. However, the other entries in this last row are blank. This is because with such close source and destination points, verification failure is instant: the pursuit agent is in visual range and starts chasing. In such cases, the evasion agent abandons its path planning and instead tries to hide. For the hiding example, C_derive is 14, while C_vsucc and C_vfail are both 3. These values show little variance with different hiding destinations. This provides a bound of 73% on the accuracy of the approximate filter. If the approximate filter is any more accurate than this, the cascaded filter is not beneficial. We estimate the accuracy of our approximate filter, (wall in front of evasion agent), to be approximately 25%.

5. Summary and

This paper focused on the masking problem in knowledge compilation systems. The problem arises when a system relies on its learned knowledge to provide a solution, and in this process misses a better alternative solution. In this paper, we examined a sequence of possible solutions for the masking effect. Each solution refined and built on the previous one. The final solution is based on cascaded filters.
When learned knowledge is retrieved, these filters alert the system to the inappropriateness of this knowledge, so that the system can then derive a better solution. We analyzed conditions under which this solution performs better than the others, and presented experimental data supportive of the analysis. Much more needs to be understood with respect to masking. Concerns related to masking appear in different systems, including some non-learning systems. One example of this is the qualification problem (Ginsberg and Smith, 1987; Lifschitz, 1987; McCarthy, 1980), which is concerned with the issue that the successful performance of an action may depend on a large number of qualifications. The disabling conditions for learned rules (from Section 2) are essentially a form of such qualifications. However, the solutions proposed for the qualification problem have a different emphasis: they focus on higher-level logical properties of the solutions. For instance, one well-known solution is to group together all of the qualifications for an action under a single disabling abnormal condition (McCarthy, 1980; Lifschitz, 1987). This condition is assumed false by default, unless it can be derived via some independent disabling rules. However, issues of focusing or limiting the reasoning involved in these disabling rules are not addressed. In contrast, our use of filters to focus the reasoning at retrieval time, and the use of two filters (not just one), provide two examples of our concern with more pragmatic issues. Masking also needs to be better situated in the overall space of learning issues. We have already seen how it is a subclass of overgeneralization that leads to a decrease in solution quality rather than outright solution failure. However, it also appears to be part of a more general family of issues that includes, among others, the utility problem in EBL.
The utility problem concerns the degradation in the speed of problem solving with learning (Minton, 1988; Tambe et al., 1990). Clark and Holte (1992) note that this is distinct from masking, which concerns degradation in solution quality with learning. However, the utility problem can be viewed from a broader perspective, as proposed in (Holder, 1992). In particular, the traditional view of the utility problem is that it involves symbol-level learning (e.g., acquisition of search-control rules), and creates a symbol-level utility problem (degradation in speed of problem solving) (Minton, 1988). Holder (1992) examines the problem of over-fitting in inductive learning, and views it as part of a general utility problem. This over-fitting problem could actually be viewed as involving knowledge-level (inductive) learning, and creating a knowledge-level utility problem (degradation of the accuracy of learned concepts). Given this perspective, we can create a 2x2 table, with the horizontal axis indicating the type of utility problem (symbol-level or knowledge-level), and the vertical axis indicating the type of learning (Figure 5-1). Figure 5-1: A broader perspective on the utility problem. The masking effect may now be viewed as involving symbol-level learning (knowledge compilation), but creating a knowledge-level utility problem (degradation in solution quality). Finally, the average growth effect that is observed in some systems (Tambe, 1991) provides an example of knowledge-level learning causing a symbol-level utility problem. Here, a large number of new rules acquired via knowledge-level learning can cause a symbol-level utility problem. We hope that by exploring such related issues, we can obtain a broader understanding of the masking effect.

References

Anderson, J. R. 1983. The Architecture of Cognition. Cambridge, Massachusetts: Harvard University Press.
Boddy, M., and Dean, T. 1989.
Solving Time-Dependent Planning Problems. Proceedings of the International Joint Conference on Artificial Intelligence. pp. 979-984.
Bratman, M. E., Israel, D. J., and Pollack, M. E. 1988. Plans and resource-bounded practical reasoning. Computational Intelligence, 4(4).
Carbonell, J. G. 1986. Derivational analogy: a theory of reconstructive problem-solving and expertise acquisition. In Mitchell, T. M., Carbonell, J. G., and Michalski, R. S. (Eds.), Machine Learning: A Guide to Current Research. Los Altos, California: Kluwer Academic Press.
Chien, S. 1989. Using and refining simplifications: explanation-based learning in intractable domains. Proceedings of the International Joint Conference on Artificial Intelligence. pp. 590-595.
Clark, P., and Holte, R. 1992. Lazy partial evaluation: an integration of explanation-based generalization and partial evaluation. Proceedings of the International Conference on Machine Learning. pp. 82-91.
DeJong, G. F. and Mooney, R. 1986. Explanation-based learning: An alternative view. Machine Learning, 1(2), 145-176.
Elkan, C. 1990. Incremental, approximate planning. Proceedings of the National Conference on Artificial Intelligence (AAAI). pp. 145-150.
Ellman, T. 1988. Approximate theory formation: An explanation-based approach. Proceedings of the National Conference on Artificial Intelligence. pp. 570-574.
Feldman, Y., and Rich, C. 1986. Reasoning with simplifying assumptions: a methodology and example. Proceedings of the National Conference on Artificial Intelligence (AAAI). pp. 2-7.
Fikes, R., Hart, P., and Nilsson, N. 1972. Learning and executing generalized robot plans. Artificial Intelligence, 3(1), 251-288.
Gil, Y. 1992. Acquiring Domain Knowledge for Planning by Experimentation. Ph.D. diss., School of Computer Science, Carnegie Mellon University.
Ginsberg, M., and Smith, D. E. 1987. Reasoning about Action II: The qualification problem. Proceedings of the 1987 Workshop on the Frame Problem in AI. pp. 259-287.
Hammond, K. 1986. Learning to anticipate and avoid planning problems through the explanation of failures. Proceedings of the Fifth National Conference on Artificial Intelligence (AAAI). pp. 556-560.
Holder, L. 1992. The general utility problem in EBL. Proceedings of the National Conference on Artificial Intelligence. pp. 249-254.
Huffman, S. B., Pearson, D. J. and Laird, J. E. November 1991. Correcting Imperfect Domain Theories: A Knowledge-Level Analysis (Tech. Rep. CSE-TR-114-91). Department of Electrical Engineering and Computer Science, University of Michigan.
Kambhampati, S. 1990. A theory of plan modification. Proceedings of the National Conference on Artificial Intelligence (AAAI). pp. 176-182.
Korf, R. 1990. Real-time heuristic search. Artificial Intelligence, 42(2-3).
Laird, J. E., Rosenbloom, P. S. and Newell, A. 1986. Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1(1), 11-46.
Laird, J. E., Rosenbloom, P. S., and Newell, A. 1986. Overgeneralization during knowledge compilation in Soar. Proceedings of the Workshop on Knowledge Compilation. pp. 46-57.
Lewis, C. H. 1978. Production system models of practice effects. Ph.D. diss., University of Michigan.
Lifschitz, V. 1987. Formal theories of action. Proceedings of the 1987 Workshop on the Frame Problem in AI. pp. 35-57.
Luchins, A. S. 1942. Mechanization in problem solving. Psychological Monographs, 54(6).
McCarthy, J. 1980. Circumscription -- a form of non-monotonic reasoning. Artificial Intelligence, 13, 27-39.
Minton, S. 1988. Quantitative results concerning the utility of explanation-based learning. Proceedings of the National Conference on Artificial Intelligence. pp. 564-569.
Mitchell, J. S. B. 1988. An algorithmic approach to some problems in terrain navigation. Artificial Intelligence, 37, 171-201.
Mitchell, T. M., Keller, R. M., and Kedar-Cabelli, S. T. 1986. Explanation-based generalization: A unifying view. Machine Learning, 1(1), 47-80.
Pryor, L. and Collins, G. 1992. Reference features as guides to reasoning about opportunities. Proceedings of the Conference of the Cognitive Science Society. pp. 230-235.
Rosenbloom, P. S., Laird, J. E., Newell, A., and McCarl, R. 1991. A preliminary analysis of the Soar architecture as a basis for general intelligence. Artificial Intelligence, 47(1-3), 289-325.
Shell, P. and Carbonell, J. 1989. Towards a general framework for composing disjunctive and iterative macro-operators. Proceedings of the Eleventh International Joint Conference on Artificial Intelligence. pp. 596-602.
Stobie, I., Tambe, M., and Rosenbloom, P. 1992. Flexible integration of path-planning capabilities. Proceedings of the SPIE Conference on Mobile Robots. pp. (in press).
Tadepalli, P. 1989. Lazy explanation-based learning: A solution to the intractable theory problem. Proceedings of the International Joint Conference on Artificial Intelligence. pp. 694-700.
Tambe, M. 1991. Eliminating combinatorics from production match. Ph.D. diss., School of Computer Science, Carnegie Mellon University.
Tambe, M., Newell, A., and Rosenbloom, P. S. 1990. The problem of expensive chunks and its solution by restricting expressiveness. Machine Learning, 5(3), 299-348.
Winslett, M. 1987. Validating generalized plans in the presence of incomplete information. Proceedings of the National Conference on Artificial Intelligence. pp. 261-266.
Rough Resolution: A Refinement of Resolution

Heng Chu and David A. Plaisted
Department of Computer Science
University of North Carolina
Chapel Hill, NC 27599-3175, USA
{chu,plaisted}@cs.unc.edu

Abstract

Semantic hyper-linking [Plaisted et al., 1992, Chu and Plaisted, 1993, Chu and Plaisted, 1992] has been proposed recently to use semantics with hyper-linking [Lee and Plaisted, 1992], an instance-based theorem proving technique. Ground instances are generated until an unsatisfiable ground set is obtained; semantics is used to greatly reduce the search space. One disadvantage of semantic hyper-linking is that large ground literals, if needed in the proofs, are sometimes hard to generate. In this paper we propose rough resolution, a refinement of resolution [Robinson, 1965], which resolves only upon maximum literals, those that are potentially large in ground instances, to obtain rough resolvents. Rough resolvents can be used by semantic hyper-linking to avoid generating large ground literals, since the maximum literals have been deleted. As an example, we will show how rough resolution helps to prove LIM3 [Bledsoe, 1990], which cannot be proved using semantic hyper-linking only. We will also show other results in which rough resolution helps to find the proofs faster. Though incomplete, rough resolution can be used with other complete methods that prefer small clauses.

Introduction

Semantic hyper-linking has been recently proposed [Plaisted et al., 1992, Chu and Plaisted, 1992, Chu and Plaisted, 1993] to use semantics with hyper-linking [Lee and Plaisted, 1992]. Some hard theorems like the IMV [Bledsoe, 1983] and ExQ [Wang, 1965] problems have been proved with user-provided semantics only. Semantic hyper-linking is a complete, instance-based refutational theorem proving technique. Ground instances of the input clauses are generated and a satisfiability check is applied to the ground instance set.
Semantics is used to reduce the search space and keep relevant ground instances. (This research was partially supported by the National Science Foundation under grant CCR-9108904.) Size is measured based on the largest literal in the clause, and smaller ground instances are generated first. Such an instance generation strategy, however, makes it difficult to generate large ground instances of a literal, since there might be many smaller (yet irrelevant) ones that need to be generated first. For example, for some LIM+ problems [Bledsoe, 1990], a correct ground instance of the goal clause lt(D, o) ∨ ¬lt(|f(xs(D)) - f(a)| + |g(xs(D)) - g(a)|, e0) has to be generated by substituting the variable D with min(d1(ha(e0)), d2(ha(e0))) to obtain the proof. The ground instance is large and, with normal semantics, semantic hyper-linking generates many irrelevant smaller instances and diverts the search. Similar problems happen when large ground literals need to be generated in axiom clauses. Rough resolution addresses this problem by resolving upon (and deleting) literals that are potentially large. Thus large ground literals are not needed, and proofs are easier to obtain by semantic hyper-linking. However, rough resolution runs against the philosophy of hyper-linking, namely, not to combine literals from different clauses. We set restrictions to reduce the number of retained resolvents, so duplication of the search space is not as serious a problem as in ordinary resolution. In this paper we first describe semantic hyper-linking in brief. Then we discuss the rough resolution technique in detail. An example (the LIM3 problem) is given to help illustrate the ideas. Finally we give some more test results and then conclude the paper.

Semantic Hyper-linking

In this section we briefly describe semantic hyper-linking. For more detailed discussion, please refer to [Chu and Plaisted, 1992, Chu and Plaisted, 1993].
A refutational theorem prover, instead of showing that a theorem H logically follows from a set of axiom clauses A, proves that A together with ¬H is unsatisfiable by deriving a contradiction from the input clause set. Usually we can find semantics for the theorem (and the axioms) to be proved. Such semantics can be represented by a structure which contains a domain of objects and interpretations for constants, functions and predicates.
Automated Reasoning 15 From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.
A structure can be viewed as a (possibly infinite) set of all ground literals true in the semantics. A structure I is decidable if for a first-order formula F it is decidable whether F is satisfied by I. According to Herbrand's theorem, among the input clauses of a refutational theorem prover, there must be ground instances of some clauses (usually of the negation of the theorem) that are false in the semantics; otherwise the input clauses are satisfiable. In general, no matter what semantics is chosen, such false ground instances exist for the input set. This observation is the basis of semantic hyper-linking. The idea of semantic hyper-linking is this: Initially a decidable input structure is given by the user; then the prover systematically changes the structure (semantics) by generating ground instances of input clauses that are false in the structure. We have a set U of ground instances, initially empty. Ground instances that are false in the current structure are generated. User-provided semantics is used to help generate the new ground instances. These ground instances are added to U. Then the structure is changed, if possible, to satisfy U. This procedure is repeated with the new structure until U is unsatisfiable. For details, please see [Plaisted et al., 1992, Chu and Plaisted, 1992, Chu and Plaisted, 1993]. Semantic hyper-linking has been shown to have great potential.
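The size-ordered instance generation underlying this procedure, essentially an enumeration of the Herbrand universe smallest terms first, can be illustrated with a toy sketch. This is our own illustration, not the prover's code; the string-based term representation is an assumption:

```python
# Toy sketch (ours, not the prover's code): enumerate the ground terms of
# a small Herbrand universe smallest first. With only unary function
# symbols, every term of size n wraps a term of size n - 1.

def terms_of_size(n, constants, unary_fns):
    """All ground terms with exactly n symbols (a constant has size 1)."""
    if n == 1:
        return list(constants)
    return [f"{f}({t})" for f in unary_fns
            for t in terms_of_size(n - 1, constants, unary_fns)]

def enumerate_by_size(constants, unary_fns, max_size):
    """Yield ground terms in order of increasing size."""
    for n in range(1, max_size + 1):
        yield from terms_of_size(n, constants, unary_fns)

list(enumerate_by_size(["a", "b"], ["f"], 3))
# -> ['a', 'b', 'f(a)', 'f(b)', 'f(f(a))', 'f(f(b))']
```

A deep term such as min(d1(ha(e0)), d2(ha(e0))) is reached only after every smaller term has been enumerated, which is why the large ground instances needed by some proofs arrive so late in the search.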
Hard theorems like IMV (the intermediate value theorem in analysis) [Bledsoe, 1983, Ballantyne and Bledsoe, 1982] and ExQ (three examples from quantification theory) [Wang, 1965] are proved with the user-provided semantics only. No other human control is needed. However, if large ground literals are needed for the proof, they often are difficult to generate because of the way the ground instances are generated (which is basically an enumeration of the Herbrand base). Many small irrelevant ground literals need to be generated before large ones are generated. This produces a lot of useless smaller ground instances that complicate the proof search, and it leaves many theorems unprovable in practice. Rough resolution is designed to address this problem.

Rough Resolution

The basic idea of rough resolution is simple: we resolve only upon those literals that are potentially large in ground instances. The literals resolved upon are deleted in the resolvents, and their ground instances need not be generated. If such large ground literals are used in the proof, rough resolution can avoid generating them and help semantic hyper-linking find the proof faster. For example, consider the following clauses (x, y and z are variables): C1 = { ¬g(x, y, f(z)), d(z, x) } and C2 = { g(f(x), g(y), z), ¬f(x, y) }. Suppose the following two ground instances are needed in the proof, which needs no other instances of g(f(a), g(h(b)), f(c)): C1' = { ¬g(f(a), g(h(b)), f(c)), d(c, f(a)) } and C2' = { g(f(a), g(h(b)), f(c)), ¬f(a, h(b)) }. We can resolve C1 and C2 upon the first literals and get C = { d(z, f(x)), ¬f(x, y) }. Then, instead of C1' and C2', a smaller ground instance C' = { d(c, f(a)), ¬f(a, h(b)) } can be used in the proof, avoiding the use of the larger g(f(a), g(h(b)), f(c)), which might be difficult for semantic hyper-linking to generate.

Maximum Literals

Binary resolution on two clauses C1 and C2 chooses one literal L in C1 and one literal M in C2 such that Lθ = ¬Mθ, where θ is a most general unifier of L and ¬M.
A resolvent R = (C1 - L)θ ∪ (C2 - M)θ is generated. R is smooth if Lθσ is never larger than any literal in Rσ for any σ; R is rough if for some σ, Lθσ is larger than any literal in Rσ. Smooth resolvents are not kept, because they only remove small literals (the "smooth" parts of clauses); large literals (the "rough" parts of clauses) remain as a difficulty. Thus we are particularly interested in rough resolvents, because they might remove large literals needed for the proof. Rough resolvents can be obtained by resolving upon maximum literals.

Definition 1. A literal L in a clause C is an absolute maximum literal if, for all σ, the size of Lσ is larger than or equal to that of any literal in Cσ; L is a likely maximum literal if for some σ, Lσ is larger than or equal to any literal in Cσ.

A clause can only have one of those two kinds of maximum literals. For example, in the clause { d(v, k(w, u)), ¬d(u, v), ¬d(v, w) }, where u, v and w are variables, d(v, k(w, u)) is the only absolute maximum literal; in the clause { d(u, q(w, v)), ¬d(f(u, v), w), ¬d(u, v) }, both d(u, q(w, v)) and ¬d(f(u, v), w) are absolute maximum literals; the clause { d(u, v), ¬d(v, u) } has two absolute maximum literals; and in the clause { d(u, w), ¬d(u, v), ¬d(v, w) } all three literals are likely maximum literals.

Definition 2. A rough resolution step involves simultaneously resolving upon maximum literals L1, ..., Ln of a clause C (called the nucleus) with absolute maximum literals in other clauses C1, ..., Cn (called electrons). A rough resolvent is obtained from a rough resolution step.
16 Chu
Nuclei can have absolute or likely maximum literals; electrons can only have absolute maximum literals. Absolute maximum literals in a nucleus are all resolved upon at the same time, in one single rough resolution step.
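The distinction in Definition 1 can be checked syntactically for the examples above. The sketch below is our own, not the paper's procedure; it uses the fact that, with symbol-count sizes, |Lσ| ≥ |Mσ| holds for every substitution σ exactly when |L| ≥ |M| and every variable occurs at least as often in L as in M:

```python
# Sketch (ours) of the absolute-maximum-literal test from Definition 1.
# A literal is a nested tuple such as ('d', 'v', ('k', 'w', 'u')); the
# single letters u, v, w, x, y, z act as variables; size is symbol count.

from collections import Counter

VARS = set("uvwxyz")

def size(t):
    """Number of symbols in a term or literal (sign not counted extra)."""
    if isinstance(t, tuple):
        return 1 + sum(size(a) for a in t[1:])
    return 1

def var_occurrences(t):
    """Multiset of variable occurrences in a term or literal."""
    if isinstance(t, tuple):
        c = Counter()
        for a in t[1:]:
            c += var_occurrences(a)
        return c
    return Counter([t]) if t in VARS else Counter()

def dominates(l, m):
    """True iff |l sigma| >= |m sigma| for every substitution sigma."""
    ol, om = var_occurrences(l), var_occurrences(m)
    return size(l) >= size(m) and all(ol[v] >= om[v] for v in om)

def absolute_maximum_literals(clause):
    return [l for l in clause if all(dominates(l, m) for m in clause)]

# d(v, k(w, u)) dominates both smaller literals, so it is the only
# absolute maximum literal of the first example clause:
absolute_maximum_literals(
    [('d', 'v', ('k', 'w', 'u')), ('-d', 'u', 'v'), ('-d', 'v', 'w')])
# In the transitivity clause, no literal dominates the other two under
# every substitution, so there are no absolute maximum literals:
absolute_maximum_literals(
    [('d', 'u', 'w'), ('-d', 'u', 'v'), ('-d', 'v', 'w')])
```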
This is based on the observation that absolute maximum literals should eventually all be removed, since they are always the largest in any instances, and intermediate resolvents might not be saved due to non-negative growths (to be discussed in the next section). Resolving on likely maximum literals in a clause is difficult to handle, because it might not delete large literals and, at the same time, could generate too many useless resolvents. For example, the transitivity axiom contains three likely maximum literals and often generates too many clauses during a resolution proof. Their role is obscure in rough resolution. We have used two strategies to do rough resolution on likely maximum literals. The first is to apply the following heuristic to simultaneously resolve on more than one likely maximum literal: if there are two likely maximum literals in a clause, we resolve upon them one at a time; if there are more than two likely maximum literals, we also resolve upon each possible pair of them at the same time. This is based on the observation that usually only a few likely maximum literals will become the largest in the ground instances. Another important strategy is to require that any likely maximum literal resolved upon should still be a likely (or absolute) maximum literal after the proper substitution is applied. Otherwise the resolvent is discarded, because it introduces larger literals from those not resolved upon. For example, consider the clause C = { ¬p(x, y), ¬p(y, z), p(x, z) }, where all literals are likely maximum literals. Suppose the first two literals are resolved with p(x, a) and p(a, z) respectively, with substitution θ = { y ← a }. Such a resolution is not allowed, since neither p(x, a) nor p(a, z) is a maximum literal in Cθ, and the literal not resolved upon, p(x, z), becomes an absolute maximum literal in Cθ.

Retaining Resolvents

Rough resolution only resolves upon maximum literals.
Since usually there are not many absolute maximum literals in a clause, the number of resolvents is greatly reduced. However, there are still too many resolvents if no further restriction is applied. In this section we discuss one strategy that we use to retain resolvents more selectively. From the rough resolvents, we prefer those smaller than the parent clauses. An ordering on clauses is needed here, and we use an ordering on the multisets of all literal sizes in a clause.

Definition 3. |C| is the multiset of sizes of all literals in C. The difference |C1| - |C2| is c1 - c2, where c1 and c2 are the largest elements (0 if the multiset is empty) in |C1| and |C2| respectively, after common elements are deleted. For example, for the clause C = { p(x), p(y), ¬q(x, y) }, |C| = {3, 2, 2}; {3, 2, 2} - {3, 2} = {2} - {} = 2, and {4} - {5, 2} = -1.

Definition 4. Suppose in a rough resolution step, resolvent R is obtained from clauses C1, ..., Cn. The growth of R is |R| - maximum of (|C1|, |C2|, ..., |Cn|).

This multiset idea is used only to compute the growth of a rough resolvent. In other situations the size of a clause is still the largest literal size. Growth is a useful measurement for retaining resolvents from resolving upon likely maximum literals. Intuitively, growth indicates the size growth of a rough resolvent relative to the parent clauses before substitution is applied. If growth is negative, the resolvent is smaller than the largest parent clause, and "progress" has been made to reduce the number of large literals. On the other hand, if the growth is zero or positive, the resolvent is of the same length as or larger than the largest parent clause. There is no progress from such a rough resolution step, and it is not useful to keep the resolvent. The algorithm of rough resolution is described in Fig. 1. Resolvents of non-negative growth are retained only when there are no resolvents with negative growth. The procedure repeats until a proof is found.
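Definitions 3 and 4 are mechanical enough to state as code. The sketch below is our own rendering, with clause size multisets given directly as lists of integers:

```python
# Sketch (ours, not the prover's code) of Definitions 3 and 4: the
# multiset difference |C1| - |C2| and the growth of a rough resolvent.

from collections import Counter
from functools import cmp_to_key

def multiset_difference(m1, m2):
    """|C1| - |C2|: delete common elements, then subtract the largest
    remaining elements (0 for an empty remainder)."""
    r1 = Counter(m1) - Counter(m2)
    r2 = Counter(m2) - Counter(m1)
    return max(r1.elements(), default=0) - max(r2.elements(), default=0)

def growth(resolvent, parents):
    """Growth of R: |R| minus the largest parent size multiset, where
    "largest" is taken under the same multiset ordering."""
    largest = max(parents, key=cmp_to_key(multiset_difference))
    return multiset_difference(resolvent, largest)

multiset_difference([3, 2, 2], [3, 2])   # -> 2, as in the example above
multiset_difference([4], [5, 2])         # -> -1
growth([2, 2], [[3, 2, 2], [2, 2]])      # -> -3 (negative: resolvent kept)
```

A negative result from `growth` corresponds to a resolvent that would be saved permanently by the retention strategy.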
The resolvents are used in semantic hyper-linking in a limited way, because many resolvents could be generated. A reasonable time bound is set on the use of rough resolvents in semantic hyper-linking. The collaboration of rough resolution and semantic hyper-linking is not explicitly shown in Fig. 1. Basically, rough resolution executes for some amount of time and then stops, and new resolvents are used in later semantic hyper-linking; when executed again, rough resolution picks up from where it left off and continues. The rough resolvents with negative growth are always kept; among the rough resolvents with non-negative growth, only the smallest (in terms of the largest literal in the resolvent) are saved, if necessary. This allows the resolvents with non-negative growth to be used in a controlled manner. With restrictions on how rough resolution is applied (by resolving upon maximum literals simultaneously) and how resolvents are retained (based on growth), far fewer resolvents are retained than in other similar resolution strategies. And we have found the algorithm practical and useful when used with semantic hyper-linking.
Algorithm Rough Resolution
begin
  loop
    for each clause C with absolute maximum literals
      Obtain a new rough resolvent R using C as nucleus
      if R has a negative growth then save R permanently as a new clause
      else save R temporarily
  until there are no new resolvents with growth < 0
  for each clause C having likely maximum literals
    loop
      Obtain a new rough resolvent R using C as nucleus
      if R has negative growth then save R permanently as a new clause
      else save R temporarily
    until no new rough resolvents can be generated
  if in the last loop no rough resolvent was generated with negative growth then
    for each smallest temporarily saved rough resolvent R
      save R permanently as a new clause
end

Figure 1: Algorithm: Rough Resolution

An Example

Bledsoe gave the LIM+ problems in [Bledsoe, 1990] as challenge problems for automated theorem provers. Because of the large search space they can generate, LIM+ problems are difficult for most theorem provers. However, they are not difficult for STR+VE [Hines, 1992], which has built-in inequality inference rules for dense linear ordering. In this section we will look at the proof of LIM3 using rough resolution. Intermediate results from semantic hyper-linking are omitted. As mentioned in the introduction, the correct ground instance of the goal clause has to be generated to obtain the proof. However, that ground instance contains a literal so large that semantic hyper-linking cannot generate it early enough in the search for the proof. As a result, the prover got lost even before correct goal instances were generated. Rough resolution helps to delete large literals by resolving upon them; smaller ground literals are generated by semantic hyper-linking. It is interesting to observe that the proof presented here does not need the term min(d1(ha(e0)), d2(ha(e0))), which is essential to human proofs.
We list only the clauses used in the proof; literals in boxes are maximum literals, and only clauses 14 and 15 (from the same transitivity axiom) have likely maximum literals. Each rough resolution step is denoted by a list of clause numbers, with the nucleus in a box. For example, (14, 3, 17) denotes a rough resolution step using clause 14 as nucleus and clauses 3 and 17 as electrons.

The proof:

,-Zt(aG), ha(Z)), -WW'), WW )
24: a,21) (I -Zt(ab(pZ(f (xs(D)), w(f W)), h4W) I --It(aW(g(x@)), ns(s(a)))), h+O)) Jt(D, 0)) s(D), w(a))), dl(h+W)]p --It(ab(p@(D), w(a))), d2(ha(eO))) y Zt(ha(eO), 4, WA 0))
97: (j-izJ,29,10) {I-Zt(ab(~Z(xs(D), rig(a))), d2(ha(eO))) 1, -Zt(D, dl(ha(eO))), Zt(ha(eO), o), Zt(D, o)}
), w(a))), dl(ha(eO))) 1, -Zt(D, da(ha(eO))), Zt(ha(eO), o), Zt(D, 0))
139: (j-iEJ,97) { 1 -Zt(d2(ha(eO)), dl(ha(eO))) 1, Zt(ha(eO), o), Zt(da(ha(eO)), o) }
(lt(ha(e0), o) and lt(d2(ha(e0)), o) are then unit deleted)
y;, Zt(ha(eO), o), Zt(dl(ha(eO)), o) }
(lt(ha(e0), o) and lt(d1(ha(e0)), o) are then unit deleted)
(13, 139, 141) is obtained and a proof is found.

Unit { ¬lt(ha(e0), o) } is generated by model filtering [Chu and Plaisted, 1993] in semantic hyper-linking; it can also be generated by UR resolution from clauses 4 and 7. It is then used with clause 5 to generate { ¬lt(d1(ha(e0)), o) }, and with clause 6 to generate { ¬lt(d2(ha(e0)), o) }.

4: { ¬lt(e0, o) }
5: { lt(X, o), ¬lt(d1(X), o) }
6: { lt(X, o), ¬lt(d2(X), o) }
7: { lt(X, o), ¬lt(ha(X), o) }

None of the above three units can be generated by rough resolution. This shows the collaboration of rough resolution with semantic hyper-linking in finding the proof.

We have implemented a prover in Prolog. Experimental results show that rough resolution indeed improves the prover and often helps to find proofs faster. Table 1 lists some results (in seconds) showing that rough resolution is in general a useful technique when used with semantic hyper-linking.
AMY is the attaining maximum (or minimum) value theorem in analysis [Bledsoe, 1983]; LIM1-3 are the first three LIM+ problems proposed by Bledsoe [Bledsoe, 1990]; IMV is the intermediate value theorem in analysis [Bledsoe, 1983]; I1, IP1, P1 and S1 are four problems in implicational propositional calculus [Lukasiewicz, 1948; Pfenning, 1988]; 1~37 is the theorem that, for all x in a ring R, x·0 = 0, where 0 is the additive identity; SAM's lemma is a lemma presented in [Guard et al., 1969]; wos15 proves the closure property of subgroups; wos19 is the theorem that subgroups of index 2 are normal; wos20 is a variant of wos19; wos21 is a variant of 1~37; and the ExQ problems (including wos31) are three examples from quantification theory [Wang, 1965].

LIM1-3 are considered simpler in [Bledsoe, 1990]. The prover Str∀ve, with built-in inference rules for dense linear ordering, can easily prove all LIM+ problems. However, few other general-purpose theorem provers can prove LIM1-3 (especially LIM3). METEOR [Astrachan and Loveland, 1991] can prove all three by using special guidance. OTTER [McCune, 1990] and CLIN [Lee and Plaisted, 1992] could not prove any LIM+ problem.

Conclusions

Rough resolution is incomplete, but it is useful when used with other complete methods that prefer small clauses. It helps to focus on removing the "large" parts of proofs.
In particular, we have found that semantic hyper-linking and rough resolution conceptually work well together: semantic hyper-linking solves the "small" and "non-Horn" part of the proof; UR resolution [Chu and Plaisted, 1993] solves the "Horn" part of the proof; and rough resolution solves the "large" part of the proof.

Problem        with rough resolution   without rough resolution
AMY            2127.3                  -
LIM1           83.0                    -
LIM2           63.3                    -
LIM3           534.8                   -
IMV            374.5                   49.8
SAM's Lemma    146.5                   95.5
wos15          282.7                   -

A "-" indicates that the run was aborted after either running over 20,000 seconds or using over 30 megabytes of memory.

Table 1: Proof results using rough resolution with semantic hyper-linking

Rough resolution is powerful enough to help semantic hyper-linking prove some hard theorems which cannot be obtained otherwise. However, we have observed that likely maximum literals are the source of rapid search space expansion in some hard theorems like LIM4 and LIM5. Future research includes further investigation of the role of likely maximum literals and avoiding the generation of unnecessary resolvents from resolving upon likely maximum literals. One possible direction is applying the rough resolution idea to paramodulation, since equality axioms often contain likely maximum literals. Also, focusing on relevant resolvents should be another important issue to be addressed.

References

Astrachan, O.L. and Loveland, D.W. 1991. METEORs: High performance theorem provers using model elimination. In Boyer, R.S., editor 1991, Automated Reasoning: Essays in Honor of Woody Bledsoe. Kluwer Academic Publishers.

Ballantyne, A. M. and Bledsoe, W. W. 1982. On generating and using examples in proof discovery. Machine Intelligence 10:3-39.

Bledsoe, W. W. 1983. Using examples to generate instantiations of set variables. In Proc. of the 8th IJCAI, Karlsruhe, FRG. 892-901.

Bledsoe, W. W. 1990. Challenge problems in elementary calculus. J. Automated Reasoning 6:341-359.
Chu, Heng and Plaisted, David A. 1992. Semantically guided first order theorem proving using hyper-linking. Manuscript.

Chu, Heng and Plaisted, David A. 1993. Model finding strategies in semantically guided instance-based theorem proving. In Komorowski, Jan and Raś, Zbigniew W., editors 1993, Proceedings of the 7th International Symposium on Methodologies for Intelligent Systems. To appear.

Guard, J.; Oglesby, F.; Bennett, J.; and Settle, L. 1969. Semi-automated mathematics. J. ACM 16(1):49-62.

Hines, L. M. 1992. The central variable strategy of Str∀ve. In Kapur, D., editor 1992, Proc. of CADE-11, Saratoga Springs, NY. 35-49.

Lee, Shie-Jue and Plaisted, David A. 1992. Eliminating duplication with the hyper-linking strategy. J. Automated Reasoning 9:25-42.

Lukasiewicz, Jan 1948. The shortest axiom of the implicational calculus of propositions. In Proceedings of the Royal Irish Academy. 25-33.

McCune, William W. 1990. OTTER 2.0 Users Guide. Argonne National Laboratory, Argonne, Illinois.

Pfenning, Frank 1988. Single axioms in the implicational propositional calculus. In Lusk, E. and Overbeek, R., editors 1988, Proc. of CADE-9, Argonne, IL. 710-713.

Plaisted, David A.; Alexander, Geoffrey D.; Chu, Heng; and Lee, Shie-Jue 1992. Conditional term rewriting and first-order theorem proving. In Proceedings of the Third International Workshop on Conditional Term-Rewriting Systems, Pont-à-Mousson, France. Invited Talk.

Robinson, J. 1965. A machine-oriented logic based on the resolution principle. J. ACM 12:23-41.

Wang, H. 1965. Formalization and automatic theorem-proving. In Proc. of IFIP Congress 65, Washington, D.C. 51-58.
Qualitatively Describing Objects Using Spatial Prepositions

Alicia Abella    John R. Kender
Department of Computer Science
Columbia University
New York, NY 10027

Abstract

The objective of this paper is to present a framework for a system that describes objects in a qualitative fashion. A subset of spatial prepositions is chosen, and an appropriate quantification is applied to each of them that captures their inherent qualitative properties. The quantifications use such object attributes as area, centers, and elongation properties. The familiar zeroth, first, and second order moments are used to characterize these attributes. This paper details how and why the particular quantifications were chosen. Since spatial prepositions are by their nature rather vague and dependent on context, a technique for fuzzifying the definition of a spatial preposition is explained. Finally, an example task is chosen to illustrate the appropriateness of the quantification techniques.

Introduction

The work presented in this paper is motivated by an interest in how spatial prepositions may be used to describe space and, more interestingly, the spatial relationship among the objects that occupy that space. This work is not concerned with the natural language aspect of spatial prepositions. Given a particular environment and a particular task, where the task and environment may change, we wish for a framework that describes the elements in the environment. It is this framework that is of concern in this paper.

It is known that language meaning is very much dependent on context. An example of a context-dependent use of the spatial preposition next, taken from [Landau and Jackendoff, accepted for publication], is the bicycle next to the house. We would normally not say the house next to the bicycle. This is the case because the house is larger in size and as such it serves as an anchor for those objects around it.
The house in this example serves as a reference object, or in an environmental context, as a landmark. In the system presented in this paper either description is acceptable, since the only concern is the spatial arrangement of the objects, irrespective of the size or the purpose of either of the two objects.

The treatment of objects in our chosen environment is a binary one. There is no reference object, or landmark, because we wish to avoid choosing a reference object that would require the use of physical attributes such as color, size, or shape, and to focus solely on two objects' spatial relationship. If we think about the use of a preposition like near, we realize that a particular shape is not required for its proper use. Landau and Jackendoff [Landau and Jackendoff, accepted for publication] have categorized spatial prepositions into those that describe volumes, surfaces, points, lines, and axial structure. They have pointed out that an object can be regarded as a "lump" or "blob" as far as most of the commonly used spatial prepositions are concerned. For example, the preposition in or inside can regard an object as a blob as long as the blob has the capacity to surround. Likewise, near and at only require that the blob have some spatial extent. Along requires that an object be fairly linear and horizontal with respect to another.

The work presented in [Herskovits, 1986] covers the topic of spatial prepositions fairly extensively from a natural language perspective. The author only suggests the possibility of constructing a computational model for expressing spatial prepositions. The intent here is to demonstrate that a computational model can be constructed and that it indeed captures the vital properties sufficient for a succinct use of the chosen prepositions.
We can encode the spatial prepositions fairly concisely because we are treating objects as "blobs" and because most of the properties characterized by these prepositions can be encoded using geometric properties such as alignment and distance. Other related works can be found in [Lindkvist, 1976; Talmy, 1983]. The following sections provide the details of the encoding we have chosen and demonstrate it through the use of an example.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Notations and Definitions

The prepositions we have encoded are near, far, inside, above, below, aligned, and next. We have defined a preposition as a predicate that maps k objects to true (T) or false (F); true if the k objects meet the requirements imposed by the preposition and false otherwise:

p : O^k → {T, F}

where p is a preposition and O^k is a k-tuple of objects. In this paper we consider k = 2. Nevertheless, prepositions that involve three objects, like between, can also be represented using a similar formalism.

Now that we have defined a preposition, we need to define an object. Formally, each object is represented by a six-element vector that depends on the object's area A, center (xc, yc), and inertia tensor

[ Ixx  Ixy ]
[ Ixy  Iyy ]

It is important to scale the elements in this vector so that they have consistent units, in this case units of length, because we will use this vector in the fuzzification procedure described in section 4. The kth object is thus represented by a scaled vector ρ^k, and a pair of objects is represented by a 12-component vector

μ = (ρ¹, ρ²) ∈ ℝ¹²

It is this scaled vector that we will be using in our subsequent calculations.

The parameterization of objects presented above leads to the concept of a bounding box. A bounding box encloses the object using certain criteria.
There are various ways to compute a bounding box for an object; one would be to find the maximum and minimum x and y values belonging to the object. The one we have chosen is defined through the values of ξx and ξy, which offer a measure of how much an object stretches in the x and y directions. See the Appendix for the derivation.

Two objects define a point in 12-dimensional space. A preposition p can be thought of as a set of points Uᵖ ⊆ ℝ¹² such that Uᵖ = {μ : p(μ) = T}. The volume in this 12D space may be able to reveal some of the inherent properties associated with prepositions. In other words, examination of the space occupied by the various sets Uᵖ may tell us something about the spatial prepositions. Vacancies in this 12D space may reveal why we do not have a word to describe certain spatial relationships among objects. The intersections and distances of the volumes occupied by various spatial prepositions may reveal correlations between prepositions.

We say that objects O¹ and O² are in preposition p if (ρ¹, ρ²) ∈ Uᵖ. This "ideal" set is made up of pairs of object vectors that satisfy the constraints imposed by the preposition p. As we well know, prepositions are inherently vague in their descriptions, and their interpretation may vary from person to person. Because of this, it is important to add some fuzzifying agent to our ideal set. The fuzzifying technique is as defined through fuzzy set theory [Klir and Folger, 1988]. The theory of fuzzy sets is used to represent uncertainty, information, and complexity. The theory of classical sets¹ represents certainty. A classical set divides objects in the world into two categories: those that certainly belong to a set and those that certainly do not. A fuzzy set, on the other hand, divides the world much more loosely, by introducing vagueness into the categorization process.
This means that members of a set belong to that set to a greater or lesser degree than other members of the set. Mathematically, members of the set are assigned a membership grade value that indicates to what degree they belong to the set. This membership grade is usually a real number in the closed interval between 0 and 1. Therefore a member with a membership grade closer to 1 belongs to the set to a greater degree than a member with a lower membership grade. Because of these properties, fuzzy set theory finds application in fields that study how we assimilate information, recognize patterns [Abella, 1992], and simplify complex tasks. In our notation the fuzzified ideal set is defined through a membership function f_Uᵖ : ℝ¹² → [0, 1]. We also define a threshold value Θᵖ that depends on how much vagueness we allow before we decide that two objects are no longer describable with the given preposition:

f_Uᵖ(μ) ≥ Θᵖ

Computational Model of Spatial Prepositions

The quantification of prepositions entails representing objects through certain physical properties that can then serve as a basis for expressing prepositions. The physical properties we have chosen include object area, centers of mass, and elongation properties. These properties are calculated through the use of the zeroth, first, and second order moments. The basis for this choice of attributes is simplicity and familiarity. What ensues is a brief description of the various prepositions we have chosen to illustrate. Each preposition is defined through a set of inequalities. This results in sets Uᵖ having nonzero measure (i.e. full dimensionality) in ℝ¹², which is necessary for the fuzzification procedure described in section 4.

¹Referred to as "crisp" sets in fuzzy set theory.

NEAR

We have defined near so that the objects' bounding boxes
Qualitative Reasoning 537

Figure 1: Two objects that are near each other

Figure 2: Two objects that are far from each other

Figure 3: Definition of relevant angles for aligned

have a non-empty intersection (see figure 1). Mathematically this is:

ξx¹ + ξx² > |xc¹ − xc²|  and  ξy¹ + ξy² > |yc¹ − yc²|

FAR

Far is not the complement of near, as one may initially suspect. We may be faced with a case where an object is neither near nor far from another object; rather, it is somewhat near or somewhat far. This notion of somewhat will be explained more fully when we introduce the concept of fuzzifying our "ideal" set. For now it suffices to say that far is defined so that the distance between the two bounding boxes in either the x extent or the y extent is larger than the maximum length of the two objects in that same x or y extent (see figure 2). Mathematically,

|xc¹ − xc²| − (ξx¹ + ξx²) > 2 max(ξx¹, ξx²)  or  |yc¹ − yc²| − (ξy¹ + ξy²) > 2 max(ξy¹, ξy²)

INSIDE

Inside requires that the bounding box of one object be completely embedded within the bounding box of the other. Formally,

ξx¹ − ξx² > |xc¹ − xc²|  and  ξy¹ − ξy² > |yc¹ − yc²|

ABOVE, BELOW

Above requires that the projections of the bounding boxes on the x axis intersect and that the projections of the bounding boxes on the y axis do not intersect. The mathematical relationship is

ξx¹ + ξx² > |xc¹ − xc²|  and  ξy¹ + ξy² < yc¹ − yc²

Note that above is non-commutative. We define below similarly. As with near and far, above and below are mutually exclusive prepositions. However, not-above does not strictly imply below.

ALIGNED

The alignment² property is angular in nature, so its quantification involves inequalities between angles rather than lengths, as the previous prepositions had.
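The inequalities above translate directly into code. The sketch below is an illustration of those definitions, not the authors' system; the hypothetical Obj record holds a center (xc, yc) and extents (ξx, ξy), with superscripts replaced by the two arguments.

```python
# Sketch of the preposition predicates defined above. Obj is a hypothetical
# record holding an object's center (xc, yc) and extents (xi_x, xi_y).
from dataclasses import dataclass

@dataclass
class Obj:
    xc: float
    yc: float
    xi_x: float   # half-extent in x
    xi_y: float   # half-extent in y

def near(a, b):
    # bounding boxes intersect in both x and y
    return (a.xi_x + b.xi_x > abs(a.xc - b.xc) and
            a.xi_y + b.xi_y > abs(a.yc - b.yc))

def far(a, b):
    # the gap between the boxes in x (or y) exceeds the larger object's
    # length in that same direction
    gap_x = abs(a.xc - b.xc) - (a.xi_x + b.xi_x)
    gap_y = abs(a.yc - b.yc) - (a.xi_y + b.xi_y)
    return (gap_x > 2 * max(a.xi_x, b.xi_x) or
            gap_y > 2 * max(a.xi_y, b.xi_y))

def inside(a, b):
    # a's bounding box is completely embedded in b's
    return (b.xi_x - a.xi_x > abs(a.xc - b.xc) and
            b.xi_y - a.xi_y > abs(a.yc - b.yc))

def above(a, b):
    # x projections intersect, y projections do not, and a is higher
    return (a.xi_x + b.xi_x > abs(a.xc - b.xc) and
            a.xi_y + b.xi_y < a.yc - b.yc)
```

Note how far and near are not complements: two objects with a small positive gap satisfy neither, which is exactly the "somewhat near or somewhat far" region the fuzzification section addresses.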
For this purpose we define a different type of bounding box, centered at the object's center of mass and oriented along the object's principal inertia axes, with dimensions proportional to the object's maximum and minimum moments of inertia. θ, θmin and θmax are as shown in figure 3. With this in mind, the preposition aligned is defined as:

max(θmin¹, θmin²) < θⁱ < min(θmax¹, θmax²),  i = 1, 2

NEXT

We have defined next as a combination of the prepositions near and aligned. Therefore the definition for next is:

U^next = U^near ∩ U^aligned

The preposition next is an example of a spatial preposition that is a combination of more elementary prepositions. This hints at the possibility of a natural hierarchy of spatial prepositions. It also shows evidence of the possible partitioning of the 12D space mentioned previously.

²Although not a preposition from a language perspective, we have adopted it as a spatial preposition.

The Fuzzification of Spatial Prepositions

This section describes why and how we fuzzify spatial prepositions. We need to fuzzify spatial prepositions because they are vague by their very nature; they depend on context and on an individual's perception of them with respect to an environment. For these reasons we need to allow some leeway when deciding whether two objects are related through a given preposition. There is a lot of freedom in how we can fuzzify spatial prepositions, or equivalently, the "ideal" set Uᵖ. The idea we have adopted is to define the membership function f_Uᵖ(μ), where μ ∈ ℝ¹², as a function of a distance d between μ and Uᵖ. Note that d(μ, Uᵖ) = 0 for μ ∈ Uᵖ. The distance d tells us by how much the defining preposition inequalities are not satisfied. Thus,

f_Uᵖ(μ) = 1 for μ ∈ Uᵖ
f_Uᵖ(μ) → 0 as d(μ, Uᵖ) → ∞

Uᵖ is a multi-dimensional set defined by complex inequalities, for which computing d may be very burdensome.
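Before turning to the Monte-Carlo estimate, note that the set intersection defining next corresponds to simple predicate conjunction. The sketch below illustrates only that composition; the two stand-in predicates are placeholders of our own, not the paper's actual near and aligned definitions.

```python
# Minimal sketch: the set intersection U^next = U^near ∩ U^aligned
# amounts to predicate conjunction. The two stand-in predicates below
# are illustrative placeholders, not the paper's actual definitions.

def make_next(near_pred, aligned_pred):
    """Compose the predicate for 'next' from its two constituents."""
    return lambda a, b: near_pred(a, b) and aligned_pred(a, b)

# objects abstracted to (position, orientation) pairs for this sketch
near_stub = lambda a, b: abs(a[0] - b[0]) < 2.0
aligned_stub = lambda a, b: abs(a[1] - b[1]) < 0.1

next_to = make_next(near_stub, aligned_stub)
```

Any further compound preposition in the hinted hierarchy could be built the same way, by intersecting (conjoining) more elementary ideal sets.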
For this reason we resort to a Monte-Carlo simulation with a set of random points around μ that have given statistical properties. The experiments we have conducted use normally distributed random points with mean μ and covariance matrix diag(σ², ..., σ²). The exact form of f_Uᵖ used is

f_Uᵖ(μ) = 1 for μ ∈ Uᵖ, and f_Uᵖ(μ) = 2N′/N otherwise,

where N is the total number of random points in the Monte-Carlo simulation and N′ is the number of points μ′ ∈ Uᵖ. Note that this formulation ensures that f_Uᵖ will have a value close to 1 for μ very close to the boundary of Uᵖ.

The following section details some experiments that use this fuzzification technique and put into effect the inequalities that define the given spatial prepositions.

Qualitative Description Experiments

We will use the image shown in figure 4 to illustrate several uses of the prepositions. Each object has been numbered to ease reference. The image is read as a grey-scale pixel image. It is then thresholded to produce a binary image, and objects are located using a sequential labelling algorithm [Horn, 1989]. Once the objects in the scene have been found, the attributes necessary for construction of the 12-dimensional vector are computed (e.g. the area of an object is the sum of all the pixels belonging to the object).

Figure 4: The experimental image

Currently the system accepts a spatial preposition and displays all those objects that satisfy the preposition inequalities. The system also accepts as input two objects along with a preposition, and it outputs how well those two objects meet the given preposition (the value of f_Uᵖ for a given σ). All intuitively obvious relations between objects are discovered by the system, e.g. objects 1 and 3 are next to each other, etc. An interesting case, and one that demonstrates the effects of fuzzification, is that of supplying objects 2 and 6 along with the preposition aligned. With no fuzzification the system finds that 2 and 6 are not aligned.
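The Monte-Carlo estimate just described can be sketched as follows. The Gaussian sampling and the factor of 2 follow the description above; the function names and the capping at 1 are our own assumptions, and the predicate passed in stands for any of the preposition inequalities.

```python
# Sketch of the Monte-Carlo fuzzification described above: sample normally
# distributed points around mu (covariance sigma^2 * I) and estimate the
# membership grade from the fraction that lands in the ideal set.
import random

def membership(mu, in_ideal_set, sigma, n=1000, rng=None):
    """mu: point (tuple of floats); in_ideal_set: predicate on such points."""
    if in_ideal_set(mu):
        return 1.0
    rng = rng or random.Random(0)
    hits = sum(
        in_ideal_set(tuple(x + rng.gauss(0.0, sigma) for x in mu))
        for _ in range(n)
    )
    # factor 2 so that points just outside the boundary, where about half
    # the samples fall inside, still score close to 1
    return min(1.0, 2 * hits / n)
```

For a point just outside the boundary roughly half the perturbed samples fall inside Uᵖ, so 2N′/N is near 1, matching the remark above; far from the set, almost no samples hit and the grade decays toward 0.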
However, if we allow a certain amount of fuzzification, with say σ = 0.03, the value of f_U^aligned is 0.8. This value indicates that they may be sufficiently aligned to be regarded as such (which we actually see in the image!), depending on how much leeway we wish to allow. The dependency of f_U^aligned on σ is shown in figure 5. From this graph we see that the value of the membership function deteriorates significantly for large values of σ. This simply means that the amount of induced uncertainty is so large that the objects cease to possess their original features (such as orientation in this case). This also indicates what the maximal acceptable value for σ should be; in this case, σ < 0.1.

Another interesting case is that of supplying objects 2 and 6 along with the preposition near or far. Neither satisfies the inequalities precisely. However, if we again allow for fuzzification, we get a most interesting result, as shown in figure 6.

Figure 5: The dependency of f_U^aligned(2, 6) as a function of σ

Figure 6: The dependency of f_U^near(2, 6) and f_U^far(2, 6) as a function of σ

We observe that although we cannot say for certain that objects 2 and 6 are either near or far, we can say that they are somewhat near or somewhat far. How we decide which of the two to use can be seen in figure 6. If we examine the slopes of the two curves, we see that for small values of σ the slope for far is steeper than that for near. Therefore it seems more appropriate to say that 2 is somewhat far from 6, as opposed to 2 being somewhat near to 6.

Conclusion

The intent of this paper was to establish a computational model for characterizing spatial prepositions for use in describing objects. A quantification was established and demonstrated through the use of an example. A framework to deal with the inherent vagueness of prepositions was also introduced with the use of a fuzzification technique.
An extension of this work would be one in which a user could conduct a dialogue with the system, which would be capable of understanding as well as generating scene descriptions. In other words, we may wish to describe a particular object with as few descriptions as possible through feedback from the system. The goal would be to home in on the object we are truly referring to by repeatedly supplying additional prepositions to those objects that were singled out after previous inquiries. An experiment using this technique may reveal that people naturally describe spatial arrangements in a series of descriptions, rather than once and for all. It may also demonstrate inadequacies in the vocabulary or complexity of a scene. We may also discover that certain environments require that we adopt prepositions that do not exist in the English language for describing a particular sort of spatial arrangement.

References

Abella, A. 1992. Extracting geometric shapes from a set of points. In Image Understanding Workshop.

Herskovits, A. 1986. Language and spatial cognition: An interdisciplinary study of the prepositions in English. Cambridge University Press.

Horn, Berthold K.P. 1989. Robot Vision. The MIT Press.

Klir, G. J. and Folger, T. A. 1988. Fuzzy Sets, Uncertainty, and Information. Prentice Hall.

Landau, B. and Jackendoff, R. Accepted for publication. "What" and "Where" in spatial language and spatial cognition. BBS.

Lindkvist, K. 1976. Comprehensive study of conceptions of locality in which English prepositions occur. Almqvist & Wiksell International.

Talmy, L. 1983. How language structures space. In Spatial orientation: Theory, research, and application. Plenum Press.

Appendix: Definition of ξx and ξy

We have used the following two equations to define how much an object stretches in the x and y directions:

ξx = 2 max( √(Imax/A) |cos θ|, √(Imin/A) |sin θ| )
ξy = 2 max( √(Imax/A) |sin θ|, √(Imin/A) |cos θ| )

The following is the derivation of these formulas.
The maximal moment of inertia is given by the formula

Imax = ∫∫_A u² du dv = A (ξu / k)²

where u and v are the axes of maximal and minimal moment of inertia respectively, A is the object's area, and ξu is an elongation parameter that conveys how much the object "stretches" along the axis u. The constant k is chosen so that for a circle with radius r we have ξu = r. A simple calculation gives k = 2, so that ξu = 2√(Imax/A), and the formulas for ξx and ξy are obtained by projecting ξu and ξv onto the axes x and y.
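As a check of the appendix formulas, the extents can be computed from a blob's pixel coordinates. This is an illustrative sketch under the conventions derived above (ξu = 2√(Imax/A), projection onto x and y), not the authors' code; the eigenvalue and orientation computations are the standard ones for the 2x2 second-moment matrix.

```python
# Sketch: compute the extents xi_x, xi_y of a blob from its moments,
# following the appendix (xi_u = 2*sqrt(Imax/A), then projection onto x, y).
import math

def extents(pixels):
    """pixels: list of (x, y) coordinates belonging to a binary blob."""
    A = len(pixels)
    xc = sum(x for x, _ in pixels) / A
    yc = sum(y for _, y in pixels) / A
    # central second moments
    ixx = sum((x - xc) ** 2 for x, _ in pixels)
    iyy = sum((y - yc) ** 2 for _, y in pixels)
    ixy = sum((x - xc) * (y - yc) for x, y in pixels)
    # orientation of the principal (elongation) axis u
    theta = 0.5 * math.atan2(2 * ixy, ixx - iyy)
    # eigenvalues of the second-moment matrix: Imax along u, Imin along v
    common = (ixx + iyy) / 2
    delta = math.hypot((ixx - iyy) / 2, ixy)
    imax, imin = common + delta, common - delta
    xi_u, xi_v = 2 * math.sqrt(imax / A), 2 * math.sqrt(imin / A)
    # project xi_u and xi_v onto the x and y axes
    xi_x = max(xi_u * abs(math.cos(theta)), xi_v * abs(math.sin(theta)))
    xi_y = max(xi_u * abs(math.sin(theta)), xi_v * abs(math.cos(theta)))
    return xi_x, xi_y
```

For a horizontal line of pixels, θ comes out as 0 and ξy as 0, so the bounding box correctly degenerates in y while stretching in x.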
L.I.P.N., Université Paris Nord
Avenue Jean-Baptiste Clément
93430 Villetaneuse, France
e-mail: dague@lipn.univ-paris13.fr

Abstract

In [Dague, 1993], a formal system ROM(K) involving four relations has been defined to reason with relative orders of magnitude. In this paper, the problems of introducing quantitative information and of ensuring the validity of the results in ℝ are tackled. Corresponding overlapping relations are defined in ℝ, and all rules of ROM(K) are transposed to ℝ. Unlike other proposed systems, the resulting system ROM(ℝ) ensures a sound calculus in ℝ, while keeping the ability to provide commonsense explanations of the results. If needed, these results can be refined by using additional and complementary techniques: k-bound-consistency, which generalizes interval propagation; symbolic computation, which considerably improves the results by delaying numeric evaluation; symbolic algebra calculus of the roots of partial derivatives, which allows the exact extrema to be obtained; and transformation of rational functions, when possible, so that each variable occurs only once, which allows interval propagation to give the exact results. ROM(ℝ), possibly supplemented by these various techniques, constitutes a rich, powerful and flexible tool for performing mixed qualitative and numeric reasoning, essential for engineering tasks.

The first attempt to formalize relative order of magnitude reasoning, with binary relations invariant by homothety, appeared with the formal system FOG [Raiman, 1986] (see also [Raiman, 1991] for a more general set-based framework), based on 3 basic relations and described by 32 inference rules, which was used successfully in the DEDALE system of analog circuit diagnosis [Dague et al., 1987]. Nevertheless, FOG has several limitations which prevent it from really being used in engineering.
A first difficulty arises when wanting to express a gradual change from one order of magnitude to another: only a steep change is possible, due to the non-overlapping of the orders of magnitude. This can be solved, as described in [Dague, 1993], by introducing a fourth relation, "to be distant from", which allows overlapping relations to be defined and used. This has given a formal system ROM(K) with 15 axioms, the consistency of which was proved by finding models in nonstandard analysis. But two crucial problems remain: the difficulty of incorporating quantitative information when available (in DEDALE this lack of a numeric-symbolic interface meant writing Ohm's law in an ad hoc form) and the difficulty of controlling the inference process in order to obtain valid results in the real world. These problems were pointed out in [Mavrovouniotis and Stephanopoulos, 1987], but the proposed system O(M) does not really solve them. In particular, the use of heuristic interpretation semantics ensures the validity of the inference in ℝ only for one step (the application of one rule), not for several steps (when chaining rules). This paper focuses on solving these two problems by concentrating on how to transpose the formal system ROM(K) to ℝ with a guarantee of soundness, and on how to use additional numeric and symbolic algebra techniques to refine the results if needed, in order to build a powerful tool for both qualitative/symbolic and quantitative/numeric calculus for engineering purposes.

The present paper is organized as follows. Section 2 shows through an example how ROM(K), like FOG or O(M), may lead to results that are not valid in ℝ. In section 3 a translation in ℝ of the axioms and properties of ROM(K) is given, which ensures soundness of inference in ℝ. In section 4 the example is revisited with this new formulation; this time correct results are obtained. Nevertheless they may be far from the optimal ones and too inaccurate for certain purposes.
In section 5, numeric and symbolic algebra techniques are proposed to refine these results: application of consistency techniques for numeric CSPs; use of computer algebra to push symbolic computation as far as possible and delay numeric evaluation, which considerably improves the results; symbolic calculus of derivatives and of their roots, using computer algebra alone, in order to compute extrema and obtain optimal results; and formal transformation of rational functions by changing variables, which allows the exact results to be obtained in particular cases by a simple numeric evaluation and opens up future avenues of research.

An Example: a Heat Exchanger

Let us recall that the formal system ROM(K) (see [Dague, 1993] for a complete description) involves four binary relations ≈, ~, ≪ and Di, the intuitive meanings of which are "close to", "comparable to", "negligible with respect to" and "distant from" respectively. The 15 axioms are as follows:

(A1) A ≈ A
(A2) A ≈ B ↔ B ≈ A
(A3) A ≈ B, B ≈ C → A ≈ C
(A4) A ~ B → B ~ A
(A5) A ~ B, B ~ C → A ~ C
(A6) A ≈ B → A ~ B
(A7) A ≈ B → C·A ≈ C·B
(A8) A ~ B → C·A ~ C·B
(A9) A ~ 1 → [A] = +
(A10) A ≪ B ↔ B ≈ (B + A)
(A11) A ≪ B, B ~ C → A ≪ C
(A12) A ≈ B, [C] = [A] → (A + C) ≈ (B + C)
(A13) A ~ B, [C] = [A] → (A + C) ~ (B + C)
(A14) A ~ (A + A)
(A15) A Di B ↔ (A − B) ~ A or (B − A) ~ B

45 properties of ROM(K) have been deduced from these axioms, and 7 basic overlapping relations between two positive quantities A ≤ B have been defined: A ≈ B; ¬(A Di B); ¬(A ≈ B) ∧ A ~ B; A Di B ∧ A ~ B; A Di B ∧ ¬(A ≪ B); ¬(A ~ B); A ≪ B. Taking into account signs and identity, these relations give 15 primitive overlapping relations. Adding the 47 compound relations obtained by disjunction of successive primitive relations gives a total of 62 legitimate relations.
Let us try to apply ROM(K) to a simple example of a counter-current heat exchanger, as described in [Mavrovouniotis and Stephanopoulos, 1988]. Let FH and KH be the molar flowrate and the molar heat of the hot stream, FC and KC the molar flowrate and the molar heat of the cold stream. Four temperature differences are defined: DTH is the temperature drop of the hot stream, DTC is the temperature rise of the cold stream, DT1 is the driving force at the left end of the device, and DT2 is the driving force at the right end of the device. The two following equations hold:
(e1) DTH − DT1 − DTC + DT2 = 0,
(e2) DTH × KH × FH = DTC × KC × FC.
The first one is a consequence of the definition of the temperature differences, and the second one is the energy balance of the device. Let us take the following assumptions, expressed as order of magnitude relations: (i) DT2 ~ DT1, (ii) DT1 ≪ DTH, (iii) KH ≈ KC. The problem is now to deduce from the 2 equations and these 3 order of magnitude relations the 5 missing order of magnitude relations between quantities having the same dimension (4 for temperature differences and 1 for molar flowrates). Let us take the axioms (Ai) above and the properties (Pi), viewed as production rules of a symbolic deduction system ROM, as stated in [Dague, 1993]. Consider first the relation between DT2 and DTH. Thanks to (P4) A ≪ B, C ~ A → C ≪ B, ROM infers from (i) and (ii) that (1) DT2 ≪ DTH. Consider the relation between DTH and DTC. (P5) A ≪ B → −A ≪ B and (P6) A ≪ C, B ≪ C → (A + B) ≪ C, applied to (ii) and (1), imply −DT1 + DT2 ≪ DTH. From this it can be deduced, using (A10), that DTH ≈ DTH − DT1 + DT2, i.e. using (e1) that (2) DTH ≈ DTC. Consider the relation between DT1 and DTC. From (ii) and (2) it can be deduced, using (A6) and (A11), that (3) DT1 ≪ DTC. Consider the relation between DT2 and DTC. It results from (i) and (3), by using (P4), that (4) DT2 ≪ DTC. Another deduction path can be found to obtain the same result.
In fact, from (A10), A ≈ B → (B − A) ≪ A and A ≈ C → (C − A) ≪ A; using (P5) and (P6), (P) A ≈ B, A ≈ C → (C − B) ≪ A can be derived. As, from (3) and (A10), it results that DTC ≈ DTC + DT1, it can be deduced from this and (2), using (P), that −DTH + DT1 + DTC ≪ DTC, i.e. using (e1) that (4) DT2 ≪ DTC. Consider finally the relation between FH and FC. (A7), applied to (iii) and (2), gives DTH × KH ≈ DTH × KC and DTH × KC ≈ DTC × KC. Applying (A3) then gives DTH × KH ≈ DTC × KC. Applying (A7) again and using (e2) gives (5) FH ≈ FC. The five results (1 to 5) have thus been obtained by ROM (identical to those produced by O(M), because ⋈ is not used here):
(1) DT2 ≪ DTH
(2) DTH ≈ DTC
(3) DT1 ≪ DTC
(4) DT2 ≪ DTC
(5) FH ≈ FC.
We now have to evaluate them in the real world. For this, it is necessary to fix a numeric scale for the order of magnitude relations. Choose for example ≪ represented by a ratio of at most 10%, ≈ by a relative difference of at most 10%, and ~ by a relative difference of at most 80%. The assumptions thus mean that: (i') 0.2 ≤ DT2/DT1 ≤ 5, (ii') DT1/DTH ≤ 0.1, (iii') 0.9 ≤ KH/KC ≤ 1.112. It is not very difficult in this example to compute the correct results by hand. It is found (see also subsections 5.3 and 5.4) that:
(1') DT2/DTH ≤ 0.5
(2') 0.714 ≤ DTH/DTC ≤ 1.087
(3') DT1/DTC ≤ 0.109
(4') DT2/DTC ≤ 0.358
(5') 0.828 ≤ FH/FC ≤ 1.556.
This shows that only the formal result DT1 ≪ DTC is satisfied in practice. For the 4 others, although the inference paths remain short in this example, there is already a non-trivial shift, which makes them unacceptable. This is the case for the two ≈ relations: DTH may in fact differ from DTC by nearly 30%, and FH may differ from FC by 35%. And the same happens for the two ≪ relations: DT2 can reach 35% of DTC and, worse, 50% of DTH. This is not really surprising, because we know that there is no model of ROM(K) in ℝ. Here, it is essentially the rule (P4) that causes the discrepancy between qualitative and numeric results.
Rules such as (P4), or (A11) from which it comes, and also (A3), are obviously being infringed. What this demonstrates is the insufficiency of ROM for general engineering tasks and the need for a sound relative order of magnitude calculus in ℝ. In fact, all the theoretical framework developed in [Dague, 1993] is a source of inspiration for this task. Since the rules of ROM capture pertinent qualitative information and may help guide intuition, they will serve as guidelines for inferences in ℝ ([Dubois and Prade, 1989] addresses the same type of objectives by using fuzzy relations). Let us introduce the natural relations in ℝ, parameterized by a positive real k:
"close to the order k": A ≈_k B ↔ |A − B| ≤ k × max(|A|, |B|), i.e. for k < 1, (I) 1 − k ≤ A/B ≤ 1/(1 − k) or A = B = 0;
"distant at the order k": A ⋈_k B ↔ |A − B| ≥ k × max(|A|, |B|), i.e. for k < 1, (II) A/B ≤ 1 − k or A/B ≥ 1/(1 − k) or A = B = 0;
"negligible at the order k": A ≪_k B ↔ |A| ≤ k × |B|, i.e. (III) −k ≤ A/B ≤ k or A = B = 0.
The first one will be used to model both ≈ and ~, the second one to model ⋈, and the third one to model ≪, by associating a particular order to each relation. When trying to transpose the axioms (Ai) by using these new relations, three cases occur. Axioms of reflexivity (A1), symmetry (A2, A4), invariance by homothety (A7, A8), and invariance by adding a quantity of the same sign (A12, A13, assuming (A9)) are obviously satisfied by ≈_k for any positive k. A second group of axioms imposes constraints between the respective orders attached to each of the 4 relations. The coupling of ~ with signs (A9) is true for any order k attached to ~ that verifies k < 1. The fact that ~ is coarser than ≈ (A6) forces the order for ≈ to be not greater than the order for ~. Axiom (A14) is true for ≈_k iff k ≥ 1/2. The left-to-right implication of (A10) has the exact equivalent: A ≪_k B → B ≈_k (B + A) for k < 1. We can thus take the same order k1 for ≪ and ≈.
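The three parameterized relations (I, II, III) translate directly into numeric predicates. A minimal sketch (the function names are mine, not from the paper):

```python
def close(a, b, k):
    # (I) "close to the order k": |A - B| <= k * max(|A|, |B|)
    return abs(a - b) <= k * max(abs(a), abs(b))

def distant(a, b, k):
    # (II) "distant at the order k": |A - B| >= k * max(|A|, |B|)
    return abs(a - b) >= k * max(abs(a), abs(b))

def negligible(a, b, k):
    # (III) "negligible at the order k": |A| <= k * |B|
    return abs(a) <= k * abs(b)
```

For positive quantities, close(a, b, k) is equivalent to the ratio form 1 − k ≤ a/b ≤ 1/(1 − k), which is the form used when translating relations into intervals below.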
In the same way, the definition of ⋈ in terms of ~ (A15) has its equivalent: A ⋈_k B ↔ (A − B) ≈_{1−k} A or (B − A) ≈_{1−k} B, provided k ≤ 1/2, i.e. 1 − k ≥ 1/2. If we call k2 the order for ⋈, we can thus take 1 − k2 as the order for ~. All the above thus leads to the following correspondences:
A ≈ B ↔ A ≈_{k1} B
A ~ B ↔ A ≈_{1−k2} B
A ≪ B ↔ A ≪_{k1} B
A ⋈ B ↔ A ⋈_{k2} B
with 0 < k1 ≤ k2 ≤ 1/2 ≤ 1 − k2 < 1. Note that, as the formal system ROM(K) depends on two relations, its analog in ℝ has two degrees of freedom, represented by the orders k1 and k2. The remaining axioms are those which are not true in ℝ. For these, the loss of precision on the orders is computed exactly in the conclusion. The right-to-left implication of (A10) gives: B ≈_k (B + A) → A ≪_{k/(1−k)} B (k/(1 − k) < 1 when k < 1/2). The transitivity axioms (A3) and (A5) each give: A ≈_k B, B ≈_{k'} C → A ≈_{k+k'−kk'} C (k + k' − kk' < 1 when k < 1 and k' < 1). Finally, the coupling between ≪ and ~ (A11) gives: A ≪_k B, B ≈_{k'} C → A ≪_{k/(1−k')} C (k/(1 − k') < 1 when k < 1 − k'). Like the axioms (Ai), all the properties (Pi) deduced in [Dague, 1993] are demonstrated in the same way by computing the best orders in the conclusion, when they are not directly satisfied, and constitute all the inference rules of ROM(ℝ). For reasons of lack of space, here are the most significant or useful of the 45 properties for our purpose. (P4), as (A11), gives: A ≪_k B, C ≈_{k'} A → C ≪_{k/(1−k')} B. (P6) gives A ≪_k C, B ≪_{k'} C → (A + B) ≪_{k+k'} C (which can be improved if [A] = −[B] by taking max(k, k') as the order in the conclusion), and (P) gives A ≈_k B, A ≈_{k'} C → (C − B) ≪_{k''} A with k'' = (k + k' − kk')/(1 − max(k, k')) (which can be improved if [C − B] = [A] by taking k'' = (k + k' − kk')/(1 − k')). The transitivity of ≪ (P11) obviously improves the order: A ≪_k B, B ≪_{k'} C → A ≪_{kk'} C. The incompatibility of ≪ and ~ (P14), A ≈_{1−k2} B, A ≪_{k1} B → A = B = 0, and of ≈ and ⋈ (P34), A ≈_{k1} B, A ⋈_{k2} B → A = B = 0, are ensured provided k1 < k2.
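The conclusion orders of the transposed rules are simple closed forms in the premise orders; the ones quoted above can be collected as follows (a sketch, with function names of my own):

```python
def close_trans(k, kp):
    # (A3)/(A5): A close_k B, B close_k' C  =>  A close_k'' C, k'' = k + k' - k*k'
    return k + kp - k * kp

def a10_right_to_left(k):
    # (A10) right to left: B close_k (B + A)  =>  A negligible at order k/(1-k)
    return k / (1 - k)

def a11(k, kp):
    # (A11): A negligible_k B, B close_k' C  =>  A negligible at order k/(1-k')
    return k / (1 - kp)

def p6(k, kp, opposite_signs=False):
    # (P6): A negligible_k C, B negligible_k' C  =>  (A+B) negligible_(k+k') C,
    # improved to max(k, k') when [A] = -[B]
    return max(k, kp) if opposite_signs else k + kp

def p11(k, kp):
    # (P11): A negligible_k B, B negligible_k' C  =>  A negligible_(k*k') C
    return k * kp
```

Note how only (P11) shrinks the order; the other rules all lose precision, which is what accumulates along long inference paths.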
These two properties will be used to check the consistency of the set of relations describing, for example, an actual behavior of a physical system, or, on the contrary, to detect inconsistencies coming, for example, from discrepancies between modeled and actual behaviors of a system, for tasks such as diagnosis. Properties relating ⋈ to the other relations (such as (P38)) are transposed in the same way. Finally, the completeness of the description is obtained: A ≈_{1−k2} B or A ⋈_{k2} B, provided that k2 ≤ 1/2. Moreover, it can be proved that adding a stronger completeness assumption (a trichotomy between ~ and ≪ in both directions) would be equivalent to adding A ≪_{k1} B or A ⋈_{k2} B, and also equivalent to k2 ≤ k1. This would imply k1 = k2, as in FOG or in the strict interpretation of O(M) [Mavrovouniotis and Stephanopoulos, 1987], i.e. only one degree of freedom instead of two. In the same way that formal models of ROM(K) could not be reduced to FOG or O(M), the same is obtained for ROM(ℝ) by choosing k1 < k2, i.e. 0 < k1 < k2 ≤ 1/2. Note that, in relation to the nonstandard models of ROM(K), one degree of freedom corresponds to what is chosen as the analog in ℝ of the infinitesimals, and the other to the choice of the analog of the parameter ε [Dague, 1993] of the model, where ε corresponds to k1/k2. The above gives the exact counterpart in ℝ of the 15 primitive relations of [Dague, 1993] (in fact, inference rules analogous to the previous ones can be defined for these relations, often with better orders in the conclusion by taking into account the signs of the quantities, e.g. (P6) and (P) above), for describing the order of magnitude of A/B w.r.t. 1. These relations correspond to real intervals which overlap (in contrast with the strict interpretation of O(M)) and are built from the landmarks k1, k2, 1 − k2, 1 − k1, 1, 1/(1 − k1), 1/(1 − k2), 1/k2, 1/k1 of ℝ (see Fig. 1).
Note that, as in the formal model, the intervals (k1, k2), (1 − k2, 1 − k1) and their inverses are not acceptable relations. The heuristic interpretation of O(M) [Mavrovouniotis and Stephanopoulos, 1987], which only allows the correct inference to be made for one rule, corresponds to the particular case where k1 = k2/(1 + k2). But here k1 and k2 are chosen independently, according to the expertise domain.

[Figure 1: the landmarks 0, k1, k2, 1 − k2, 1 − k1, 1, 1/(1 − k1), 1/(1 − k2), 1/k2, 1/k1 on ℝ.]

The orders appearing in each rule of ROM(ℝ) have to be considered as variables, with the order in the conclusion symbolically expressed in terms of the orders in the premises. Inference by such a rule is made simply by matching the relation patterns in the premises with existing relations, i.e. by instantiating the orders to real numbers, and computing the orders in the conclusion to deduce a new relation. A sound calculus in ℝ is thus ensured whatever the path of rules used, i.e. any conclusion that can be deduced by application of rules of ROM(ℝ) from given correct premises is correct when interpreted in ℝ. This may be entirely hidden from the user, with both data and results being translated via k1 and k2 into symbolic order of magnitude relations as in FOG, but this time correctly. But the user may also incorporate numeric information as required, by using k orders directly or even by introducing numeric values for quantities. In this last case, binary relations between two given numeric quantities are automatically inferred from the equivalent formulation of these relations in terms of intervals (I, II, III). Control of the inference in order to avoid combinatory explosion remains a problem for large applications. A pragmatic solution is to call on the expert, who will possibly prevent a rule from being applied if he considers, in the light of its underlying qualitative meaning, that the conclusions are not relevant.
Let us take the example of the heat exchanger again and follow the same reasoning paths, but this time using the inference rules of ROM(ℝ). Take, as in the numeric application, k1 = 0.1 and k2 = 0.2. As (1) is inferred by using (P4), we obtain: (1a) DT2 ≪ DTH with k3 = k1/k2 = 0.5. DT2 and −DT1 having opposite signs, application of (P6) gives −DT1 + DT2 ≪ DTH with k4 = max(k1, k3) = 0.5. Using the left-to-right implication of (A10) and (e1), we get: (2a) DTH ≈ DTC with k4 = max(k1, k3) = 0.5. From (ii) and (2a), it can be deduced, using (A11), that: (3a) DT1 ≪ DTC with k5 = k1/(1 − k4) = 0.2. The first deduction path leading to (4) has as an equivalent, using (P4), DT2 ≪ DTC with k6 = k5/k2 = 1, which cannot be used because the order of ≪ is assumed to be < 1. We may, however, use the other deduction path. As, from (3a) and (A10), we have DTC ≈ DTC + DT1, it can be deduced from this and (2a), using (P) and (e1), that: (4a) DT2 ≪ DTC with k7 = (k4 + k5 − k4k5)/(1 − k5) = 0.75. Note that two different paths leading to the same formal result in ROM may lead here to different results in ROM(ℝ). This poses the as yet unsolved problem of defining heuristics in order to obtain the best result, by pruning the paths that lead to less precise ones (the length of the deduction path not necessarily being a good criterion). From DTH × KH ≈ DTH × KC and DTH × KC ≈ DTC × KC, (A3) gives DTH × KH ≈ DTC × KC, from which, using (A7) and (e2), we obtain: (5a) FH ≈ FC with k8 = k1 + k4 − k1k4 = 0.55. Results (1a to 5a) have thus been obtained by applying rules of ROM(ℝ). In terms of intervals, these relations are expressed as:
DT2/DTH ≤ 0.5
0.5 ≤ DTH/DTC ≤ 2
DT1/DTC ≤ 0.2
DT2/DTC ≤ 0.75
0.45 ≤ FH/FC ≤ 2.223.
They can automatically be translated in terms of formal order of magnitude compound relations w.r.t. the scales k1 and k2 (with obviously a loss of precision in general), giving for each of the five pairs a compound relation on the qualitative scale.
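The chain of orders k3 to k8 above can be replayed mechanically from k1 = 0.1 and k2 = 0.2, together with the interval translations of the two ≈ results (ratio in [1 − k, 1/(1 − k)]). A sketch:

```python
k1, k2 = 0.1, 0.2

k3 = k1 / k2                             # (1a) DT2 << DTH, via (P4)
k4 = max(k1, k3)                         # -DT1 + DT2 << DTH, via improved (P6)
k5 = k1 / (1 - k4)                       # (3a) DT1 << DTC, via (A11)
k6 = k5 / k2                             # blocked path: the order reaches 1
k7 = (k4 + k5 - k4 * k5) / (1 - k5)      # (4a) DT2 << DTC, via (P)
k8 = k1 + k4 - k1 * k4                   # (5a) FH close to FC, via (A3)/(A7)

# interval translations of the two 'close' results
dth_over_dtc = (1 - k4, 1 / (1 - k4))    # DTH/DTC in [0.5, 2]
fh_over_fc = (1 - k8, 1 / (1 - k8))      # FH/FC in [0.45, ~2.222]
```

The upper bound 1/(1 − k8) ≈ 2.222 corresponds to the 2.223 quoted in the text after outward rounding.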
In contrast with the results of FOG or O(M) (1 to 5), this time the results are correct. We thus have a sound calculus in ℝ, with all the advantages of the qualitative meaning transmitted by the rules of ROM. In particular, each of the above results has its explanation in commonsense reasoning terms, given by the rules applied to obtain it. Using symbolic computation, the formal k orders of the results can even be expressed in terms of k1 and k2. This can be used for other purposes such as design, so as to tune the values of k1 and k2 and thus make sure that desired order of magnitude relations are satisfied. Nevertheless, it can be noticed, by comparing them with the exact results (1' to 5'), that these relations are not in general optimal. In fact, in qualitative terms, the exact results would improve 3 of the 5 compound relations above. The improvement is even more obvious when comparing the ranges given by the numeric orders k3 to k8 with the exact results. Only DT2/DTH is correctly estimated (i.e. (1a) and (1') are the same). In fact, this is not surprising. Although each rule of ROM(ℝ) has been computed with the best estimate for the order in its conclusion, so that each rule taken separately cannot be improved, this does not guarantee optimality through an inference path using several rules that share common variables. In some way, what we have is local optimality, not global optimality. If we estimate that the obtained results, although sound, are not accurate enough for our purpose, we have to supplement ROM(ℝ) with other techniques. Once sound results of ROM(ℝ), with the obvious qualitative meaning of their inference paths, have been obtained, several supplementary techniques can be used in order to refine them if needed. These techniques come from two different approaches: numeric ones, which transpose well-known consistency techniques for CSPs to numeric CSPs, and symbolic ones, which use computer algebra.
These approaches are not exclusive and can be usefully combined.

Applying Consistency Techniques for Numeric CSPs

A first way of refining the results is to start from the definitions (I, II, III) of the fundamental relations of ROM(ℝ) in terms of intervals, a technique that can easily be extended to all 15 primitive relations. Numeric values are also naturally represented by intervals, to take into account the precision of observation. Interval computation thus offers itself. Moreover, we are not limited to intervals representing the 15 primitive relations or the compound ones, i.e. to the scale of ROM(ℝ); we can in fact express any order of magnitude binary relation between two quantities by an interval encompassing the quotient of the quantities. In particular, intervals do not need the specific symmetry properties of those of ROM(ℝ), such as in (I, II, III). Since using intervals is thus more accurate when expressing data, it should also be so for the results. But, unfortunately, interval propagation is rarely powerful enough: in the heat exchanger example nothing is obtained by this method. The idea is to generalize interval propagation in the same way that, in CSPs, k-consistency with k > 2 extends arc consistency. This has been done in [Lhomme, 1993], who shows that the consistency techniques that have been developed for CSPs can be adapted to numeric CSPs involving, in particular, continuous domains. The way is to handle domains only by their bounds and to define an analog of k-consistency restricted to the bounds of the domains, called k-B-consistency. In particular, 2-B-consistency, or arc B-consistency, which formalizes interval propagation, is extended by the notion of k-B-consistency. The related algorithms, with their complexity, are given for k = 2 and 3. They have been implemented in Interlog [Dassault Electronique, 1991], on top of the Prolog language. In this section, these techniques are evaluated w.r.t. the heat exchanger example.
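As a concrete illustration of arc B-consistency (interval narrowing), here is a minimal sketch on an equation of the same shape as (e1). The absolute domains are illustrative values of my own (the actual experiments constrain ratios, not absolute values):

```python
def intersect(x, y):
    # intersection of two closed intervals
    return (max(x[0], y[0]), min(x[1], y[1]))

def narrow_e1(dth, dt1, dtc, dt2):
    # e1: DTH - DT1 - DTC + DT2 = 0; narrow each domain using
    # the equation solved for that variable (interval arithmetic)
    dtc = intersect(dtc, (dth[0] - dt1[1] + dt2[0], dth[1] - dt1[0] + dt2[1]))
    dt2 = intersect(dt2, (dtc[0] + dt1[0] - dth[1], dtc[1] + dt1[1] - dth[0]))
    dth = intersect(dth, (dt1[0] + dtc[0] - dt2[1], dt1[1] + dtc[1] - dt2[0]))
    dt1 = intersect(dt1, (dth[0] - dtc[1] + dt2[0], dth[1] - dtc[0] + dt2[1]))
    return dth, dt1, dtc, dt2

# iterate the narrowing operator to a fixed point
doms = ((10, 20), (0, 2), (0, 100), (0, 10))   # DTH, DT1, DTC, DT2
while True:
    new = narrow_e1(*doms)
    if new == doms:
        break
    doms = new
```

Here only the DTC domain is narrowed (to [8, 30]); on the heat exchanger problem proper, this level of consistency deduces nothing, which is why stronger k-B-consistency is needed.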
Starting from equations (e1) and (e2) and assumptions (i', ii', iii'), bounds for the 5 remaining quotients are looked for. In this case, as already seen, arc B-consistency gives no result. But 3-B-consistency gives the following results for the first 4 quotients (nothing is obtained for FH/FC), with parameters characterizing the authorized relative imprecision at the bounds w1 = 0.02 and w2 = 0.0001 (in about 75 s on an IBM 3090):
(1'') DT2/DTH ≤ 0.508
(2'') 0.665 ≤ DTH/DTC ≤ 1.120
(3'') DT1/DTC ≤ 0.112
(4'') DT2/DTC ≤ 0.559.
It can be noticed that the estimates (2'', 3'', 4'') are better than the corresponding results of ROM(ℝ) (2a, 3a, 4a) and, for the first two, not far from the optimal ones (2', 3'). 4-B-consistency has also been tried, although the execution time increases considerably. For example, 0.710 ≤ DTH/DTC ≤ 1.090 and DT2/DTC ≤ 0.362, which well approximate (2') and (4'), are obtained in a few minutes with w1 = 0.01 and w2 = 0.05. Although interval propagation alone is in general insufficient, k-B-consistency techniques with k ≥ 3 may thus provide very good results, but some difficulties remain (here, nothing can be done with equation (e2), unless considering at least 5-consistency, with efficiency problems).

Using Computer Algebra First

The above results reach the limits of purely numeric approaches. If we want to progress towards optimal results, we have to use computer algebra, in order to push symbolic computation as far as possible and delay numeric evaluation. In a great number of real examples, the total number of equations expressing the behavior of the system and of order of magnitude assumptions equals the number of order of magnitude relations asked for, and the desired dimensionless quotients can be solved in terms of the known quotients, using these equations. These solutions are very often expressed as rational functions, and this symbolic computation can be achieved by computer algebra.
For example, from equations (e1) and (e2), the known relations DT2 = Q1 × DT1, DT1 = Q2 × DTH, KH = Q3 × KC, and the searched relations DT2 = X × DTH, DTH = Y × DTC, DT1 = W × DTC, DT2 = Z × DTC, FH = U × FC, MAPLE V [Char, 1988] immediately deduces the formulas (F):
X = Q1 × Q2
Y = 1/(1 − Q2 + Q1 × Q2)
W = Q2/(1 − Q2 + Q1 × Q2)
Z = Q1 × Q2/(1 − Q2 + Q1 × Q2)
U = (1 − Q2 + Q1 × Q2)/Q3
with 1/5 ≤ Q1 ≤ 5, 0 ≤ Q2 ≤ 1/10, 9/10 ≤ Q3 ≤ 10/9. Numeric CSP techniques can now be applied directly to these symbolic equations. This time, results are obtained just with arc B-consistency, even for U:
(1s) X ≤ 0.5
(2s) 0.666 ≤ Y ≤ 1.112
(3s) W ≤ 0.112
(4s) Z ≤ 0.556
(5s) 0.810 ≤ U ≤ 1.667.
It can thus be seen that, when starting from solved symbolic expressions, the most simple numeric technique, i.e. the analog of interval propagation, gives results which are close to the exact ones (1' to 5') and, in all cases, much better than those given by ROM(ℝ) (1a to 5a). Obviously, using 3-B-consistency improves the results still further, in particular for Z, as follows (with w1 = 0.001, in 10 s):
(1s') X ≤ 0.5
(2s') 0.713 ≤ Y ≤ 1.088
(3s') W ≤ 0.110
(4s') Z ≤ 0.358
(5s') 0.827 ≤ U ≤ 1.556,
which are practically optimal.

Using Symbolic Algebra Alone for Computing Optimal Results

Symbolically expressing the searched quotients in terms of the known ones (Qi) leads to expressions which are continuously differentiable in the Qi and most often algebraic (rational functions such as in (F)). The problem to be solved can thus generally be expressed as that of finding the absolute extrema of these expressions on n-dimensional closed convex parallelepipeds defined by the ranges of the known intervals mi ≤ Qi ≤ Mi for 1 ≤ i ≤ n. It is well known that interior extrema occur at points where the partial derivatives are null. This is thus a way to compute them exactly from the roots of the derivatives by using computer algebra.
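Before detailing the derivative-based method, note that the propagation-level results (1s to 5s) can be reproduced from the formulas (F) with a few lines of naive interval arithmetic (a sketch; the helper names are mine). The reuse of Q2 inside D = 1 − Q2 + Q1 × Q2 is exactly what keeps these bounds from being tight:

```python
def mul(x, y):
    # product of two intervals
    ps = [a * b for a in x for b in y]
    return (min(ps), max(ps))

def inv(x):
    # reciprocal of a strictly positive interval
    assert x[0] > 0
    return (1 / x[1], 1 / x[0])

q1, q2, q3 = (0.2, 5.0), (0.0, 0.1), (0.9, 10 / 9)

x = mul(q1, q2)                              # X = Q1*Q2
d = (1 - q2[1] + x[0], 1 - q2[0] + x[1])     # D, with Q2 and Q1*Q2 treated independently
y = inv(d)                                   # Y = 1/D
w = mul(q2, inv(d))                          # W = Q2/D
z = mul(x, inv(d))                           # Z = Q1*Q2/D
u = mul(d, inv(q3))                          # U = D/Q3
```

Up to outward rounding, the computed bounds match (1s) to (5s).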
More precisely, a necessary (but not sufficient, because it can correspond in particular to a local extremum) condition for an absolute extremum in a neighborhood is the nullity of all the partial derivatives at the given point. A difficulty arises because extrema may be obtained on a face of dimension < n rather than in the interior of the parallelepiped. Thus the derivatives on all faces have to be considered. But, thanks to computer algebra, it is sufficient to symbolically compute the partial derivatives once and for all and then, in order to obtain the derivatives on any face, to fix the Qi which determine the face to their numeric values. The roots of all the derivatives (in our case, the roots of a system of polynomials) are computed, first in the interior and then on the different faces in decreasing order of dimension, and the corresponding numeric values of the expressions at these points are evaluated, down to the vertices. These values are finally compared and only the highest and lowest are kept, which correspond to the absolute extrema. Let us now apply this method, implemented in MAPLE V, to the heat exchanger example. The expressions X, Y, W and Z depend on the 2 variables Q1 and Q2 and are thus considered w.r.t. the rectangle 1/5 ≤ Q1 ≤ 5, 0 ≤ Q2 ≤ 1/10; U, which depends on the 3 variables Q1, Q2 and Q3, is considered w.r.t. the parallelepiped based on the previous rectangle with 9/10 ≤ Q3 ≤ 10/9. The results are computed immediately and summarized below. For X, Y, W and Z, it is found that only their derivatives w.r.t. Q1 are null, on the edge Q2 = 0. The corresponding constant values X = 0, Y = 1, W = 0 and Z = 0 are shown, after inspection of the vertices, to be the minima for X, W and Z, but not an absolute extremum for Y.
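For this example, the vertex inspection that follows can be reproduced exactly with rational arithmetic: the edge critical values are non-extremal except as the minima of X, W and Z, which are also attained at vertices, so enumerating the vertices of the box already yields the exact ranges (S). A sketch of this shortcut, which is specific to this example (the general method needs the roots of the derivatives):

```python
from fractions import Fraction as F
from itertools import product

Q1 = (F(1, 5), F(5))
Q2 = (F(0), F(1, 10))
Q3 = (F(9, 10), F(10, 9))

def formulas(q1, q2, q3):
    # the solved formulas (F)
    d = 1 - q2 + q1 * q2
    return {"X": q1 * q2, "Y": 1 / d, "W": q2 / d, "Z": q1 * q2 / d, "U": d / q3}

# evaluate (F) at every vertex of the parallelepiped and keep min/max
vertex_vals = [formulas(*v) for v in product(Q1, Q2, Q3)]
ranges = {n: (min(v[n] for v in vertex_vals), max(v[n] for v in vertex_vals))
          for n in "XYWZU"}
```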
Looking now at the vertices, it is found that the maximum of X is obtained at the vertex Q1 = 5, Q2 = 1/10 and is equal to 1/2; the minimum of Y is reached at Q1 = 5, Q2 = 1/10 and is equal to 5/7, and its maximum is reached at Q1 = 1/5, Q2 = 1/10 and is equal to 25/23; the maximum of W is obtained at Q1 = 1/5, Q2 = 1/10 and is equal to 5/46; and the maximum of Z is obtained at Q1 = 5, Q2 = 1/10 and is equal to 5/14. The derivative of U w.r.t. Q1 is null both on the edge Q2 = 0, Q3 = 9/10, corresponding to the constant value U = 10/9, and on the edge Q2 = 0, Q3 = 10/9, corresponding to the constant value U = 9/10. But it is finally found that the minimum occurs at the vertex Q1 = 1/5, Q2 = 1/10, Q3 = 10/9 and is equal to 207/250, and that the maximum occurs at the vertex Q1 = 5, Q2 = 1/10, Q3 = 9/10 and is equal to 14/9. Finally, computer algebra, which works with rational numbers, gives the exact solutions (S) to our problem:
0 ≤ X ≤ 1/2
5/7 ≤ Y ≤ 25/23
0 ≤ W ≤ 5/46
0 ≤ Z ≤ 5/14
207/250 ≤ U ≤ 14/9.
Floating point approximation with 3 significant digits gives (1' to 5'). The method of roots of derivatives, processed by computer algebra, is thus a very powerful technique to automatically obtain the exact ranges. But, in addition to the complete loss of the qualitative aspect of the inference and the necessity, as in the above subsection, for the system of equations to be algebraically solvable, there are two other drawbacks to this approach. The first one is that the roots of a polynomial system cannot in general be obtained exactly. This is solved in practice in a large number of cases by using the most recent modules of computer algebra, which are able to deal with algebraic numbers (represented as a pair of a floating point interval and a polynomial, whose coefficients are algebraic numbers, such that the considered number is the only root of the polynomial belonging to the interval).
The second one is the exponential complexity of the method: in an n-dimensional space we have 3^n systems of polynomials to examine, from the interior to the vertices. The method becomes intractable very rapidly unless the number of variables (assumed order of magnitude relations) remains very small.

Syntactically Transforming Rational Functions: a Line of Research

There are cases where, after having judiciously syntactically transformed the rational functions which are solutions of the set of equations, the simple interval propagation technique gives the exact optima, as illustrated in the example. Let us consider the symbolic formulas (F). The exact result (1s) can be obtained simply by interval propagation for X, because the variables Q1 and Q2 have only one occurrence in X. This is not the case for the other 4 formulas, which is why, in this case, interval propagation does not give exact results (2s to 5s). However, a simple trick may be found by hand to satisfy this condition. In fact, the expression 1 − Q2 + Q1 × Q2 in Y, W and U may be rewritten as 1 + Q2 × (Q1 − 1), which boils down to changing a variable: Q1 − 1 instead of Q1. A simple interval propagation gives 23/25 ≤ 1 + Q2 × (Q1 − 1) ≤ 7/5, from which the exact solutions (S) for Y, W and U are immediately obtained. This is not the case for Z, because Q1 appears also in the numerator. But Z can be rewritten as Z = 1/(1 + (1/Q1)(1/Q2 − 1)), where each new variable, 1/Q1 and 1/Q2 − 1, appears only once. The exact result (S) 0 ≤ Z ≤ 5/14 follows immediately. This interval propagation may be achieved exactly by manipulating rational numbers, or with a given approximation by manipulating floating point numbers, as is done by Interlog with 10 exact significant digits. It can be concluded that, when expressions can be rewritten by changing variables such that each new variable occurs only once, simple interval propagation gives exact solutions. This transformation is obviously not always possible.
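The change-of-variable trick can be checked mechanically: once each new variable occurs only once, naive interval propagation is exact. A sketch in rational arithmetic for D = 1 + Q2 × (Q1 − 1) and Y = 1/D (helper code of my own):

```python
from fractions import Fraction as F

Q1 = (F(1, 5), F(5))
Q2 = (F(0), F(1, 10))

# change of variable: R = Q1 - 1, so D = 1 + Q2*R uses Q2 and R once each
R = (Q1[0] - 1, Q1[1] - 1)                  # R in [-4/5, 4]
prods = [a * b for a in Q2 for b in R]
D = (1 + min(prods), 1 + max(prods))        # [23/25, 7/5], exact
Y = (1 / D[1], 1 / D[0])                    # [5/7, 25/23], the exact range of Y
```

The single-occurrence condition is what makes the propagated bounds attained, hence optimal.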
A line of research would be to characterize the cases where such a transformation of rational functions (or at least a partial one which minimizes the number of occurrences of each variable) is possible, and to find algorithms to do this.

Conclusion

It has been shown in this paper that the formal system ROM(K) [Dague, 1993] can be transposed into ℝ in order to incorporate quantitative information easily, and to ensure the validity of inferences in ℝ. The rules of ROM(ℝ) thus guarantee a sound calculus in ℝ (which was not the case with FOG, O(M) or ROM(K)), while keeping their qualitative meaning, thus guiding the search and providing commonsense explanations for the results. If the loss of precision through inference paths is such that some of these results are judged too imprecise for a specific purpose, several complementary techniques can be used to refine them. k-consistency algorithms for numeric CSPs, which for k > 2 generalize interval propagation, generally improve the results but may require a large k, in which case they are very time consuming. A better approach is first to use computer algebra to express the dimensionless quotients, for which an approximation is searched, in terms of the quotients for which given bounds are assumed, and then to apply k-consistency techniques to the symbolic expressions obtained. It has also been shown that computer algebra alone may be used to obtain exact results, by computing the roots of partial derivatives in order to obtain the extrema of the expressions on n-dimensional parallelepipeds, although this method, which is exponential in n, is tractable only for a small number of variables (i.e. known quotients). Finally, future work would consist in formally modifying rational functions in order to have a minimal number of occurrences of each variable, thus making interval computation more precise; in particular, when it is possible to have only one occurrence of each variable, simple interval computation gives the exact results.
All this assortment of tools, with ROM(ℝ) as the basis, is now available to perform powerful and flexible qualitative and numeric reasoning for engineering tasks, and will soon be tested on real applications in chemical processes.

Acknowledgements

This research was done partly at the Paris IBM Scientific Center, to whom I extend my thanks, and partly at the University Paris 6, where I had very fruitful discussions with Daniel Lazard on computer algebra. I would also like to thank Olivier Lhomme for the experiments with Interlog and Emmanuel Goldberger for applying this work to chemical processes. Finally, my thanks go to Rosalind Greenstein for the English version of this paper and to Gilles Dague for the figure.

References

B.W. Char, MAPLE Reference Manual, Watcom, Waterloo, Ontario, 1988.
P. Dague, "Symbolic reasoning with relative orders of magnitude," Proceedings of the Thirteenth IJCAI, Chambéry, August 1993.
P. Dague, P. Devès, and O. Raiman, "Troubleshooting: when Modeling is the Trouble," Proceedings of AAAI Conference, Seattle, July 1987.
Dassault Electronique, INTERLOG 1.0, User's Guide, 1991.
D. Dubois and H. Prade, "Order of magnitude reasoning with fuzzy relations," Proceedings of the IFAC/IMACS/IFORS International Symposium on Advanced Information Processing in Automatic Control, Nancy, 1989.
O. Lhomme, "Consistency techniques for numeric CSPs," Proceedings of the Thirteenth IJCAI, Chambéry, August 1993.
M.L. Mavrovouniotis and G. Stephanopoulos, "Reasoning with orders of magnitude and approximate relations," Proceedings of AAAI Conference, Seattle, July 1987.
M.L. Mavrovouniotis and G. Stephanopoulos, "Formal order of magnitude reasoning in process engineering," Computers & Chemical Engineering 12, 1988.
O. Raiman, "Order of magnitude reasoning," Proceedings of AAAI Conference, Philadelphia, August 1986.
O. Raiman, "Order of magnitude reasoning," Artificial Intelligence 51, 1991.
In addition, it requires specification of the signs of influences and synergies among variables.

*This research was supported in part by the Rockwell International Science Center.

Max Henrion
Rockwell International Science Center
Palo Alto Laboratory
444 High Street, Suite 400
Palo Alto, CA 94301
henrion@camis.stanford.edu

A proposition A has a positive influence on a proposition B if observing A to be true makes B more probable. Variable A is positively synergistic with variable B with respect to a third variable C if the joint effect of A and B on the probability of C is greater than the sum of their individual effects. QPNs generalize straightforwardly to multivalued and continuous variables. An expert may express his or her uncertain knowledge of a domain directly in the form of a QPN. This requires significantly less effort than a full numerical specification of a belief network. Alternatively, if we already possess a numerical belief network, it is straightforward to identify the qualitative relations inherent in it.

In previous work [Henrion and Druzdzel, 1991] we introduced an approach called qualitative belief propagation, analogous to message-passing algorithms for quantitative belief networks (e.g., [Kim and Pearl, 1983]), which traces the effect of an observation e on successive variables through a belief network to the target t. Every node on the path from e to t is given a label that characterizes the sign of impact. This was further developed, with particular emphasis on intercausal reasoning, in [Wellman and Henrion, 1991]. This approach differs from the graph-reduction approach [Wellman, 1990b] in that it preserves the original structure of the network. The graph-reduction scheme performs inference by successively reducing the network to obtain the qualitative relation directly between e and t. There are usually several node reduction and arc reversal sequences possible at any step of the algorithm.
As some of these sequences may lead to ambiguous signs, the algorithm needs to determine which sequences are optimal with respect to maximum specificity of the result. The computational complexity of this task is unknown [Wellman, 1990c]. The reason why different sequences of operators lead to different specificity of the results is that, although the operation of arc reversal preserves the numerical properties of the network [Shachter, 1986], it leads to loss of the explicit qualitative graphical information about conditional independencies.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Earlier work on qualitative belief propagation applied only to a restricted class of singly connected belief networks (polytrees). The main contribution of this paper is to extend qualitative belief propagation to arbitrary networks and to present a complete belief propagation algorithm for QPNs.

All random variables that we deal with in this paper are multiple-valued, discrete variables, such as those represented by nodes of a Bayesian belief network. We make this assumption for convenience in mathematical derivations and proofs. Lower-case letters (e.g., x) will stand for random variables; indexed lower-case letters (e.g., xi) will denote their outcomes. In the case of binary random variables, the two outcomes will be denoted by upper and lower case (e.g., the two outcomes of a variable c will be denoted by C and c̄). Outcomes of random variables are ordered from the highest to the lowest value. And so, for a random variable a, ∀i < j [ai ≥ aj]. For binary variables C > c̄, or true > false.

Qualitative Probabilistic Networks
Formally, a QPN is a pair G = (V, Q), where V is a set of variables or nodes in the graph and Q is a set of qualitative relations among the variables [Wellman, 1990b]. There are two types of qualitative relations in Q: qualitative influences and additive synergies.
We reproduce their definitions from [Wellman and Henrion, 1991]. The qualitative influences define the sign of direct influence between two variables and correspond to an arc in a belief network.

Definition 1 (qualitative influence) We say that a positively influences c, written S+(a, c), iff for all values a1 > a2, c0, and x, which is the set of all of c's predecessors other than a,

Pr(c ≥ c0 | a1 x) ≥ Pr(c ≥ c0 | a2 x).

This definition expresses the fact that increasing the value of a makes higher values of c more probable. Negative qualitative influence, S-, and zero qualitative influence, S0, are defined analogously by substituting ≥ by ≤ and = respectively.

Definition 2 (additive synergy) Variables a and b exhibit positive additive synergy with respect to variable c, written Y+({a, b}, c), iff for all a1 > a2, b1 > b2, c0, and x, which is the set of all of c's predecessors other than a and b,

Pr(c ≥ c0 | a1 b1 x) + Pr(c ≥ c0 | a2 b2 x) ≥ Pr(c ≥ c0 | a1 b2 x) + Pr(c ≥ c0 | a2 b1 x).

The additive synergy is used with respect to two causes and a common effect. It captures the property that the joint influence of the two causes is greater than the sum of their individual effects. Negative additive synergy, Y-, and zero additive synergy, Y0, are defined analogously by substituting ≥ by ≤ and = respectively.

If a qualitative property is not '+', '-', or '0', it is by default '?' (S? and Y? respectively). As all the definitions are non-strict, both '+' and '-' are consistent with '0'; for the same reason '?' is consistent with '0', '+', and '-'. Any qualitative property that can be described by a '0' can also be described by '+', '-', or '?'. Obviously, when specifying a network and doing any kind of reasoning, one prefers stronger conclusions to weaker ones, and this is captured by the canonical order of signs: '0' is preferred to '+' and '-', and all three are preferred to '?' [Wellman, 1990b].
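As a hedged illustration (the code and the CPT numbers are mine, not the paper's), Definition 1 can be checked mechanically against a conditional probability table: for binary a and c, S+(a, c) requires Pr(C | a1, x) ≥ Pr(C | a2, x) in every context x.

```python
# Sketch: classify the qualitative influence of a binary a on a binary c
# per Definition 1, given a hypothetical CPT Pr(C | a, x).

def influence_sign(p_c_given):
    """p_c_given[(a, x)] = Pr(C | a, x), with a, x in {0, 1}
    (1 = the higher value). Returns '+', '-', '0', or '?'."""
    diffs = [p_c_given[(1, x)] - p_c_given[(0, x)] for x in (0, 1)]
    if all(d == 0 for d in diffs):
        return '0'
    if all(d >= 0 for d in diffs):
        return '+'
    if all(d <= 0 for d in diffs):
        return '-'
    return '?'   # the sign of influence depends on the context x

# a raises Pr(C) in both contexts, so S+(a, c):
print(influence_sign({(0, 0): 0.1, (1, 0): 0.4, (0, 1): 0.3, (1, 1): 0.7}))
# prints +
```

Note that a single context x in which the inequality flips is enough to force the default '?', mirroring the "for all x" quantifier in the definition.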
Qualitative properties can be elicited directly from a domain expert along with the graphical network using their common-sense interpretation or, alternatively, extracted from the numerical specification of a quantitative belief network using the definitions given above. It is worth noting that most popular probabilistic interactions exhibit unambiguous qualitative properties. It can be easily proven, for example, that bi-valued noisy-OR gates always have positive influences (S+) and negative additive synergies (Y-). Linear (Gaussian) models yield well-defined qualitative influences (i.e., non-'?') and zero additive synergies (Y0).

Figure 1 shows an example of a QPN. This network is a small fragment of a larger belief network proposed for modeling an Orbital Maneuvering System (OMS) propulsion engine of the Space Shuttle [Horvitz et al., 1992].

Figure 1: An example of a qualitative probabilistic network.

The fragment of the OMS engine captured by the network consists of two liquid gas tanks: an oxidizer tank and a helium tank. Helium is used to pressurize the oxidizer, necessary for expelling the oxidizer into the combustion subsystem. A potential temperature problem in the neighborhood of the two tanks (HeOx Temp) can be discovered by a probe (HeOx Temp Probe) built into the valves between the tanks. An increased temperature in the neighborhood of the two tanks can increase the temperature in the oxidizer tank (High Ox Temp), and this in turn can cause a leak in the oxidizer tank (Ox Tank Leak). A leak may lead to a decreased pressure in the tank. A problem with the valve between the two tanks (HeOx Valve Problem) can also be a cause of a decreased pressure in the oxidizer tank. The pressure in the oxidizer tank is measured by a pressure gauge (Ox Pressure Probe).
Of all the variables in this network, only the values of the two probes (HeOx Temp Probe and Ox Pressure Probe) are directly observable. The others must be inferred. Links in a QPN are labeled by the signs of the qualitative influences S^δ; each pair of links coming into a node is described by the sign of the synergy between them. Note that all these relations are uncertain. An increased HeOx Temp will usually lead to an increased reading from the HeOx Temp Probe, but not always: the probe may fail. But the fact that an increased HeOx Temp makes an increased HeOx Temp Probe reading more probable is denoted by a positive influence S+.

Qualitative Intercausal Reasoning
In earlier work [Henrion and Druzdzel, 1991] we proposed a third qualitative property, called product synergy, which was further studied by Wellman and Henrion [1993]. Product synergy captures the sign of conditional dependence between immediate predecessors of a node that has been observed or has evidential support. The most common pattern of reasoning captured by product synergy is known as explaining away. For example, suppose my observed sneezing could be caused by an incipient cold or by a cat allergy. Subsequently observing a cat would explain away the sneezing, and so reduce my fear that I was getting a cold. This is a consequence of the negative product synergy between cold and allergy on sneezing.

A key desired feature of any qualitative property between two variables in a network is that it is invariant to the probability distribution of other neighboring nodes. This invariance allows for drawing conclusions that are valid regardless of the numerical values of the probability distributions of the neighboring variables. Previous work on intercausal reasoning concentrated on situations where all irrelevant ancestors of the common effect were assumed to be instantiated.
To be able to perform intercausal reasoning in arbitrary belief networks, we extended the definition of product synergy to accommodate this case. We reproduce here only the most important results. The complete derivations and proofs are reported in [Druzdzel and Henrion, 1993].

Definition 3 (half positive semi-definiteness) A square n × n matrix M is called half positive semi-definite (half negative semi-definite) if for any non-negative vector x consisting of n elements, x^T M x ≥ 0 (x^T M x ≤ 0).

The following theorem states a sufficient condition for a matrix to be half positive semi-definite. Necessity of this condition in general remains a conjecture, although we have shown that it is necessary for 2 × 2 and 3 × 3 matrices.

Theorem 1 (half positive semi-definiteness) A sufficient condition for half positive semi-definiteness of a matrix is that it is a sum of a positive semi-definite and a non-negative matrix.

Definition 4 (product synergy) Let a, b, and x be predecessors of c in a QPN. Let n_x denote the number of possible values of x. Variables a and b exhibit negative product synergy with respect to a particular value c0 of c, regardless of the distribution of x, written X-({a, b}, c0), if for all a1 > a2 and for all b1 > b2, the square n_x × n_x matrix D with elements

D_ij = Pr(c0 | a1 b1 x_i) Pr(c0 | a2 b2 x_j) - Pr(c0 | a2 b1 x_i) Pr(c0 | a1 b2 x_j)

is half negative semi-definite. If D is half positive semi-definite, a and b exhibit positive product synergy, written X+({a, b}, c0). If D is a zero matrix, a and b exhibit zero product synergy, written X0({a, b}, c0).

Note that product synergy is defined with respect to each outcome of the common effect c. There are, therefore, as many product synergies as there are outcomes of c. For a binary variable c, there are two product synergies, one for C and one for c̄.
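As a hedged numerical illustration (the code and parameters are mine, not the paper's), Definition 4 can be evaluated directly for a leaky bi-valued noisy-OR gate with no additional predecessor x, so that D is a 1 × 1 matrix whose sign can be read off: it comes out negative when c is observed present and exactly zero when c is observed absent.

```python
# Sketch: the 1x1 product-synergy matrix D of Definition 4 for a
# hypothetical leaky noisy-OR gate with binary causes a and b.

def noisy_or(qa, qb, leak, a, b):
    """Pr(C present | a, b) for a leaky noisy-OR with binary causes."""
    return 1.0 - (1.0 - leak) * (1.0 - qa) ** a * (1.0 - qb) ** b

def product_synergy_D(qa, qb, leak, c_present):
    """D = Pr(c0|a1 b1) Pr(c0|a2 b2) - Pr(c0|a2 b1) Pr(c0|a1 b2)."""
    def p(a, b):
        pc = noisy_or(qa, qb, leak, a, b)
        return pc if c_present else 1.0 - pc
    return p(1, 1) * p(0, 0) - p(0, 1) * p(1, 0)

print(product_synergy_D(0.8, 0.6, 0.1, True))    # negative -> X-
print(product_synergy_D(0.8, 0.6, 0.1, False))   # ~0       -> X0
```

This reproduces, for one invented parameter setting, the general noisy-OR result stated in the surrounding text.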
Although the definition of product synergy may seem rather unintuitive, it simply captures formally the sign of conditional dependence between pairs of direct ancestors of a node, given that the node has been observed. This sign can be easily elicited directly from the expert. It is worth noting that the most popular probabilistic interaction, the bi-valued noisy-OR gate, exhibits negative product synergy (X-) for the common effect observed to be present and zero product synergy (X0) for the common effect observed to be absent, for all pairs of its direct ancestors.

Intercausal reasoning is an important component of qualitative belief propagation, allowing for sign propagation in cases where some of the network variables are instantiated. The following theorem describes the sign of intercausal reasoning for direct evidential support for the common effect node (see Figure 2). We prove an analogous theorem for indirect evidential support in [Druzdzel and Henrion, 1993].

Figure 2: Intercausal reasoning between a and b with c observed.

Theorem 2 (intercausal reasoning) Let a, b, and x be direct predecessors of c such that a and b are conditionally independent (see Figure 2). A sufficient and necessary condition for S-(a, b) on observation of c0 is negative product synergy, X-({a, b}, c0).

Qualitative Belief Propagation
In singly connected networks, evidence flows from the observed variables outwards to all remaining nodes of the network, and never in the opposite direction. In the presence of multiple connections, this paradigm becomes problematic, as the evidence coming into a node can arrive from multiple directions. For any link that is part of a clique of nodes, it becomes impossible to determine in which direction the evidence flows.
Numerical belief propagation through multiply connected graphs encounters the problem of a possibly infinite sequence of local belief propagations and an unstable equilibrium that does not necessarily correspond to the new probabilistic state of the network [Pearl, 1988, pages 195-223]. Algorithms adapting the belief propagation paradigm to multiply connected belief networks treat loops in the underlying graph separately and essentially reduce the graph to a singly connected one.

It turns out that the qualitative properties of QPNs allow for an interesting view of qualitative belief propagation. The qualitative influences and synergies are defined in such a way that they are independent of any other nodes interacting with the nodes that they describe. This allows the propagation of belief from a node e to a node n to disregard all such nodes and effectively decompose the flow of evidence from e to n into distinct trails from e to n. On each of these trails, belief flows in only one direction, from e to n, and never in the opposite direction, exactly as it does in singly connected networks.

The belief propagation approach requires that qualitative changes be propagated in both directions. Product synergy is symmetric with respect to the predecessor nodes, so X^δ({a, b}, c0) implies X^δ({b, a}, c0). The following theorem shows that a qualitative influence between any two nodes in a network is also symmetric.

Theorem 3 (symmetry) S^δ(a, b) implies S^δ(b, a).

The theorem follows from the monotone likelihood property [Milgrom, 1981]. It shows merely that the sign of influence is symmetric. The magnitude of the influence of a variable a on a variable b can be arbitrarily different from the magnitude of the influence of b on a.

Of the definitions below, trail, head-to-head node, and active trail are based on [Geiger et al., 1990].
Definition 5 (trail in undirected graph) A trail in an undirected graph is an alternating sequence of nodes and links of the graph such that every link joins the nodes immediately preceding and following it.

Definition 6 (trail) A trail in a directed acyclic graph is an alternating sequence of links and nodes of the graph that form a trail in the underlying undirected graph.

Definition 7 (head-to-head node) A node c is called a head-to-head node with respect to a trail t if there are two consecutive edges a → c and c ← b on t.

Definition 8 (minimal trail) A trail connecting a and b in which no node appears more than once is called a minimal trail between a and b.

Definition 9 (active trail) A trail t connecting nodes a and b is said to be active given a set of nodes L if (1) every head-to-head node with respect to t either is in L or has a descendant in L and (2) every other node on t is outside L.

Definition 10 (evidential trail) A minimal active trail between an evidence node e and a node n is called an evidential trail from e to n.

Definition 11 (intercausal link) Let a and b be direct ancestors of a head-to-head node t. An intercausal link exists between a and b if t is in, or has a descendant in, the set of evidence nodes. The sign of the intercausal link is the sign of the intercausal influence between a and b determined by the product synergy.

Qualitative signs combine by means of the sign multiplication and sign addition operators, defined in Table 1.

Table 1: Sign multiplication (⊗) and sign addition (⊕) operators [Wellman, 1990b]

  ⊗ | +  -  0  ?        ⊕ | +  -  0  ?
  --+-----------        --+-----------
  + | +  -  0  ?        + | +  ?  +  ?
  - | -  +  0  ?        - | ?  -  -  ?
  0 | 0  0  0  0        0 | +  -  0  ?
  ? | ?  ?  0  ?        ? | ?  ?  ?  ?

Definition 12 (sign of a trail) The sign of a trail t is the sign product of the signs of all direct and intercausal links on t.

Theorem 4 (evidential trails) The qualitative influence of a node e on a node n is equal to the sign sum of the signs of all evidential trails from e to n.
Proof: (outline) We demonstrate for each of the three qualitative properties that it is insensitive to the probability distribution of neighboring nodes. The presence of another, parallel trail through which evidence might flow changes only the probability distribution of the neighboring nodes, and this does not impact the qualitative properties of other nodes and paths. This implies that none of the straightforward propagation rules for singly connected networks will be invalidated by the presence of multiple trails. The qualitative change in belief in a node n given a single evidence node e can be viewed as a sum of changes through individual evidential trails from e to n. It will be well determined only if the signs of these paths are consistent (i.e., the sign sum is not '?'). □

The algorithm for qualitative belief propagation (Figure 3) is based on local message passing. The goal is to determine a sign for each node denoting the direction of change in belief for that node given new evidence for an observed node. Initially each node is set to '0', except the observed node, which is set to the specified sign. A message is sent to each neighbor. The sign of each message becomes the sign product of its previous sign and the sign of the link it traverses. Each message keeps a list of the nodes it has visited and its origin, so it can avoid visiting any node more than once. Each message travels on one evidential trail. Each node, on receiving a message, updates its own sign with the sign sum of itself and the sign of the message. Then it passes a copy of the message to all unvisited neighbors that need to update their signs.

Given: A qualitative probabilistic network, an evidence node e.
Output: The sign of the influence of e on each node in the network.

Data structures: { in each node }
  sign ch;   { sign of change }
  sign evs;  { sign of evidential support }

Main program:
  for each node n in the network do
    n↑ch := '0';
  Propagate-Sign(∅, e, e, '+');

Recursive procedure for sign propagation:
  { trail: visited nodes; from: sender of the message;
    to: recipient of the message; sign: sign of influence of from on to }
Propagate-Sign(trail, from, to, sign)
begin
  if to↑ch = sign ⊕ to↑ch then exit;  { exit if already made the update }
  to↑ch := sign ⊕ to↑ch;              { update the sign of to }
  trail := trail ∪ {to};              { add to to the set of visited nodes }
  for each n in the Markov blanket of to do
  begin
    s := sign of the link;            { direct or intercausal }
    sn := n↑ch;                       { current sign of n }
    if the link to n is active and n ∉ trail
       and sn ≠ to↑ch ⊗ s then
      Propagate-Sign(trail, to, n, to↑ch ⊗ s);
  end
end

Figure 3: The algorithm for qualitative sign propagation.

The character of the sign addition operator implies that each node can change its sign at most twice: first from '0' to '+', '-', or '?', and then, if at all, only to '?', which can never change to any other sign. Hence each node receives a request for updating its sign at most twice, and the total number of messages needed for the network to reach stability is less than twice the number of nodes. Each message carries a list of visited nodes, which contains at most the total number of nodes in the graph. Hence, the algorithm is quadratic in the size of the network. Unfortunately, this propagation algorithm does not generalize straightforwardly to quantitative belief networks.

The sign propagation will reach each node in the network that is not d-separated from the evidence node e. It is possible to change the focus of the algorithm to determining the sign of some target node n and all nodes that are located on active trails from e to n.
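The procedure of Figure 3 can be sketched in Python. This is a hedged simplification of mine, not the paper's code: it assumes that no node other than the evidence node is instantiated, so there are no intercausal links and the only inactive trails are those passing through an unobserved head-to-head node. The network encoding and node names are invented.

```python
# Sketch of qualitative sign propagation; sign operators follow Table 1.

def sign_mult(s1, s2):
    if '0' in (s1, s2): return '0'
    if '?' in (s1, s2): return '?'
    return '+' if s1 == s2 else '-'

def sign_add(s1, s2):
    if s1 == '0': return s2
    if s2 == '0': return s1
    if '?' in (s1, s2): return '?'
    return s1 if s1 == s2 else '?'

def propagate(links, evidence, sign='+'):
    """links[(u, v)] = sign of the directed link u -> v.
    Returns the sign of change of each node. By Theorem 3 the sign of
    influence is symmetric, so each link carries the same sign in both
    directions."""
    nodes = {n for uv in links for n in uv}
    ch = {n: '0' for n in nodes}

    def neighbors(n):
        for (u, v), s in links.items():
            if u == n:
                yield v, s, 'out'    # link leaves n
            elif v == n:
                yield u, s, 'in'     # link points into n

    def visit(trail, to, sign, arrived):
        if ch[to] == sign_add(sign, ch[to]):
            return                   # update already incorporated
        ch[to] = sign_add(sign, ch[to])
        for n, s, d in neighbors(to):
            # `to` is head-to-head on this trail iff the message both
            # arrived and would leave along links pointing into `to`;
            # an unobserved head-to-head node blocks the trail.
            if arrived == 'in' and d == 'in':
                continue
            if n not in trail:
                visit(trail | {to}, n, sign_mult(ch[to], s),
                      'in' if d == 'out' else 'out')

    visit(frozenset(), evidence, sign, None)
    return ch

# The OMS fragment of Figure 1, with the link signs described in the text:
oms = {('HeOxTemp', 'TempProbe'): '+',
       ('HeOxTemp', 'HighOxTemp'): '+',
       ('HighOxTemp', 'OxTankLeak'): '+',
       ('OxTankLeak', 'PressureProbe'): '-',
       ('ValveProblem', 'PressureProbe'): '-'}
print(propagate(oms, 'TempProbe'))
```

Observing a high HeOx Temp Probe reading yields '+' for HeOx Temp, High Ox Temp, and Ox Tank Leak, '-' for Ox Pressure Probe, and leaves HeOx Valve Problem at '0', matching the worked example discussed in the text.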
Focusing the algorithm on a target in this way requires a small amount of preprocessing, consisting of the removal of irrelevant barren and ancestor nodes. The methods to do that are summarized in [Druzdzel, 1993].

Figure 4: The algorithm for qualitative belief propagation: an example.

Figure 4 shows an example of how the algorithm works in practice. Suppose that we want to know the effect of observing a high reading of the HeOx Temp Probe on the other variables in the model. We set the signs of all nodes to '0' and start by sending a positive sign to HeOx Temp Probe, which is our evidence node. HeOx Temp Probe determines that its parent, node HeOx Temp, needs updating, as the sign product of '+' and the sign of the link '+' is '+' and is different from the current value of the node, '0'. After receiving this message, HeOx Temp sends messages to its direct descendants High Ox Temp and Ox Tank Leak, which also need updating. As the sign of Ox Tank Leak is already '+', High Ox Temp does not send any further messages. Seeing that Ox Pressure Probe needs updating, Ox Tank Leak sends it a message. The sign of this message is '-', because the sign of the qualitative influence between Ox Tank Leak and Ox Pressure Probe is '-'. Ox Pressure Probe will not send any further messages, and the algorithm terminates, leaving HeOx Valve Problem unaffected. The final sign in each node expresses how the probability of that node is impacted by observing a high reading of HeOx Temp Probe.

Conclusions and Applications
This paper has described an extension of belief propagation in qualitative probabilistic networks to multiply connected networks. Qualitative belief propagation can be performed in polynomial time.
The type of reasoning addressed by QPNs and by the algorithm that we propose, namely determining the sign of evidential impact, is one of the few queries that can be answered in polynomial time in general networks, even when all nodes are of some restricted type, for example noisy-OR or continuous linear (Gaussian). Belief propagation is more powerful than the graph-reduction approach for two reasons: (1) it uses product synergy, which is a new qualitative property of probabilistic interactions, and (2) it offers a reasoning scheme whose operators do not lead to loss of qualitative information and whose final results do not depend on the order of their application. Although examples of problems that can be resolved by belief propagation and not by graph reduction can easily be found, it is unfair to compare the strength of the two methods, as belief propagation uses an additional qualitative property, namely product synergy.

Wellman [1990a] describes several possible applications of QPNs, such as support for heuristic planning and identification of dominant decisions in a decision problem. The belief propagation approach proposed in this paper supports these applications and has the additional advantage over the graph-reduction approach that it preserves the underlying graph and determines the sign of the node of interest along with the signs of all intermediate nodes. This directly supports two new applications of QPNs. First, it allows a computer program, in case of sign ambiguity, to reflect about the model at a meta level and find the reason for the ambiguity, for example which paths are in conflict. Hence, it can suggest ways in which the least additional specificity could resolve the ambiguity. This may turn out to be a desirable property, as many multiply connected networks that we used for testing the algorithm led, in some queries, especially those involving intercausal reasoning, to ambiguous results.
One reason for that is that the most common value of product synergy appears to be negative, which in loops often leads to conflicts with the usually positive signs of links. A second application involves using the resulting signs for the generation of intuitive qualitative explanations of how the observed evidence is relevant to the node of interest. The individual signs, along with the signs of influences, can be translated into natural language sentences describing paths of change from the evidence to the variable in question. A method for generating verbal explanations of reasoning based on belief propagation is outlined in [Druzdzel, 1993].

Acknowledgments
We are most grateful to Michael Wellman and two anonymous reviewers for insightful comments.

References
Druzdzel, M.J., and Henrion, M. 1993. Intercausal reasoning with uninstantiated ancestor nodes. Forthcoming.
Druzdzel, M.J. 1993. Probabilistic Reasoning in Decision Support Systems: From Computation to Common Sense. Ph.D. diss., Department of Engineering and Public Policy, Carnegie Mellon University.
Geiger, D.; Verma, T.S.; and Pearl, J. 1990. d-Separation: From theorems to algorithms. In Henrion, M.; Shachter, R.D.; Kanal, L.N.; and Lemmer, J.F., eds. 1990, Uncertainty in Artificial Intelligence 5. 139-148. North Holland: Elsevier.
Henrion, M., and Druzdzel, M.J. 1991. Qualitative propagation and scenario-based approaches to explanation of probabilistic reasoning. In Bonissone, P.P.; Henrion, M.; Kanal, L.N.; and Lemmer, J.F., eds. 1991, Uncertainty in Artificial Intelligence 6. 17-32. North Holland: Elsevier.
Horvitz, E.J.; Ruokangas, C.; Srinivas, S.; and Barry, M. 1992. A decision-theoretic approach to the display of information for time-critical decisions: The Vista project. In Proceedings of SOAR-92, NASA/Johnson Space Center, Houston, Texas.
Kim, J.H., and Pearl, J. 1983. A computational model for causal and diagnostic reasoning in inference systems.
In Proceedings of the 8th International Joint Conference on Artificial Intelligence, IJCAI-83, Los Angeles, Calif. 190-193.
Milgrom, Paul R. 1981. Good news and bad news: Representation theorems and applications. Bell Journal of Economics 12(2):380-391.
Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, Calif.: Morgan Kaufmann.
Shachter, R.D. 1986. Evaluating influence diagrams. Operations Research 34(6):871-882.
Wellman, M.P., and Henrion, M. 1991. Qualitative intercausal relations, or explaining "explaining away". In KR-91, Principles of Knowledge Representation and Reasoning: Proceedings of the Second International Conference, Cambridge, MA. 535-546.
Wellman, M.P., and Henrion, M. 1993. Explaining "explaining away". IEEE Trans. PAMI 15(3):287-291.
Wellman, M.P. 1990a. Formulation of Tradeoffs in Planning Under Uncertainty. London: Pitman.
Wellman, M.P. 1990b. Fundamental concepts of qualitative probabilistic networks. Artificial Intelligence 44(3):257-303.
Wellman, M.P. 1990c. Graphical inference in qualitative probabilistic networks. Networks 20(5):687-701.
Toyoaki Nishida
Graduate School of Information Science
Advanced Institute of Science and Technology, Nara
8916-5, Takayama-cho, Ikoma-shi, Nara 630-01, Japan
nishida@is.aist-nara.ac.jp

Abstract
Understanding flow in the three-dimensional phase space is challenging both to human experts and to current computer science technology. To break through the barrier, we are building a program called PSX3 that can autonomously explore the flow in a three-dimensional phase space by integrating AI and numerical techniques. In this paper, I point out that a quasi-symbolic representation called flow mappings is effective as a means of capturing qualitative aspects of three-dimensional flow, and present a method of generating flow mappings for a system of ordinary differential equations with three unknown functions. The method is based on a finding that geometric cues for generating a set of flow patterns can be classified into five categories. I demonstrate how knowledge about the interaction of geometric cues is utilized for intelligently controlling numerical computation.

Flow in Three-Dimensional Phase Space
In this paper, we consider the qualitative behavior of systems of ODEs of the form:

dx/dt = f(x),    (1)

where x ∈ R³ and f : R³ → R³. For now, I focus on systems of piecewise linear ODEs in which f is represented as a collection of linear functions and constants.¹ Although they are but a subclass of ODEs, systems of piecewise linear ODEs equally exhibit complex behaviors under certain conditions. For example, consider the system of piecewise linear ODEs:

dx/dt = -6.3x + 6.3y - 9g(x)
dy/dt = 0.7x - 0.7y + z        (2)
dz/dt = -7y

¹Later, I will discuss how the method presented can be extended to cases in which only general restrictions (continuity) are posed on f.

where

g(x) = -0.5x + 0.3   (x < -1)
       -0.8x         (-1 ≤ x ≤ 1)
       -0.5x - 0.3   (1 < x).
The system of ODEs (2) results from simplifying the circuit equations of Matsumoto-Chua's circuit (third order, reciprocal, with only one nonlinear element, a 3-segment piecewise linear resistor; see Figure 1a). In spite of its simplicity in form, (2) exhibits a fairly complex behavior. The phase portrait contains a chaotic attractor² with a "double scroll" structure, that is, two sheet-like thin rings curled up together into spiral forms, as shown in Figure 1e (Matsumoto et al., 1985). Orbits approach the attractor as time goes on and manifest chaotic behaviors as they irregularly transit between the two "rings." Chaotic attractors may exist only in three- or higher-dimensional phase space. This fact makes analysis of high-dimensional flows significantly harder than that of two-dimensional flows. Analysis of the double-scroll attractor was reported in a full journal paper (Matsumoto et al., 1985) in applied mathematics.

Flow Mappings as Representation of Flow
We would like to represent flow using finite-length, quasi-symbolic notations. The key idea I present in this paper is to partition orbits into intervals (orbit intervals) and aggregate them into "coherent" bundles (hereafter, bundles of orbit intervals) so that the flow can be represented as a sum of finitely many bundles of orbit intervals. I define the coherency of orbit intervals with respect to a finite set of sensing planes arbitrarily inserted into the phase space: a bundle of orbit intervals Φ is coherent if all orbit intervals in Φ come from the same generalized source (or g-source) and go to the same generalized sink (or g-sink) without being cut by any sensing plane, where a g-source and a g-sink are either (a) a fixed point or a repeller/attractor

²Roughly, an attractor is a dense collection of orbits that nearby orbits approach as t → ∞. The reader is referred to (Guckenheimer and Holmes, 1983) for complete definition and discussion.
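For readers who want to reproduce the double-scroll behavior numerically, the following sketch integrates system (2) with a fixed-step fourth-order Runge-Kutta scheme. The step size and initial point are arbitrary choices of mine, not the paper's; PSX3 itself uses more sophisticated, knowledge-guided numerical control.

```python
# Fixed-step RK4 integration of the Matsumoto-Chua system (2).

def g(x):
    # 3-segment piecewise linear characteristic from the text
    if x < -1:
        return -0.5 * x + 0.3
    if x <= 1:
        return -0.8 * x
    return -0.5 * x - 0.3

def f(p):
    x, y, z = p
    return (-6.3 * x + 6.3 * y - 9.0 * g(x),
            0.7 * x - 0.7 * y + z,
            -7.0 * y)

def rk4_step(p, h):
    k1 = f(p)
    k2 = f(tuple(pi + 0.5 * h * ki for pi, ki in zip(p, k1)))
    k3 = f(tuple(pi + 0.5 * h * ki for pi, ki in zip(p, k2)))
    k4 = f(tuple(pi + h * ki for pi, ki in zip(p, k3)))
    return tuple(pi + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for pi, a, b, c, d in zip(p, k1, k2, k3, k4))

p = (0.1, 0.0, 0.0)          # hypothetical initial point near the origin
for _ in range(20000):       # integrate to t = 100 with h = 0.005
    p = rk4_step(p, 0.005)
print(p)                     # a point on or near the double-scroll attractor
```

Plotting the x and z coordinates of such a trajectory shows the two curled-up "rings" and the irregular transitions between them described above.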
Figure 1: Matsumoto-Chua's circuit (Matsumoto et al., 1985) and a trace of an orbit near a double scroll attractor. (a) the circuit; (b) characteristic of the nonlinear resistor: i_R = g(v_R) = m0 v_R + (1/2)(m1 - m0)(|v_R + Bp| - |v_R - Bp|); (c) the circuit equations: C1 dvC1/dt = G(vC2 - vC1) - g(vC1), C2 dvC2/dt = G(vC1 - vC2) + iL, L diL/dt = -vC2; (d) constants and transformation: C1 = 1/9, C2 = 1, L = 1/7, G = 0.7, m0 = -0.5, m1 = -0.8, Bp = 1, vC1 = x, vC2 = y, iL = z; (e) trace of an orbit near (0, 0, 0).

Figure 2: A bundle of orbit intervals and its representation by a flow mapping.

with more complex structure, or (b) a singly connected region of a sensing plane. A flow mapping represents a bundle of orbit intervals as a mapping from the g-source to the g-sink. Thus, it is mostly symbolic. However, it is not completely symbolic, as we represent the shape of g-sources and g-sinks approximately. Figure 2 schematically illustrates a bundle of orbit intervals and its representation by a flow mapping, where the g-sinks and g-sources are connected regions of a sensing plane.

We are interested in the minimal partition of flow into coherent bundles of orbit intervals that would lead to minimal description length. Figure 3 shows a minimal partition of the flow of the Matsumoto-Chua equation (2) in a three-dimensional region R : 1 ≤ x ≤ 3, -2 ≤ y ≤ 2, -3 ≤ z ≤ 3, 0.566(x - 1.5) - 0.775y + 0.281(z + 1.05) ≥ 0 into bundles of orbit intervals. The plane 0.566(x - 1.5) - 0.775y + 0.281(z + 1.05) = 0 is a two-dimensional eigenspace, and the line p26p29 is a one-dimensional eigenspace, of a fixed point p26. Orbits in the region 0.566(x - 1.5) - 0.775y + 0.281(z + 1.05) > 0 approach the two-dimensional eigenspace while turning around the eigenspace p26p29. As they approach the two-dimensional eigenspace, the spiral becomes bigger and diverges.
The flow in R can be partitioned into fifteen bundles of orbit intervals. For example, orbit intervals entering R through region v1p5p41p40p10 can be partitioned into five bundles of orbit intervals:

Φ1: v1p2p3p4p5 → v1p1p8p6p5
Φ2: p2v4p43p39p33p30p28p4p3 → p1p31p35p36p34p25p22p9p8
Φ3: p5p4p28p32p31p10 → p11p15p38p30p31p10
Φ4: p29p30p31p32p28p30 → p25p23p24p23p34p33
Φ5: p39p43p41p40 → p44p43p41p42

Figure 3: Anatomy of the flow of Matsumoto-Chua equation (2) in region R: 1 ≤ x ≤ 3, −2 ≤ y ≤ 2, −3 ≤ z ≤ 3, 0.566(x − 1.5) − 0.775y + 0.281(z + 1.05) ≥ 0

Generating Flow Mappings for Three-Dimensional Flow

In order to design an algorithm for generating flow mappings for a given flow, I have studied the relationships between the geometric patterns that flow makes on the surface of sensing planes and the topological structure of the underlying orbit intervals, and found that they can be classified into five categories called geometric cue interaction patterns. My algorithm makes use of geometric cue interaction patterns as local constraints both to focus numerical analysis and to interpret the result.

Geometric Cues

Let us consider characterizing the flow in a convex region called a cell, which is bounded by sensing planes, by a set of flow mappings. In order to do that, we study the geometry that orbits make on the surface of a sensing plane. I classify the surface in terms of the orientation of orbits there. A contiguous section S of the surface is called an entrance section if S is on a single sensing plane and orbits enter the cell at all points of S except some places where the orbits are tangent to the surface. An exit section is defined similarly.
An entrance or exit section (e.g., exit section v1p5p7v4) may be further divided into smaller sections (e.g., sections v1v4, v1p5p6v3v1, and p6p7p8) by one or more section boundaries (e.g., v1p1 and p6p8), which may be either (a) an intersection of sensing planes, (b) an image or an inverse image of a section boundary, or (c) an intersection of a two-dimensional eigenspace and the cell surface. Section boundaries play an important role as primary geometric cues on the surface.

Tangent sections separate entrance and exit sections. Tangent sections are further classified into two categories: a concave section (e.g., p5p7), at which orbits come from inside the cell, touch the surface, and go back into the cell, and a convex section (e.g., v1p5 and p10p31), at which orbits come from outside the cell, touch the surface, and leave the cell.

An intersection of an eigenspace and the surface³ is called a pole or a ground, depending on whether the eigenspace is one-dimensional or two-dimensional, respectively. In Figure 3, point p29 is a pole, and line segments p41p40, p40p18, etc., are grounds.

A thorn is a one-dimensional geometric object which thrusts outward from a section boundary into an entrance/exit section. In Figure 3, there are two thorns: p23p24 and p30p29. Thorns result from a peculiarity of the eigenspace.

Interaction of geometric cues may result in junctions of various types. For example, section boundaries p2p4 and p5p14p28 in Figure 3 meet at p4, making a T-junction, while v1p1p31 and v4p1p8 make an X-junction at p1.

Some geometric cues, such as fixed point p26 or convex section p10p31, are trivial in the sense that they can be easily recognized by local computation without tracking orbits, while others, such as the T-junction at p4, are nontrivial because they cannot be found without predicting their existence and validating it by focused numerical analysis.
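The classification of the cell surface just described can be sketched compactly. The following toy helper (names are ours, not the paper's) labels a point of a sensing plane as part of an entrance, exit, or tangent section by the sign of the vector field's component along the cell's inward normal there.

```python
def classify_surface_point(rhs, point, inward_normal, eps=1e-9):
    """rhs: vector field, point -> velocity tuple.
    inward_normal: normal of the sensing plane pointing into the cell."""
    v = rhs(point)
    dot = sum(vi * ni for vi, ni in zip(v, inward_normal))
    if dot > eps:
        return "entrance"   # flow crosses into the cell here
    if dot < -eps:
        return "exit"       # flow crosses out of the cell here
    return "tangent"        # flow grazes the surface here

# A constant rightward flow enters through a face whose inward normal is +x:
rightward = lambda p: (1.0, 0.0, 0.0)
```

Scanning such labels along the surface yields the entrance, exit, and tangent sections; the sign changes mark the tangent sections separating them.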
Geometric Cue Interaction Patterns

I have classified the interactions of geometric cues into five categories, as shown in Figure 4. Each pattern is characterized by a landmark orbit, such as X1X2 in an X-X interaction or T1X in a T-X interaction, that connects geometric cues.

An X-X ("double X") interaction is an interaction between boundary sections. In Figure 3, an example of an X-X interaction is the one with the landmark orbit p2p1.

A T-X and a T-T ("double T") interaction co-occur with a concave section, which "pushes in" or "pops out" a bundle of orbit intervals. In Figure 3, an example of a T-X interaction is the one with landmark orbit p28p22p38.

A Pole-T interaction results from the peculiarity of orbits in an eigenspace of a saddle node. The closer the start (or end) point of an orbit approaches the ground, the closer the end (or start) point of the orbit approaches the pole. Special care is needed when searching for a Pole-T interaction if the derivative of the flow at the fixed point has complex eigenvalues, for a boundary edge may turn around the pole infinitely many times.

A Thorn-T interaction accompanies a peculiarity, too. A T-junction consisting of a section boundary, a convex section, and a concave section is mapped to/from the top of a thorn. Points on the section boundary of the T-junction are mapped to/from the concave section, points on which are in turn mapped to/from the body of the thorn.

³For simplicity, I assume that no surface of a cell is an eigenspace, which is a special subspace consisting of orbits tending to/from a saddle node.

Analysis Procedure

Roughly, a procedure for generating flow mappings for a given cell consists of four stages: (stage 1) recognition of trivial geometric cues, (stage 2) recognition of nontrivial geometric cues, (stage 3) partitioning of the cell surface into coherent regions, and (stage 4) generation of flow mappings.
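Stage 2 relies on numerically tracking orbits from sampling points on trivial cues and noticing when their images jump from one sensing plane to another. A toy sketch of that jump test (function names are ours):

```python
def find_image_jumps(samples, image_plane_of):
    """samples: ordered sampling points along a geometric cue.
    image_plane_of: maps a sampling point to an identifier of the
    sensing plane that its orbit next hits.
    Returns index pairs (i, i+1) between which the image plane
    changes, i.e. where a nontrivial geometric cue is suspected."""
    planes = [image_plane_of(p) for p in samples]
    return [(i, i + 1)
            for i in range(len(planes) - 1)
            if planes[i] != planes[i + 1]]
```

Each reported pair brackets a candidate junction; the location would then be refined by focused numerical analysis and matched against the library of interaction patterns, as the illustration below describes.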
Instead of describing the procedure in detail,⁴ I will illustrate how it works for the top-left portion of the cell shown in Figure 3. As a result of the initial analysis, the surface is classified with respect to the orientation of flow, and trivial geometric cues are recognized, as shown in Figure 5a. Then, orbits are numerically tracked from sampling points on each trivial geometric cue.

When the images and/or inverse images are obtained, they are examined to see whether they suggest the existence of a nontrivial geometric cue. For the case at hand, as the p_i move downward from vertex v1, their images φ(p_i) jump from the top plane to the rear plane, suggesting the existence of an X-junction (Figure 5b). Similarly, as the q_j move to the right, their inverse images φ^-1(q_j) jump from the left plane to the front plane, suggesting the existence of another X-junction (Figure 5c).

An explanation is sought that may correlate the two X-junctions by consulting a library of geometric cue interaction patterns. As a result, an X-X interaction is chosen as the most plausible interpretation. The approximate location of the landmark orbit is computed by focused numerical computation (Figure 5d).

The algorithm is implemented as PSX3 (Nishida, 1993), except for the procedures for Pole-T and Thorn-T interactions. We have tested the current version of PSX3 against a few systems of piecewise linear ODEs whose flow does not contain Pole-T or Thorn-T interactions.

⁴Interested readers are referred to (Nishida, 1993) for more detail.

Figure 4: Geometric Cue Interaction Patterns

Generalization to Nonlinear ODEs

So far, I have carefully limited our attention to systems of piecewise linear ODEs, for which the flow in each cell is linear.
However, it is not hard to extend the method to nonlinear ODEs, if we are to handle only non-degenerate (i.e., hyperbolic) flows.⁵ What is to be added is twofold: (a) a routine which will divide the phase space into cells that contain at most one fixed point, and (b) a general nonlinear (non-differential) simultaneous equation solver. Neither of these is very different from those that have been implemented for analyzing two-dimensional flow (Nishida and Doshita, 1991).⁶

Another thing we might have to take into account is the fact that certain assumptions, such as the planarity of an eigenspace, no longer hold. Fortunately, the local characteristics of a nonlinear flow are equivalent to those of a linear flow, as linear approximation by the Jacobian preserves the local characteristics of a nonlinear flow as long as the flow is hyperbolic. Thus, the local techniques work. Globally, we have not made any assumption that takes advantage of the linearity of local flow, so the method works there as well. Implementation of this code is, however, left for future work.

⁵Degenerate flows are rare, even though the genericity property (Hirsch and Smale, 1974) (the proposition that the probability of observing a degenerate flow is zero) does not hold for three-dimensional flow.

⁶It should be noted that a nonlinear simultaneous equation solver may not always produce a complete answer. Dealing with the incompleteness of numerical computation is open for future research. Some early results are reported in (Nishida et al., 1991).

This work can be thought of as the development of a basic technology for intelligent scientific computation (Abelson et al., 1989; Kant et al., 1992), whose purpose is to automate scientific and engineering problem solving. In this paper, I have concentrated on deriving quasi-symbolic, qualitative representations of ODEs by intelligently controlling numerical analysis.
Previous work in this direction includes POINCARE (Sacks, 1991), PSX2NL (Nishida and Doshita, 1991), Kalagnanam's system (Kalagnanam, 1991), and MAPS (Zhao, 1991). KAM (Yip, 1991) is one of the frontier works, though it is for discrete systems (difference equations). Unfortunately, these systems are severely limited to two-dimensional flows, except MAPS.

Zhao claims MAPS (Zhao, 1991) can analyze n-dimensional flows too. MAPS uses polyhedral approximation of collections of orbits as an intermediate internal representation. As polyhedral approximation represents the shape of the flow rather than its topology, it is not suitable for reasoning about qualitative aspects in which the topology of the flow is the main issue. The more precise the polyhedral approximation becomes, the more irrelevant information is contained, making it harder to derive topological information. In contrast, flow mappings refer only to the g-sinks and g-sources of bundles of orbit intervals, neglecting the shape of the orbit intervals in between. As a result, (a) topological information is directly accessible, and (b) unnecessary computation and memory are suppressed significantly.

Figure 5: A process of generating flow mappings for (2): (a) classify the surface; (b) tracking the orbits forward at p_i (i = 1, 2, ...); (c) tracking the orbits backward at q_i (i = 1, 2, ...); (d) interpretation based on geometric cue interaction patterns.

Limitations of the Approach

The method reported in this paper has two major limitations. First, it is not straightforward to extend it to general n-dimensional flow, even though the underlying concepts are general, for I have chosen to improve efficiency by taking advantage of three-dimensional flow. Second, the current approach may be too rigid with respect to the topology of flow.
Sometimes, we may have to pay a big cost for complete information, especially when the topology of the flow is inherently complex (e.g., a fractal basin boundary (Moon, 1987)).⁷ Making the boundary fuzzy might be useful.

⁷Note that this does not mean the current approach is unsuitable for analyzing chaos. Remember that the example I have used in this paper is Matsumoto-Chua's double-scroll attractor, which is known to be a chaotic attractor.

Conclusion

Automating qualitative analysis by intelligently controlled numerical analysis involves reasoning about complex geometry and topology under incomplete information. In this paper, I have pointed out that complex geometric patterns of solution curves of systems of ODEs can be decomposed into a combination of simple geometric patterns called geometric cue interaction patterns, and shown how they can be utilized in qualitative analysis of three-dimensional flow.

References

Abelson, Harold; Eisenberg, Michael; Halfant, Matthew; Katzenelson, Jacob; Sacks, Elisha; Sussman, Gerald J.; Wisdom, Jack; and Yip, Kenneth 1989. Intelligence in scientific computing. Communications of the ACM 32:546-562.

Guckenheimer, John and Holmes, Philip 1983. Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer-Verlag.

Hirsch, Morris W. and Smale, Stephen 1974. Differential Equations, Dynamical Systems, and Linear Algebra. Academic Press.

Kalagnanam, Jayant 1991. Integration of symbolic and numeric methods for qualitative reasoning. Technical Report CMU-EPP-1991-01-01, Engineering and Public Policy, CMU.

Kant, Elaine; Keller, Richard; and Steinberg, Stanly, editors 1992. Working Notes, Intelligent Scientific Computation. American Association for Artificial Intelligence. AAAI Fall Symposium Series.

Matsumoto, Takashi; Chua, Leon O.; and Komuro, Motomasa 1985. The double scroll. IEEE Transactions on Circuits and Systems CAS-32(8):798-818.

Moon, Francis C. 1987.
Chaotic Vibrations: An Introduction for Applied Scientists and Engineers. John Wiley & Sons.

Nishida, Toyoaki and Doshita, Shuji 1991. A geometric approach to total envisioning. In Proceedings IJCAI-91. 1150-1155.

Nishida, Toyoaki; Mizutani, Kenji; Kubota, Atsushi; and Doshita, Shuji 1991. Automated phase portrait analysis by integrating qualitative and quantitative analysis. In Proceedings AAAI-91. 811-816.

Nishida, Toyoaki 1993. Automating qualitative analysis of three-dimensional flow. (unpublished research note, in preparation).

Sacks, Elisha P. 1991. Automatic analysis of one-parameter planar ordinary differential equations by intelligent numeric simulation. Artificial Intelligence 48:27-56.

Yip, Kenneth Man-kam 1991. Understanding complex dynamics by visual and symbolic reasoning. Artificial Intelligence 51(1-3):179-222.

Zhao, Feng 1991. Extracting and representing qualitative behaviors of complex systems in phase spaces. In Proceedings IJCAI-91. 1144-1149.
Franz G. Amador and Adam Finkelstein and Daniel S. Weld*
Department of Computer Science and Engineering, FR-35
University of Washington
Seattle, Washington 98195
franz, adam, weld@cs.washington.edu

Abstract

We present Pika, an implemented self-explanatory simulator that is more than 5000 times faster than SimGen Mk2 [Forbus and Falkenhainer, 1992], the previous state of the art. Like SimGen, Pika automatically prepares and runs a numeric simulation of a physical device specified as a particular instantiation of a general domain theory, and it is capable of explaining its reasoning and the simulated behavior. Unlike SimGen, Pika's modeling language allows arbitrary algebraic and differential equations with no prespecified causal direction; Pika infers the appropriate causality and solves the equations as necessary to prepare for numeric integration.

Introduction

Science and engineering have used numeric simulation productively for years. Simulation programs, however, have been laboriously hand-crafted, intricate, and difficult to understand and change. There has been much recent work on automating their construction (e.g., [Yang, 1992; Rosenberg and Karnopp, 1983; Abelson and Sussman, 1987; Palmer and Cremer, 1992]). To this, the Qualitative Physics community has contributed the idea of a self-explanatory simulator [Forbus and Falkenhainer, 1990; Forbus and Falkenhainer, 1992]. When using such a system, a person need only specify the basic entities, quantities, and equations governing the system to be simulated. From these, the program automatically prepares and runs a numeric simulation. It also keeps a record of its reasoning so it can explain the simulated behavior. Such a simulator has three primary advantages [Forbus and Falkenhainer, 1990].

*Many thanks to the members of the E3 project, especially Mike Salisbury and Dorothy Neville. Pandu Nayak kindly provided Common Lisp causal ordering code.
Elisha Sacks and Eric Boesch kindly provided the Runge-Kutta numeric integration code. This research was funded in part by National Science Foundation Grant IRI-8957302, Office of Naval Research Grant 90-J-1904, and the Xerox Corporation.

• Improved automation: Because the user specifies the simulated system declaratively using equations, creating and modifying simulations is much easier.

• Improved self-monitoring: The simulator can analyze the equations to produce checks that detect problems with the simulation, such as numerical instability.

• Better explanations: Because the simulator records the deductions needed to prepare the simulation, it can generate custom English-language or graphical explanations for the simulated behavior. Such explanations assist debugging the simulated equations, and they can form the core of an automated tutor that allows the user to explore and learn about the behavior of a simulated system.

The first and third of these properties are of particular interest to the Electronic Encyclopedia/Exploratorium (E3) project at the University of Washington. We are constructing a program via which the user may interact with simulated versions of various engineered artifacts to learn how they work [Amador et al., 1993]. The user can perturb the environment of the device to see how it reacts and even modify the device itself as it is "operating," all the while receiving English or graphical explanations for its behavior. Thus a central part of the E3 program is effectively a combined CAD system and self-explanatory simulator.

Unfortunately, existing self-explanatory simulators, namely the SimGen Mk1 and SimGen Mk2 programs [Forbus and Falkenhainer, 1990; Forbus and Falkenhainer, 1992], do not meet our needs.

• Too slow: Both compile the equation model of an artifact into a custom program that simulates it.
While SimGen Mk2 is much faster than SimGen Mk1, this compilation process still takes much too long to allow prompt response to user manipulations of the artifact model. For example, SimGen Mk2 requires 4 hours to compile a simulator for a model of 9 containers and 12 pipes [Forbus and Falkenhainer, 1992].

• Restrictive modeling language: As with their predecessor, Qualitative Process Theory [Forbus, 1984], the SimGen programs require equations to be written as uni-directional influences. Thus the flow of causality and information through the model must be chosen by the modeler and remains fixed. We claim it is easier and more natural to describe a model using ordinary non-directional equations. (See "Equations" below.)

In this paper we present Pika¹, an implemented self-explanatory simulator which overcomes these limitations. Its first, unoptimized implementation prepares the above SimGen Mk2 example simulation in under 3 seconds, more than 5000 times faster. Furthermore, the modeling language is based upon ordinary differential and algebraic equations, which Pika automatically manipulates as necessary to simulate the model.

The modeling language

As with the SimGen programs, our modeling language (known as the Quantified Modeling Language (QML)) derives from Qualitative Process Theory [Forbus, 1984]. The user defines a model in two parts. The physics that apply to the device are defined in a domain theory, which is general and can be reused when modeling different devices. This theory is instantiated according to a scenario description, which specifies a particular device for simulation. For example, a domain theory might describe the various types of electrical components, while a corresponding scenario description would specify a particular circuit. QML has two distinguishing features: a simplified modeling language and non-directional equations.
Model Fragments

Qualitative Process Theory's entity, process, and view definitions are replaced by model fragments (MFs), of which there are two kinds. Unquantified MFs (similar to QP Theory's entities) are instantiated explicitly in the scenario description. Quantified MFs (similar to QP Theory's processes and views) are instantiated automatically by the system when their preconditions are met, and their instances are likewise destroyed when those preconditions no longer hold. Here, for example, is a simplistic characterization of boiling.

(define-MF boiling (?fluid)
  (preconditions
    (instance-of Liquid ?fluid)
    (>= (temperature ?fluid)
        (boiling-temperature ?fluid)))
  (effects
    (dyn-infl (mass ?fluid)
              (- (/ (heat-absorption-rate ?fluid)
                    (latent-heat ?fluid))))))

The dyn-infl (dynamic influence) is a non-directional version of a QP-theory "direct influence" written using terminology first advocated by Woods [Woods, 1991]. Pika sums all dynamic influences upon a quantity to form an equation that constrains its derivative. Thus if boiling is the only active MF, the derivative of mass is constrained to be equal to the ratio of the heat absorption rate and the latent heat of vaporization. However, if an MF encoding fluid flow into the container were active, then that influence would be added to the sum constraining the derivative. Algebraic influences (alg-infl) are similar; they specify an implicit summation constraining the influenced quantity itself.

¹A pika (pronounced pee-kuh) is a small mammal that lives in alpine rockpiles.

Equations

Pika treats all QML equations as non-directional constraints. This follows standard scientific practice better than do QP Theory's one-directional influences. Scientists express almost all physical laws as constraint equations. Ohm's Law (V = IR), for example, makes no commitment as to which variables are dependent and which are independent. In different contexts, such an equation can determine the value of any of its variables.
Pika’s domain theory compiler translates each model Wed-Time PBanning & Simulation 563 fragment into a set of functions that speed simula- tion: a model fragment’s preconditions are compiled into functions that query the knowledge base to find bindings for the MF’s arguments, that test whether the quantitative part of the preconditions is satisfied, and that generate numeric integration bounding conditions that halt integration when those quantitative precon- ditions cease to be satisfied (or become satisfied, given that the non-quantitative preconditions are met). The model fragment’s effects are compiled into functions that assert those effects when the MF is activated and that retract them when it is deactivated. This compilation requires little time (less than the LISP compiler requires to compile the resulting code). Furthermore, one need not suffer even this small cost except when the domain theory changes, which hap- pens quite infrequently compared with changes to the model being simulated. Since Pika is still being integrated with the E3 user interface [Salisbury and Borning, 19931, its current im- plementation runs in a “batch” manner. Pika takes as input the compiled domain theory, the scenario de- scription, and a period of time for which to simulate. It simulates for the requested time, recording its rea- soning and the device’s behavior, and then stops to answer the user’s questions. Pika’s simulation algorithm is as follows: Instantiate unquantified MFs from scenario description Repeat until time bound reached Instantiate and deinstantiate quantified MFs based upon world state Causally order the equations Solve the equations for the quantities they determine Create integration bounds from MF preconditions Numerically integrate until a bound is violated Pika’s algorithm differs from SimGen’s in three im- portant ways: MF activation, numeric integration, and equation manipulation. 
Model fragment act ivat ion Both SimGen programs use an assumption-based truth maintenance system (ATMS) [de Kleer, 19861 to per- form substantial analysis during model compilation. SimGen Mkl generates a total envisionment of the model’s qualitative state space, which is computation- ally infeasible for large models. SimGen Mk2 reduces this cost by finding only the “local states” in which each MF is active. This analysis allows them to reduce the run-time checking needed to determine changes in the set of active MFs. If the simulation is in a qualita- tive state from which there is only one possible transi- tion, then the simulator need check only the limit hy- pothesis corresponding to that transition, and it can switch directly to the set. of active MFs determined during compilation to be active in the next qualitative state. This speeds simulation, but it exacts an enor- mous cost during compilation. Since Pika does no such compile-time analysis, it must test all quantified MF preconditions at runtime, to determine the active set. For each MF it queries the knowledge base for all argument bindings that meet the non-quantitative preconditions (e.g. the instance-of test in the boiling example). In the worst case, this is exponential in the number of MF arguments, but their number is under the modeler’s control and is always small.2 Pika then tests the quantitative preconditions of these candidate MF activations, which requires time linear in the size of the quantitative expressions. Numeric integration SimGen uses custom-generated evolver procedures (which use Euler’s method) to do numeric integration; and state transition procedures to detect transitions in qualitative state. Because Pika does not compile the model, it must instead use a general-purpose numeric integrator. 
Its current implementation uses a fourth- order Runge-Kutta integrator with adaptive step-size control [Press et al., 19861, which, at a given accuracy, is much faster than Euler’s method.3 In addition to the simulation equations, initial quan- tity values, and integration limit, the integrator also takes as input a set of integration bounds. Each bound is an expression and an interval; if the expression’s value ever leaves the interval, the integrator halts (at the time step immediately before the bound is vio- lated). Pika supplies bounds representing the quan- titative preconditions of all currently active model fragments (known as deactivation bounds) and other bounds representing the quantitative preconditions of all MFs that are inactive only because of their quanti- tative preconditions (activation bounds). Equation manipulation All the equations in a SimGen model are written either as direct influences or as qualitative proportionalities (indirect influences). SimGen converts these into nu- meric integration equations by 1) summing the direct influences upon each quantity’s derivative, 2) sorting the indirect influences into a graph (causal ordering) such that all quantities are determined, and 3) con- verting the influence subgraph that determines each quantity into an algebraic equation via a table (known as the math library). Since this table contains a differ- ent equation for each possible combination of influences upon each quantity, any given qualitative proportion- ality can “mean” different things in different contexts. This arrangement implies a “two-tiered” process of quantitative model construction: the domain theory is instantiated based upon MF preconditions to produce 2Note that SimGen must also confront this exponential when instantiating MFs into the ATMS. 31t is important to emphasize that Pika’s performance advantage over SimGen is not due to the underlying inte- gration technology-the important speed-up is in simula- tion preparation. 
564 Amador a qualitative model, and then the math library is in- stantiated based upon the qualitative model influences to produce the quantitative model. This structure may make writing some kinds of models easier, but it re- quires the modeler to write every equation twice: once qualitatively for the domain theory, and once quanti- tatively for the math library. It also sacrifices possible modularity. Influences allow the modeler to specify qualitative equations in pieces that are automatically assembled by SimGen; however, the modeler must fully specify all possible quantitative model equations. With Pika, the modeler specifies the domain the- ory using equations which Pika automatically com- bines and symbolically manipulates as needed to form the quantitative model. &ML thus effectively collapses SimGen’s two-tiered structure into one, allowing mod- ular specification of quantitative model equations. Pika must convert the model’s non-directional equa- tions into the following directional form expected by the Runge-Kutta integrator: ~=.fl<xl,... ,XnJ) yl=gl(x1,..., XJ) . dX,- dt - fn(i,.. .,xn,q yin =sm(i ,..., Xd) The X’s are the state variable used by the inte- grator to advance time; the Y’s are all other vari- ables calculated from the state variables. Pika clas- sifies any quantity that has a derivative (generally due to a dyn-infl) as a state variable. It rear- ranges the equations into the above form by first using a causal ordering routine [Iwasaki and Simon, 1986, Serrano and Gossard, 19871 to find an order in which the equations can be evaluated so as to determine val- ues for all quantities; this ordering may include sets of simultaneous equations. Pika then uses Mathematics’s [Wolfram, 1988] Reduce function to solve each equa- tion for the quantity it determines (unless it is already in “solved” form, i.e. (= determined-quantity expres- sion)). Mathematics also solves any simultaneity for the quantities it determines. 
However, since causal or- dering abstracts equations to sets of quantities, it can falsely group non-independent equations as simultane- ities. Here we rely upon the fact that Reduce, if it cannot find a solution, will reformulate the equations, discarding non-independent ones, so that another at- tempt at causal ordering will not make the same mis- take. This process repeats until the causal ordering contains no simultaneities. The modeler may denote some quantities as con- stants. Non-constant quantities retain their previous values if they are not determinable from the state vari- ables. Implementing these semantics requires that the equation-directionalizing process make several passes. First, all constants are marked as exogenous, and causal ordering and equation solving discover which quantities are determinable from them. Next, those state variables that remain undetermined (usually all of them) are marked as exogenous, and the algorithm 9: A: 9: A: 9: A: 9: A: Summarize the simulated behavior. At time 0, heat started flowing from STOVE to CAN-OF-WATER. At time 55.96147, the temperature of CAN-OF- WATER reached 100.0, and it started boiling. At time 55.969383, a gas appeared in CAN-OF- WATER. At time 95.961464, the liquid in'CAN-OF-WATER boiled away, and it stopped boiling. At time 165.31618, the pressure of CAN-OF-WATER exceeded 150.0, and the container exploded. What is the value of (TEMPERATURE CAN-OF-WATER) at time 40? (TEMPERATURE CAN-OF-WATER) is 82.10323 at time 40. How is (TEMPERATURE CAN-OF-WATER) changing? (TEMPERATURE CAN-OF-WATER) is increasing at time 40. What happens next? At time 55.96147, the temperature of CAN-OF- WATER reached 100.0, and it started boiling. Figure 1: Example of explanation generation. Queries are translations from a specialized query language; an- swers are actual program output. runs again. 
Lastly, all remaining undetermined quantities are given their previous values and treated as constants under the current set of equations. A feature of this algorithm is that the modeler can force a state variable to be constant during one operating region while allowing it to vary during another.

Explanations

By keeping a record of its equation manipulations, the history of model fragment activations and deactivations, and the data returned by numeric integration, Pika can answer the class of questions answerable by SimGen Mk2. This includes summarizing so-far-simulated behavior in qualitative terms, reporting the equation that determines the value of any quantity at any simulated time, and reporting the value of a quantity at any simulated time. It objects if the quantity does not exist at the requested time. Because it does no global envisioning, Pika (like SimGen Mk2) cannot answer some questions answerable by SimGen Mk1, such as summarizing currently unsimulated future behavior and describing alternative behaviors. See figure 1 for an example.

Real-Time Planning & Simulation 565

Implementation status

PIKA is fully implemented. It is written in Allegro Common Lisp and uses the LOOM knowledge representation system (version 1.4.1) [Brill, 1991], the Mathematica symbolic math system (version 2.0) [Wolfram, 1988], and a Runge-Kutta numeric integration package that is written in C [Press et al., 1986]. See figure 2 for timing data. The "prep" column is what we are comparing to SimGen's model-compilation time; it gives the time needed to prepare each model for simulation. This table demonstrates PIKA's speed and scalability for models that do not produce large sets of simultaneous equations.

Test                               Prep(4)  Total(5)  RK %(6)  Mma %(7)
SimGen Mk2(8) (9 cont & 12 pipe)   2.7s     3.1s      6        0
36 cont & 60 pipe(9)               34s      38s       6        0
2-rung RC ladder(10)               4.9s     5.1s      1.5      70
5-rung RC ladder                   38s      38s       0.0006   87
Exploding can(11)                  0.5s     1.8s      5        40

Figure 2: Timing data (Sun SPARCstation IPX).
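For readers unfamiliar with the integration step: once the equations are in the directional form given earlier, a fixed-step integrator advances the state variables. A minimal fourth-order Runge-Kutta step looks like the following (a generic textbook sketch in Python, not the cited C package):

```python
def rk4_step(f, x, t, h):
    """One fourth-order Runge-Kutta step for dx/dt = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: exponential decay dT/dt = -T with T(0) = 1,
# integrated out to t = 1 in 100 steps.
T, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    T = rk4_step(lambda t, x: -x, T, t, h)
    t += h
# T is now very close to exp(-1)
```

A production integrator would also handle adaptive step sizes and vector-valued state, but the core update is the same.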
However, the 2-rung and 5-rung "RC ladder" electrical circuit tests require solving sets of 21 and 51 simultaneous equations, respectively, and suffer accordingly.

Footnotes to figure 2:
4. Elapsed time spent before the first numeric integration.
5. Total simulation elapsed time. The container/pipe and RC ladder tests were simulated until "quiescence", i.e. until all quantities had completed 99% of their possible change.
6. Percent of total time spent doing numeric integration.
7. Percent of total time spent solving equations.
8. Our implementation of the example described in [Forbus and Falkenhainer, 1992].
9. The SimGen Mk2 example container grid quadrupled.
10. Each "rung" of an "RC ladder" is a capacitor with some initial voltage in series with a resistor. All rungs are connected in parallel.
11. See figure 1.

Second prototype

One way to reduce the impact of equation manipulation is to avoid redoing it when unnecessary. Pika regenerates the simulation equations "from scratch" every time the set of active MFs changes. We have reimplemented Pika (as Pika2) using SkyBlue [Sannella, 1992], a hierarchical constraint manager. SkyBlue effectively maintains a causal-ordering graph which can be updated incrementally as MFs activate and deactivate. Pika2 must re-solve only those equations whose causal direction changes (or which form new simultaneities). Also, Pika2 caches solutions of individual equations, though not of simultaneities.

SkyBlue offers other advantages over "traditional" causal ordering methods. Each constraint (a set of variables) has a specified strength. SkyBlue builds the causal-ordering graph from the highest-strength, consistent set of constraints, leaving some lower-strength constraints unused if necessary. Pika2 uses this strength hierarchy to implement the semantics of constants, state variables, and value persistence without having to repeatedly run a causal-ordering procedure.
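The way a strength hierarchy yields value persistence can be sketched as follows (a deliberate simplification under our own data layout; real SkyBlue performs genuine constraint retraction and method selection):

```python
def plan_with_stays(constraints, exogenous, previous_values):
    """Choose which constraints to enforce, strongest first.

    `constraints` is a list of (strength, variables) pairs, where a
    lower strength number means a stronger constraint.  A constraint
    is enforced when it can determine exactly one not-yet-determined
    variable; any variable still free afterwards is pinned to its most
    recent simulated value by an implicit weakest "stay" constraint.
    """
    determined = set(exogenous)
    enforced = []
    for strength, variables in sorted(constraints, key=lambda c: c[0]):
        free = set(variables) - determined
        if len(free) == 1:              # constraint determines one variable
            var = free.pop()
            enforced.append((strength, var))
            determined.add(var)
    stays = {v: val for v, val in previous_values.items()
             if v not in determined}    # value persistence
    return enforced, stays

# u is exogenous; x and y are determined by real constraints,
# while z falls back to its previous simulated value.
enforced, stays = plan_with_stays(
    constraints=[(1, {"u", "x"}), (2, {"x", "y"})],
    exogenous={"u"},
    previous_values={"x": 0.0, "y": 0.0, "z": 5.0})
```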
For example, every quantity has an associated low-strength equation that sets it equal to its most recent simulated value. SkyBlue includes the constraint representing this equation in the causal ordering only if the quantity is not otherwise determined.

SkyBlue also allows one-way constraints, which Pika2 uses to represent the fact that the derivative of a state variable is numerically integrated to determine the state variable's next value, but not vice versa. This allows a more accurate definition of a state variable than Pika uses: a state variable is a quantity having a derivative that can be causally determined if the quantity is assumed to be a state variable (and hence exogenous to each time step). This definition better reflects the cyclic nature of numeric integration, and SkyBlue will use a one-way constraint between derivative and possible state variable only when the definition is satisfied.

Pika2's initial simulation-preparation times are about the same as Pika's, but unfortunately it is not yet stable enough for timings demonstrating the value of incremental constraint management.

Related work

An important inspiration for much work in self-explanatory simulation was the STEAMER project [Holland et al., 1984], which produced an impressive interactive simulator/tutor for a naval propulsion system, though all the simulations and explanations were hand-crafted in advance. Many people have worked on easing the construction of fast, accurate numeric simulations. The iSMILE and MISIM systems [Yang and Yang, 1989, Yang, 1992], for example, provide tools for constructing a variety of electrical and optical circuit simulations. The modeler must define new components using a subset of FORTRAN, however, and the systems do no equation manipulation.
The ENPORT program [Rosenberg and Karnopp, 1983] generates numeric simulations from the more declarative bond-graph system representation, and the Dynamicist's Workbench project [Abelson and Sussman, 1987] generates simulations from equation models. None of these systems, however, allow changes in the equations during simulation.

Besides SimGen, the system closest in spirit to our own is SimLab [Palmer and Cremer, 1992], which allows a model-fragment-like specification of equation-based models that it symbolically manipulates to produce numeric simulations. SimLab does not, however, allow changes in the equations during simulation, nor does it generate explanations. We note that the "How Things Work" project at Stanford [Fikes et al., 1992] is addressing issues similar to those tackled by Pika; however, the Stanford work is too preliminary to discuss extensively.

Future work

Pika is fast, but it isn't quite fast enough to drive a truly interactive simulation for the E3 project. We estimate that we need another factor of ten for practical use and are working on several ways to speed it up. Pika currently uses LOOM [Brill, 1991] for its knowledge base, but it uses almost none of LOOM's inferencing power. Switching to LOOM's CLOS subset, or abandoning LOOM altogether, should significantly speed Pika.

Using Mathematica to solve equations dramatically slows Pika; solving a set of a dozen linear simultaneous equations can take several seconds. Pika prepares the SimGen Mk2 example in under 3 seconds partly because that model requires no equation solving. Using a dedicated linear-equation solver instead (when possible) should help.

Conclusions

We have presented Pika, a self-explanatory simulator 5000 times faster than SimGen Mk2, the previous state of the art. Pika also provides a more natural and more expressive modeling language based upon non-directional algebraic and differential equation constraints.
Pika and the SimGen programs represent points along a continuum: SimGen Mk1 does an enormous amount of model analysis prior to simulation, SimGen Mk2 does less, and Pika does almost none. Pika must therefore pay a greater cost when changing the model during simulation. The highly interactive nature of simulation in the E3 project demands such an architecture. However, the performance results suggest that Pika's strategy may work well for other applications.

References

H. Abelson and G.J. Sussman. The Dynamicist's Workbench: I. Automatic Preparation of Numerical Experiments. AI Memo 955, MIT AI Lab, May 1987.
Franz G. Amador, Deborah Berman, Alan Borning, Tony DeRose, Adam Finkelstein, Dorothy Neville, David Notkin, David Salesin, Mike Salisbury, Joe Sherman, Ying Sun, Daniel S. Weld, and Georges Winkenbach. Electronic "How Things Work" Articles: Two Early Prototypes. IEEE Transactions on Knowledge and Data Engineering, To Appear 1993.
D. Brill. LOOM Reference Manual. USC-ISI, 4353 Park Terrace Drive, Westlake Village, CA 91361, version 1.4 edition, August 1991.
J. de Kleer. An Assumption-based Truth Maintenance System. Artificial Intelligence, 28, 1986.
R. Fikes, T. Gruber, and Y. Iwasaki. The Stanford How Things Work Project. In Working Notes of the AAAI Fall Symposium on Design from Physical Principles, pages 88-91, October 1992.
K. Forbus and B. Falkenhainer. Self-Explanatory Simulations: An integration of qualitative and quantitative knowledge. In Proceedings of AAAI-90, pages 380-387, 1990.
K. Forbus and B. Falkenhainer. Self-Explanatory Simulations: Scaling Up to Large Models. In Proceedings of AAAI-92, page To Appear, 1992.
K. Forbus. Qualitative Process Theory. Artificial Intelligence, 24, December 1984. Reprinted in [Weld and de Kleer, 1989].
J. Holland, E. Hutchins, and L. Weitzman. STEAMER: An interactive inspectable simulation-based training system. AI Magazine, Summer 1984.
Y. Iwasaki and H. Simon. Causality In Device Behavior.
Artificial Intelligence, 29(1):3-32, July 1986. Reprinted in [Weld and de Kleer, 1989].
R.S. Palmer and J.F. Cremer. SimLab: Automatically Creating Physical Systems Simulators. In ASME DSC-Vol. 41, November 1992.
W. Press, B. Flannery, S. Teukolsky, and W. Vetterling, editors. Numerical Recipes. Cambridge University Press, Cambridge, England, 1986.
R.C. Rosenberg and D.C. Karnopp. Introduction to Physical System Dynamics. McGraw Hill, New York, 1983.
M. Salisbury and A. Borning. A User Interface for the Electronic Encyclopedia Exploratorium. In Proceedings of the 1993 International Workshop on Intelligent User Interfaces, Orlando, FL, January 1993.
M. Sannella. The SkyBlue Constraint Solver. Technical Report 92-07-02, Department of Computer Science and Engineering, University of Washington, December 1992.
D. Serrano and D.C. Gossard. Constraint Management in Conceptual Design. In Knowledge Based Expert Systems in Engineering: Planning and Design, pages 211-224. Computational Mechanics Publications, 1987.
D. Weld and J. de Kleer, editors. Readings in Qualitative Reasoning about Physical Systems. Morgan Kaufmann, San Mateo, CA, August 1989.
S. Wolfram. Mathematica: A System for Doing Mathematics by Computer. Addison-Wesley, Redwood City, CA, 1988.
E. Woods. The Hybrid Phenomena Theory. In Proceedings of the 5th international workshop on qualitative reasoning, pages 71-76, May 1991.
A.T. Yang and S.M. Yang. iSMILE: A Novel Circuit Simulation Program with Emphasis on New Device Model Development. In Proceedings of ACM/IEEE 1989 Design Automation Conference, pages 630-633, 1989.
A.T. Yang. MISIM User's Manual. Department of Electrical Engineering, University of Washington, 1992.
A Comparison of Action-Based Hierarchies and Decision Trees for Real-Time Performance*

David Ash
KSL / Stanford University
701 Welch Rd. Bldg. C
Palo Alto, CA 94304-1709
ash@sumex-aim.stanford.edu

Abstract

Decision trees have provided a classical mechanism for progressively narrowing down a search from a large group of possibilities to a single alternative. The structuring of a decision tree is based on a heuristic that maximizes the value of the information gained at each level in the hierarchy. Decision trees are effective when an agent needs to reach the goal of complete diagnosis as quickly as possible and cannot accept a partial solution. We present an alternative to the decision tree heuristic which is useful when partial solutions do have value and when limited resources may require an agent to accept a partial solution. Our heuristic maximizes the improvement in the value of the partial solution gained at each level in the hierarchy; we term the resulting structure an action-based hierarchy. We present the results of a set of experiments designed to compare these two heuristics for hierarchy structuring. Finally, we describe some preliminary work we have done in applying these ideas to a medical domain--surgical intensive care unit (SICU) patient monitoring.

0 Introduction

Traditionally, decision trees have provided a mechanism for progressively narrowing down a search of a wide range of possibilities to a single alternative efficiently. However, in this context efficient has generally meant that the agent needs to reach its goal after consuming the minimal amount of resources. Suppose, however, that its goal is to

* This research was supported by DARPA through NASA grant NAG2-581 (ARPA order 8607) and the DSSA project (DARPA contract DAAA21-92-C-0028).
We acknowledge the many useful discussions we have had with our Guardian colleagues Adam Seiver, David Gaba, Juli Barr, Serdar Uckun, Rich Washington, Paul-Andre Tourtier, Vlad Dabija, Jane Chien and Alex Macalalad. Thanks to Ed Feigenbaum for sponsoring the work at the Knowledge Systems Laboratory.

568 Ash

diagnose one of a number of faults, and that the agent does not have the resources to reach its goal before it needs to act to remedy the situation. If the domain is such that partial information about the correct alternative is much better than nothing, then we will define the effectiveness of our solution not merely by how efficiently the agent reaches its goal but also by the relative worth of the partial answers it finds along the way. In this paper, we present an alternative to the traditional decision tree, the action-based hierarchy, which takes into account the value of partial information and performs better when this is factored in.

Our ideas build on several themes in the literature. The basic structure of the action-based hierarchy we will describe is quite similar to that of decision trees as used by Quinlan [QU83]. Other researchers ([FA92], [CH91]) have addressed the question of how to structure a decision tree, as we do, but without specifically concerning themselves with the value of the partial answers reached along the way. We follow the reactive planning approach taken in [AG87], [NI89], [RO89], and [SC87]. The algorithm which is utilized with an action-based hierarchy at run-time may be viewed as a type of anytime algorithm [DE88]. Our ideas are also connected to the work on bounded rationality ([DE88], [HO87], [RU91]); we guarantee response without consuming more than a certain maximal amount of expendable resources.

1 Action-Based Hierarchies

Action-based hierarchies are used to discriminate among a set of alternatives or faults.
Each leaf node in the hierarchy corresponds to a single fault; higher nodes in the hierarchy correspond to classes of faults. The class of faults associated with a node is always the union of the classes of faults associated with its children, and the top node has associated with it the set of all faults.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

In order to help the agent achieve its goal, there are tests available that it can perform. Each outcome of a test will identify the correct fault as being in one of a number of not necessarily disjoint subsets of the set of all faults. Each test has an associated cost which is a heuristic estimate of the difficulty of performing the test. For instance, in a real-time domain, the cost of a test might be the time consumed by the test. There are also actions which can be performed; the agent tries to identify the correct fault because it needs to find an action to remedy that fault. Therefore, for every (action, fault) pair there is a value representing a heuristic estimate of the value of performing a particular action if a particular fault is present. This value ranges from -1 (undesirable action) through 0 (neutral) to +1 (ideal solution).

The main idea behind an action-based hierarchy is that associated with each node in the hierarchy there is a corresponding action which has the highest expected value given that the agent knows the fault falls within the set associated with this node, but has not yet discriminated among the node's children. This action can then be performed if the agent is required to act (e.g. it reaches a deadline) but has not yet completed its diagnosis. The action might be suboptimal (depending upon which fault actually turns out to be present), but is likely to be much better than doing nothing. Figure 1 shows a sample action-based hierarchy.
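The node-labeling rule just described -- attach to each node the action with the highest expected value over the node's fault set -- can be sketched as follows (illustrative Python; the faults, priors, and values here are made up for the example, not taken from the paper's Figure 1):

```python
def best_action(fault_set, priors, values):
    """Return the action maximizing expected value over fault_set.

    priors: fault -> prior probability P(f)
    values: (action, fault) -> value in [-1, +1]
    """
    actions = {a for (a, f) in values}
    mass = sum(priors[f] for f in fault_set)
    def expected(a):
        return sum(values.get((a, f), 0.0) * priors[f]
                   for f in fault_set) / mass
    return max(actions, key=expected)

priors = {"F1": 0.4, "F2": 0.3, "F3": 0.3}
values = {("A1", "F1"): 1.0, ("A1", "F2"): -0.5, ("A1", "F3"): -0.5,
          ("A2", "F1"): 0.4, ("A2", "F2"): 0.6, ("A2", "F3"): 0.5}
# At the root (all faults possible) the hedging action A2 wins;
# once diagnosis narrows the fault to F1, the ideal action A1 wins.
root_action = best_action({"F1", "F2", "F3"}, priors, values)
leaf_action = best_action({"F1"}, priors, values)
```

This is exactly what makes the hierarchy useful at a deadline: every interior node already carries the best fallback action for its level of partial diagnosis.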
There is a set of four faults F1, F2, F3, and F4, and the top of the diagram shows a hierarchy with a test used to distinguish among the children of each node. The faults associated with each node, as well as the action with the best expected value, are shown in the boxes at each level in the hierarchy. The box at lower right shows how performing each test enables us to distinguish among faults; the box at lower left gives values for each (action, fault) pair.

Figure 1: Sample Action-Based Hierarchy (the diagram shows actions A1-A5, tests, and faults F1-F4).

2 Hierarchy Structuring

Given a set of faults, and the associated tests, costs of tests, actions, and values of actions, there are many hierarchies that could be constructed, although not all such hierarchies would be equally useful. Indeed, a hierarchy could be constructed by starting with the root node, and recursively selecting tests which help discriminate among the faults associated with the current node, and adding children to the current node with one child corresponding to each possible outcome of the test. More formally, the following algorithm could be run to structure a hierarchy for use by the agent in real time:

1. Start at the top of the yet-to-be-built hierarchy, associating the set of all faults with this top node.
2. Pick a leaf node in the hierarchy in DFS order, or stop if there are no more leaf nodes to expand.
3. Associate with this leaf node the action which has the highest expected value for the set of faults associated with this node. If the leaf node cannot be expanded further, go back to step 2 and pick another leaf node.
4. Find all tests relevant to the set of faults associated with the current node.
5. If no tests were found in step 4, go back to step 2 and pick another leaf node.
6. Determine which test found in step 4 is best according to some heuristic.
7. Expand the current node with one child corresponding to each possible outcome of the test found in step 6, and the associated fault sets suitably adjusted.
8. Go back to step 2 and pick one of the children of the current node.

Note that this algorithm will never get stuck in a local maximum at any node as long as there are relevant tests for expanding a node. Once we have constructed a hierarchy in this manner, it is very simple to use it--the agent simply performs the following algorithm in real time:

1. Label the root node of the hierarchy as the current best hypothesis.
2. Perform the test associated with the current best hypothesis.
3. If an action is required before this test can be completed, perform the action associated with the current best hypothesis.
4. When the test results come back, refine the current best hypothesis to the one of its children pointed to by the result of the test.

2.1 Different Heuristics for Hierarchy Structuring

The variable we are most interested in in the current paper is the particular heuristic used in step 6 of the hierarchy structuring algorithm. The traditional decision tree approach is to take the prior probability P(f) of each fault f, the set of faults F associated with the current node (a subset of all the faults), and the set of outcomes T of the test (each outcome is a subset of F), and then to compute the following function:

    sum over t in T of P(t) I(t)

The effective result of this approach is that the agent maximizes the expected information content of the result of the test. This guarantees that the agent will reach its goal of diagnosing a single fault in the minimum possible time, but does nothing to guarantee that it will find actions of high value along the way.

We take a different approach. In addition to the functions defined above, we use a cost function C(t), and a value function V(f,a) giving the value of action a for fault f. The function we wish to maximize is:

    [V(T) - V(F)] / C(T)

where V(F0) = max_a V(a, F0) and F0 is any set of faults.
Also,

    V(T) = [sum over t in T of V(t) P(t)] / [sum over t in T of P(t)]

and

    V(a, F0) = [sum over f in F0 of V(a,f) P(f)] / P(F0)

(Note that an assumption is being made here that if one fault appears in more than one of the outcomes of a test, then if the fault is present, each of those outcomes is equally likely. This assumption will be made throughout the paper.) The intuition above is that the agent performs the test which will yield the highest expected increase in the value of the best action available to date.

2.2 Evaluating Hierarchy Performance

In order to effectively compare the performance gained through the use of different hierarchies, we need to have a method of evaluating that performance. We assume that the cost of a test (the resources consumed by the test) is equal to the real time consumed by the test, but our ideas apply just as well to other types of cost which can be measured by a real number. Because of the assumed real-time flavor, we will refer to the maximum time the agent is allowed to consume before acting as the deadline. On any given occasion where the hierarchy is used and a fault is present, the agent will start at the top of the hierarchy and refine the current best hypothesis until it reaches a leaf node or its deadline. At any given point in time, therefore, there will be a current best hypothesis, an associated action, and a value of that action for the fault. It is this value which forms the basis for our evaluation. The value of the action associated with the current best hypothesis becomes a step function of time representing the performance of the hierarchy in one particular instance.

Figure 2: Hierarchy Performance on Fault F4 (value of best available action vs. time consumed).

For example, Figure 2 shows the performance of the hierarchy illustrated in Figure 1 on the fault F4,
We can now define the average hierarchy performance as follows: ECf,t) = P(qF)Hf,pf,t) Pf where the summation is over all paths Pf from the root node to a leaf node corresponding to fault f, and ECf,pf,t’ denotes the performance of the hierarchy over time assuming that the path pf is taken to the leaf node. The function P(F?I) denotes the probability that path pfis taken given that fault f is present. Having defined the performance of the hierarchy for a particul ar fault, we can define the overall performance of the hierarchy as the weighted average of the performances for each fault: EW gives the performance of the hierarchy at various possible deadlines f. It is possible to prove that under certain well-defined assumptions, the performance of action-based hierarchies is optimal. Specifically, we need to make the following assumptions: The values of tests are additive. The value of a st is given by the function V(T) defined in section 2. Two tests T1 and ‘I’2 may be combined into a single test T by combining their results: T = {fj nt2 : f 101 A t2 E T2]. Additivity means that the value of the combination is the sum of the values of each test: VU) = VCT1) + VCT2). In a similar manner, additivity must also apply to combinations of more that two tests. The value of any test or combination of tests is constant for all faults. That is, if this constant is 0.7 for a given combination of tests, then for any set of faults corresponding to particular outcomes of these test, the best action available will have value 0.7 for 1 the faults in the set. The deadline occurs immediately following a successful refinement of the current best hypothesis. The structuring of the hierarchy conforms with e algorithm given at the start of section 2. For any set of faults, there is always an action available which has positive expected value so that doing something is better than doing nothing. 
The proof is based on the fact that the action-based hierarchy structuring algorithm is a greedy algorithm, and when test values are additive the agent does best by simply performing the apparently best test first without considering lookahead.

4 Formal Experiments

The assumptions outlined in the last section clearly do not hold in general. Therefore, we designed experiments to test the following two hypotheses:

- Using the action-based hierarchy will provide substantially better performance than the decision tree when evaluated as described in section 2.
- Using the decision tree will provide substantially better performance when only speed in reaching a leaf node matters.

The experimental method was to randomly assign prior probabilities, test outcomes, and values of actions for faults. The prior probabilities were assigned by dividing the interval [0,1] into uniformly distributed partitions. Each test was randomly assigned to divide the fault set into two equal parts (the number of faults was always even). No fault appeared as part of both outcomes of a particular test. The values of actions for faults were also randomly distributed in the interval [0,1].

Figure 3: Hierarchy Performance with Different Deadlines (mean value of best action vs. deadline).

Figure 3 shows the difference in performance, over time, of action-based hierarchies, decision trees, and the randomly structured hierarchies. This shows
This corresponds, roughly speaking, to the fact that all hierarchies perform equally well at their root and their leaf nodes, but that at the intermediate nodes there is an advantage for certain hierarchies over others. The reader may wonder why there are not certain time points at which decision trees would have an advantage over action-based hierarchies. The answer appears to be that for individual performance profiles DT might have an advantage at certain time points, but this charateristic is lost when we compute the average. Now, suppose we are not interested in intermediate nodes but only in getting to a leaf node as quickly as possible. In Figure 4, we show a graph of the probability that a leaf node has been diagnosed by particular deadline values. Looking at this figure it is apparent that over a relatively small range of the deadline (between about 4 and 7) using a decision tree offers a very significant advantage over the other approaches. However, for other values of the deadline, it makes very little difference. 1 0.8 I Mean 1 Value 0.61 of Best Oa4 I Action0 . 2 0 t- 2 4 6 8 10 Deadline Figure 4: Hierarchy Performance using All-or-Nothing Evaluation 5 Implementation in a Medical (ReAct) We have used the idea of an action-based hierarchy to structure the knowledge base for a system, called ReAct, designed to provide fast response time in a surgical intensive care unit (SICU). ReAct is described in detail in [AS931 and is part of the Guardian SICU patient monitoring system [HA921. Because it uses an action-based hierarchy, ReAct is able to meet deadlines. This is in contrast with other medical AI systems ([BR87], [CL891, [FA801, and [HO8911 which perform diagnosis efficiently but without specifically addressing deadline issues. Knowledge regarding faults, tests, costs of tests, probabilities of faults, etc., has been provided by a medical expert; the hierarchy structuring algorithm has been used to produce an action-based hierarchy. 
A small part of the hierarchy is shown in Figure 5. In applying these ideas to a real-world domain, we are interested in determining whether the same results as reported earlier in this paper apply. In this domain, the theoretical assumptions made in section 3 will definitely not apply; nor will the variables necessarily fit the random distributions used to run the experiments described in section 4.

Figure 5: Portion of Medical Action-Based Hierarchy (showing, e.g., the action "Transfuse RBC", the test "HCT, Chest Tube Output, Radial MAP", the faults "Uncontrolled Vessel" and "Low Platelet Count", and the actions "Consider Surgery" and "Simple Platelets").

Figure 6 shows the results of an experiment run using the medical problem similar to the ones described in section 4. In this experiment we assumed that all tests took constant time to come back (30 minutes). As can be seen, for small values of the deadline, action-based hierarchies enjoyed a modest advantage over decision trees. For larger values of the deadline (not shown here) there was no discernible difference. We anticipate that if we took test times into account, the advantage of ABH over DT would be much more noticeable, even for larger values of the deadline.

In addition to the domain experiments described in section 5, we intend to explore a number of different avenues in our ongoing research. The first question is whether there are other, still better heuristics which could be used to structure the hierarchy. We have tried combining the two heuristics, and we expect that a more sophisticated combining function than we have used could lead to behavior which is better than either algorithm in certain circumstances. Another question is whether better performance could be obtained by altering the basic structure of the solution. At present, only one test is performed at each node in the hierarchy. In the domain described in section 5, this approach would be the exception rather than the rule: generally physicians will order a whole battery of tests to help them distinguish among the children of the current node. Thus, our hierarchies should be able to perform similarly if we hope to achieve performance comparable to that obtained by physicians.

Figure 6: Hierarchy Performance in Medical Problem (mean value of best action vs. deadline in minutes).

Our measures for cost and value (numbers between 0 and 1) are quite crude and may not be sufficient in many domains. Therefore, we intend to explore the process of hierarchy structuring in domains where we have more sophisticated notions of cost and value.

References

[AG87] Agre, P., Chapman, D. Pengi: an implementation of a theory of activity. Proceedings of the Sixth National Conference on Artificial Intelligence, 268-272, 1987.
[AS93] Ash, D., et al. Guaranteeing real-time response with limited resources. Artificial Intelligence in Medicine, 5(1):49-66, 1993.
[BR87] Bratko, I., Mozetic, I., Lavrac, N. Automatic synthesis and compression of cardiological knowledge. Machine Intelligence 11, pp 435-454, 1987.
[CH91] Chrisman, L., Simmons, R. Sensible planning: focusing perceptual attention. Ninth National Conference on Artificial Intelligence, 756-761, 1991.
[CL89] Clarke, J., et al. TraumAID: a decision aid for managing trauma at various levels of resources. Proceedings of the Thirteenth Annual Symposium on Computer Applications in Medical Care, Washington, DC, November, 1989.
[DE88] Dean, T., Boddy, M. An analysis of time-dependent planning. Seventh National Conference on Artificial Intelligence, 49-54, 1988.
[FA80] Fagan, L. VM: representing time-dependent relations in a medical setting. PhD Thesis, Computer Science Department, Stanford University, June, 1980.
[FA92] Fayyad, U., Irani, K. The attribute selection problem in decision tree generation. Tenth National Conference on Artificial Intelligence, 104-110, 1992.
[HA92] Hayes-Roth, B., et al.
Guardian: a prototype intelligent agent for intensive-care monitoring. Artificial Intelligence in Medicine, 4:165-185, 1992.
[HO87] Horvitz, E. Reasoning about beliefs and actions under computational resource constraints. Proceedings of the 1987 AAAI Workshop on Uncertainty in Artificial Intelligence, 1987.
[HO89] Horvitz, E., et al. Heuristic abstraction in the decision-theoretic Pathfinder system. Proceedings of the Thirteenth Annual Symposium on Computer Applications in Medical Care, Washington, DC, November, 1989.
[NI89] Nilsson, N. J. Action Networks. Proceedings from the Rochester Planning Workshop: From Formal Systems to Practical Systems, J. Tenenberg et al., ed., University of Rochester, 1989.
[QU83] Quinlan, J. R. Inductive inference as a tool for the construction of high-performance programs. In R. S. Michalski, T. M. Mitchell, and J. Carbonell, Machine Learning, Palo Alto, Calif.: Tioga, 1983.
[RO89] Rosenschein, S. J. Synthesizing information-tracking automata from environment descriptions. Proceedings of Conference on Principles of Knowledge Representation and Reasoning, Morgan Kaufmann, San Mateo, CA, 1989.
[RU91] Russell, S., Wefald, E. Principles of metareasoning. Artificial Intelligence, 49:361-395, 1991.
[SC87] Schoppers, M. Universal plans for reactive robots in unpredictable environments. Tenth International Joint Conference on Artificial Intelligence, 1987.

Real-Time Planning & Simulation 573 | 1993 | 85 |
1,415 | Planning With Deadlines in Stochastic Domains Thomas Dean, Leslie Pack Kaelbling, Jak Kirman, Ann Nicholson Department of Computer Science Brown University, Providence, RI 02912 tld@cs.brown.edu

Abstract

We provide a method, based on the theory of Markov decision problems, for efficient planning in stochastic domains. Goals are encoded as reward functions, expressing the desirability of each world state; the planner must find a policy (mapping from states to actions) that maximizes future rewards. Standard goals of achievement, as well as goals of maintenance and prioritized combinations of goals, can be specified in this way. An optimal policy can be found using existing methods, but these methods are at best polynomial in the number of states in the domain, where the number of states is exponential in the number of propositions (or state variables). By using information about the starting state, the reward function, and the transition probabilities of the domain, we can restrict the planner's attention to a set of world states that are likely to be encountered in satisfying the goal. Furthermore, the planner can generate more or less complete plans depending on the time it has available. We describe experiments involving a mobile robotics application and consider the problem of scheduling different phases of the planning algorithm given time constraints.

Introduction

In a completely deterministic world, it is possible for a planner simply to generate a sequence of actions, knowing that if they are executed in the proper order, the goal will necessarily result. In nondeterministic worlds, planners must address the question of what to do when things do not go as expected. The method of triangle tables [Fikes et al., 1972] made plans that could be executed robustly in any circumstance along the nominal trajectory of world states, allowing for certain classes of failures and serendipitous events.
It is often the case, however, that an execution error will move the world to a situation that has not been previously considered by the planner. Many systems (SIPE, for example [Wilkins, 1988]) can monitor for plan "failures" and initiate replanning. Replanning is often too slow to be useful in time-critical domains, however. Schoppers, in his universal plans [Schoppers, 1987], gives a method for generating a reaction for every possible situation that could transpire during plan execution; these plans are robust and fast to execute, but can be very large and expensive to generate. There is an inherent contradiction in all of these approaches. The world is assumed to be deterministic for the purpose of planning, but its nondeterminism is accounted for by performing execution monitoring or by generating reactions for world states not on the nominal planned trajectory. In this paper, we address the problem of planning in nondeterministic domains by taking nondeterminism into account from the very start. There is already a well-explored body of theory and algorithms addressing the question of finding optimal policies (universal plans) for nondeterministic domains. Unfortunately, these methods are impractical in large state spaces. However, if we know the start state, and have a model of the nature of the world's nondeterminism, we can restrict the planner's attention to a set of world states that are likely to be encountered on the way to the goal. Furthermore, the planner can generate more or less complete plans depending on the time it has available. In this way, we provide efficient methods, based on existing techniques of finding optimal strategies, for planning under time constraints in nondeterministic domains. Our approach addresses uncertainty resulting from control error, but not sensor error; we assume certainty in observations.
We assume that the environment can be modeled as a stochastic automaton: a set of states, a set of actions, and a matrix of transition probabilities. In the simplest cases, achieving a goal corresponds to performing a sequence of actions that results in a state satisfying some proposition. Since we cannot guarantee the length of a sequence needed to achieve a given goal in a stochastic domain, we are interested in building planning systems that minimize the expected number of actions needed to reach a given goal.

574 Dean From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

In our approach, constructing a plan to achieve a goal corresponds to finding a policy (a mapping from states to actions) that maximizes expected performance. Performance is based on the expected accumulated reward over sequences of state transitions determined by the underlying stochastic automaton. The rewards are determined by a reward function (a mapping from states to the real numbers) specially formulated for a given goal. A good policy in our framework corresponds to a universal plan for achieving goals quickly on average. In the following, we refer to the automaton modeling the environment as the system automaton. Instead of generating the optimal policy for the whole system automaton, we formulate a simpler or restricted stochastic automaton and then search for an optimal policy in this restricted automaton. The state space for the restricted automaton, called the envelope, is a subset of the states of the system automaton, augmented with a special state OUT that represents being in any state outside of the envelope. The algorithm developed in this paper consists of two basic subroutines. Envelope extension adds states to the restricted automaton, making it approximate the system automaton more closely.
Policy generation computes an optimal policy for the restricted automaton; a complete policy for the system automaton is constructed by augmenting the policy for the restricted automaton with a set of default actions or reflexes to be executed for states outside the envelope. The algorithm is implemented as an anytime algorithm [Dean and Boddy, 1988], one that can be interrupted at any point during execution to return an answer whose value, at least in certain classes of stochastic processes, improves in expectation as a function of the computation time. We gather statistics on how envelope extension and policy generation improve performance and use these statistics to compile expectations for allocating computing resources in time-critical situations. In this paper, we focus primarily on the details of the algorithm and the results of a series of computational experiments that provide some indication of its merit. Subsequent papers will expand on the representation for goals and deal with more complicated models of interaction that require more sophisticated methods for allocating computational resources.

Planning Algorithm

Definitions We model the entire environment as a stochastic automaton. Let S be the finite set of world states; we assume that they can be reliably identified by the agent. Let A be the finite set of actions; every action can be taken in every state. The transition model of the environment is a function mapping elements of S x A into discrete probability distributions over S. We write PR(s1, a, s2) for the probability that the world will make a transition from state s1 to state s2 when action a is taken. A policy π is a mapping from S to A, specifying an action to be taken in each situation. An environment combined with a policy for choosing actions in that environment yields a Markov chain [Kemeny and Snell, 1960].
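To make these definitions concrete, here is a minimal sketch of a stochastic automaton and a policy; fixing the policy turns the automaton into a Markov chain that can be sampled. The three-state transition model and the names (`P`, `policy`, `rollout`) are illustrative assumptions, not from the paper.

```python
import random

# Hypothetical toy automaton: 3 states, 2 actions.
# P[s][a] maps each successor state to its probability PR(s, a, s').
P = {
    0: {"go": {1: 0.8, 0: 0.2}, "stay": {0: 1.0}},
    1: {"go": {2: 0.8, 1: 0.2}, "stay": {1: 1.0}},
    2: {"go": {2: 1.0}, "stay": {2: 1.0}},  # state 2 is absorbing
}
policy = {0: "go", 1: "go", 2: "stay"}  # a mapping from S to A

def step(s, a, rng):
    """Sample a successor state from the distribution PR(s, a, .)."""
    r, acc = rng.random(), 0.0
    for s2, p in P[s][a].items():
        acc += p
        if r <= acc:
            return s2
    return s2  # guard against floating-point rounding

def rollout(s, policy, n, rng):
    """Fixing a policy yields a Markov chain; follow it for n steps."""
    traj = [s]
    for _ in range(n):
        s = step(s, policy[s], rng)
        traj.append(s)
    return traj

traj = rollout(0, policy, 20, random.Random(0))
```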
A reward function is a mapping from S to ℜ, specifying the instantaneous reward that the agent derives from being in each state. Given a policy π and a reward function R, the value of state s ∈ S, V_π(s), is the sum of the expected values of the rewards to be received at each future time step, discounted by how far into the future they occur. That is, V_π(s) = Σ_{t=0}^∞ γ^t E(R_t), where R_t is the reward received on the t-th step of executing policy π after starting in state s. The discounting factor, γ, is between 0 and 1 and controls the influence of rewards in the distant future. Due to properties of the exponential, the definition of V can be rewritten as

V_π(s) = R(s) + γ Σ_{s'∈S} PR(s, π(s), s') V_π(s') .   (1)

We say that policy π dominates (is better than) π' if, for all s ∈ S, V_π(s) ≥ V_π'(s), and for at least one s ∈ S, V_π(s) > V_π'(s). A policy is optimal if it is not dominated by any other policy. One of the most common goals is to achieve a certain condition p as soon as possible. If we define the reward function as R(s) = 0 if p holds in state s and R(s) = -1 otherwise, and represent all goal states as being absorbing, then the optimal policy will result in the agent reaching a state satisfying p as soon as possible. Absorbing means that all actions result in the same state with probability 1: ∀a ∈ A, PR(s, a, s) = 1. Making the goal states absorbing ensures that we go to the "nearest" state in which p holds, independent of the states that will follow. The language of reward functions is quite rich, allowing us to specify much more complex goals, including the maintenance of properties of the world and prioritized combinations of primitive goals. A partial policy is a mapping from a subset of S into actions; the domain of a partial policy π is called its envelope, E_π. The fringe of a partial policy, F_π, is the set of states that are not in the envelope of the policy, but that may be reached in one step of policy execution from some state in the envelope.
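For a fixed policy, equation (1) is a linear system in the values V_π(s) and can be solved directly as (I − γP_π)V = R. A minimal sketch with a hypothetical three-state chain and the 0/−1 reward described above:

```python
import numpy as np

gamma = 0.9
# Reward for "achieve p as soon as possible": 0 in the goal
# state (state 2, made absorbing), -1 everywhere else.
R = np.array([-1.0, -1.0, 0.0])
# P_pi[i, j] = PR(i, pi(i), j) under some fixed policy pi (made up here).
P_pi = np.array([
    [0.2, 0.8, 0.0],
    [0.0, 0.2, 0.8],
    [0.0, 0.0, 1.0],   # goal state is absorbing
])

# Equation (1) in matrix form: V = R + gamma * P_pi @ V,
# so V solves (I - gamma * P_pi) V = R.
V = np.linalg.solve(np.eye(3) - gamma * P_pi, R)
```

Under this reward, |V(s)| behaves like a discounted expected distance to the goal: the goal state has value 0 and states nearer the goal are less negative.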
That is, F_π = {s ∈ S | ∃s' ∈ E_π s.t. PR(s', π(s'), s) > 0} . To construct a restricted automaton, we take an envelope E of states and add the distinguished state OUT. For any states s and s' in E and action a in A, the transition probabilities remain the same. Further, for every s ∈ E and a ∈ A, we define the probability of going out of the envelope as

PR(s, a, OUT) = 1 - Σ_{s'∈E} PR(s, a, s') .

The OUT state is absorbing. The cost of falling out of the envelope is a parameter that depends on the domain. If it is possible to re-invoke the planner when the agent falls out of the envelope, then one approach is to assign V(OUT) to be the estimated value of the state into which the agent fell minus some function of the time to construct a new partial policy. Under the reward function described earlier, the value of a state is negative, and its magnitude is the expected number of steps to the goal; if time spent planning is to be penalized, it can simply be added to the magnitude of the value of the OUT state with a suitable weighting function.

Overall Structure We assume, initially, that there are two separate phases of operation: planning and execution. The planner constructs a policy that is followed by the agent until a new goal must be pursued or until the agent falls out of the current envelope. More sophisticated models of interaction between planning and execution are possible, including one in which the planner runs concurrently with the execution, sending down new or expanded strategies as they are developed. Questions of how to schedule deliberation are discussed in the following section (see also [Dean et al., 1993]). Execution of an explicit policy is trivial, so we describe only the algorithm for generating policies. The high-level planning algorithm, given a description of the environment and start state s0, is as follows:

1. Generate an initial envelope E
2. While (E ≠ S) and (not deadline) do
   2.1 Extend the envelope E
   2.2 Generate an optimal policy π for the restricted automaton with state set E ∪ {OUT}
3. Return π

The algorithm first finds a small subset of world states and calculates an optimal policy over those states. Then it gradually adds new states in order to make the policy robust by decreasing the chance of falling out of the envelope. After new states are added, the optimal policy over the new envelope is calculated. Note the interdependence of these steps: the choice of which states to add during envelope extension may depend on the current policy, and the policy generated as a result of optimization may be quite different depending on which states were added to the envelope. The algorithm terminates when a deadline has been reached or when the envelope has been expanded to include the entire state space. In the following sections, we consider each subcomponent of this algorithm in more detail.

Generating an initial envelope This high-level algorithm works no matter how the initial envelope is chosen, but if it is done with some intelligence, the early policies are much more useful. In our examples, we consider the goal of being in a state satisfying p as soon as possible. For such simple goals of achievement, a good initial envelope is one containing a chain of states from the initial state, s0, to some state satisfying p such that, for each state, there is some action with a non-zero probability of moving to the next state in the chain. In the implemented system, we generate an initial envelope by doing a depth-first search from s0, considering the most probable outcome for each action in decreasing order of probability. This yields a set of states that can be traversed with fairly high probability to a goal state.
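The restricted automaton of step 2.2 folds all probability mass that leaves the envelope into the single OUT state, using PR(s, a, OUT) = 1 − Σ_{s'∈E} PR(s, a, s'). A sketch of that construction (data layout and names are hypothetical):

```python
# Build the restricted automaton over an envelope E by redirecting
# all probability mass that leaves E into the distinguished OUT state.
def restrict(P, envelope):
    """P[s][a] maps successor -> probability; returns the same structure
    with successors outside `envelope` folded into the key 'OUT'."""
    restricted = {}
    for s in envelope:
        restricted[s] = {}
        for a, dist in P[s].items():
            inside = {s2: p for s2, p in dist.items() if s2 in envelope}
            out = 1.0 - sum(inside.values())   # PR(s, a, OUT)
            if out > 1e-12:
                inside["OUT"] = out
            restricted[s][a] = inside
    return restricted

# Toy transition model: state 2 lies outside the chosen envelope {0, 1}.
P = {
    0: {"go": {1: 0.8, 0: 0.2}},
    1: {"go": {2: 0.7, 1: 0.2, 0: 0.1}},
}
rp = restrict(P, envelope={0, 1})
```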
More sophisticated techniques could be used to generate a good initial envelope; our strategy is to spend as little time as possible doing this, so that a plausible policy is available as soon as possible.

Generating an optimal policy Howard's policy iteration algorithm is guaranteed to generate the optimal policy for the restricted automaton. The algorithm works as follows:

1. Let π' be any policy on E
2. While π ≠ π' do
   2.1 π := π'
   2.2 For all s ∈ E, calculate V_π(s) by solving the set of |E| linear equations in |E| unknowns given by equation 1
   2.3 For all s ∈ E, if there is some action a ∈ A s.t. [R(s) + γ Σ_{s'∈E∪{OUT}} PR(s, a, s') V_π(s')] > V_π(s), then π'(s) := a; otherwise π'(s) := π(s)
3. Return π

The algorithm iterates, generating at every step a policy that strictly dominates the previous policy, and terminates when a policy can no longer be improved, yielding an optimal policy. In every iteration, the values of the states under the current policy are computed. This is done by solving a system of equations; although this is potentially an O(|E|^2.8) operation, most realistic environments cannot transition from every state to every other, so the transition matrix is sparse, allowing much more efficient solution of the equations. The algorithm then improves the policy by looking for states s in which doing some action a other than π(s) for one step, then continuing with π, would result in higher expected reward than simply executing π. When such a state is found, the policy is changed so that it always chooses action a in that state. This algorithm requires a number of iterations at most polynomial in the number of states; in practice, for an instance of our domain with 6000 world states, it has never taken more than 16 iterations.
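The evaluate/improve loop above can be sketched with a dense transition array; the three-state, two-action instance (stay, or go toward an absorbing goal) is a made-up illustration, not the paper's implementation.

```python
import numpy as np

def policy_iteration(P, R, gamma, pi):
    """Howard-style policy iteration (sketch). P has shape (|A|, |S|, |S|),
    R has shape (|S|,), and pi is a list mapping state -> action index."""
    n = len(R)
    while True:
        # Evaluate: solve (I - gamma * P_pi) V = R, i.e. equation (1).
        P_pi = np.array([P[pi[s], s] for s in range(n)])
        V = np.linalg.solve(np.eye(n) - gamma * P_pi, R)
        # Improve: switch only on a strictly better one-step backup.
        Q = R[None, :] + gamma * np.einsum("asj,j->as", P, V)
        new_pi = [int(np.argmax(Q[:, s]))
                  if Q[:, s].max() > Q[pi[s], s] + 1e-12 else pi[s]
                  for s in range(n)]
        if new_pi == pi:
            return pi, V
        pi = new_pi

# Two actions: 0 = stay, 1 = go toward the absorbing goal state 2.
P = np.array([
    [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],  # stay
    [[0.2, 0.8, 0.0], [0.0, 0.2, 0.8], [0.0, 0.0, 1.0]],  # go
])
R = np.array([-1.0, -1.0, 0.0])
pi, V = policy_iteration(P, R, 0.9, [0, 0, 0])
```

Starting from "stay everywhere", the loop converges in a few iterations to "go" in the non-goal states, mirroring the typical 2-3 iteration behavior the paper reports when warm-starting from the previous policy.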
When we use this as a subroutine in our planning algorithm, we generate a random policy for the first step, and then for all subsequent steps we use the old policy as the starting point for policy iteration. (Since V(OUT) is fixed, and the OUT state is absorbing, it does not need to be explicitly included in the policy calculations.) Because, in general, the policy does not change radically when the envelope is extended, it requires very few iterations (typically 2 or 3) of the policy iteration algorithm to generate the optimal policy for the extended envelope. Occasionally, when a very dire consequence or an exceptional new path is discovered, the whole policy must be changed.

576 Dean

Extending the envelope There are a number of possible strategies for extending the envelope; the most appropriate depends on the domain. The aim of the envelope extension is to judiciously broaden the subset of the world states, by including states that are outside the envelope of the current policy but that may be reached upon executing the policy. One simple strategy is to add the entire fringe of the current policy, F_π; this would result in adding states uniformly around the current envelope. It will often be the case, however, that some of the states in the fringe are very unlikely given the current policy. A more reasonable strategy, similar to one advocated by Drummond and Bresina [Drummond and Bresina, 1990], is to look for the N most likely fringe states. We do this by simulating the restricted automaton and accumulating the probabilities of falling out into each fringe state. We then have a choice of strategies. We can add each of the N most likely fringe states. Alternatively, for goals of achievement, we can take each element of this subset of the fringe states and find a chain of states that leads back to some state in the envelope.
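The "N most likely fringe states" estimate can be obtained by Monte Carlo simulation of the restricted automaton, counting which out-of-envelope states the policy falls into. A toy sketch (transition numbers and names are hypothetical):

```python
import random
from collections import Counter

def likely_fringe(P, policy, envelope, start, n_runs, max_steps, rng):
    """Simulate the policy from `start`; count the fringe state reached
    each time a run falls out of the envelope. Returns fringe states
    ordered from most to least likely."""
    exits = Counter()
    for _ in range(n_runs):
        s = start
        for _ in range(max_steps):
            dist = P[s][policy[s]]
            r, acc = rng.random(), 0.0
            for s2, p in dist.items():
                acc += p
                if r <= acc:
                    break
            if s2 not in envelope:      # fell out: record the fringe state
                exits[s2] += 1
                break
            s = s2
    return [s for s, _ in exits.most_common()]

# Envelope {0, 1}; "slip" and "overshoot" are fringe states.
P = {
    0: {"go": {1: 0.7, "slip": 0.3}},
    1: {"go": {1: 0.5, "overshoot": 0.45, "slip": 0.05}},
}
policy = {0: "go", 1: "go"}
order = likely_fringe(P, policy, {0, 1}, 0, 2000, 50, random.Random(1))
```

Taking the first N entries of `order` gives the N most likely fringe states to add in the next round of envelope extension.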
In the experiments described in the following sections, fringe states are added rather than whole paths back to the envelope.

Example In our approach, unlike that of Drummond and Bresina, extending the current policy is coupled tightly and naturally to changing the policy as required to keep it optimal with respect to the restricted view of the world. The following example illustrates how such changes are made using the algorithm as described. The example domain is mobile-robot path planning. The floor plan is divided into a grid of 166 locations, L, with four directional states associated with each location, V = {N, S, E, W}, corresponding to the direction the robot is facing, resulting in a total of 664 world states. The actions available to the robot are {STAY, GO, TURN-RIGHT, TURN-LEFT, TURN-ABOUT}. The transition probabilities for the outcome of each action may be obtained empirically. In our experimental simulation the STAY action is guaranteed to succeed. The probability of success for GO and turning actions in most locations was 0.8, with the remainder of the probability mass divided between undesired results such as overshooting, over-rotating, slipping sideways, etc. The world also contains sinks, locations that are difficult or impossible to leave. On average each state has 15.6 successors. Figure 1 shows a subset of our domain, the locations surrounding a stairwell, which is a complete sink, i.e., there are no non-zero transitions out of it; also, it is only accessible from one direction, north. In this figure there are four small squares associated with each location, one for each possible heading; thus each small square corresponds to a state, and the direction of the arrow shows the policy for the robot in that location and with that heading.
Figure 1(a) shows the optimal policy for a small early envelope; Figures 1(b) and (c) show two subsequent envelopes where the policy changes to direct the robot to circumvent the stairwell, reflecting aversion to the risk involved in taking the shortest path.

Deliberation Scheduling

Given the two-stage algorithm for generating policies provided in the previous section, we would like the agent to allocate processor time to the two stages when faced with a time-critical situation. Determining such allocations is called deliberation scheduling [Dean and Boddy, 1988]. In this paper, we consider situations in which the agent is given a deadline and an initial state and has until the deadline to produce a policy, after which no further adjustments to the policy are allowed. The interval of time from the current time until the deadline is called the deliberation interval. We address more complicated situations in [Dean et al., 1993]. Deliberation scheduling relies on compiling statistics to produce expectations regarding performance improvements that are used to guide scheduling. In general, we cannot guarantee that our algorithm will produce a sequence of policies, π̂0, π̂1, π̂2, ..., that increase in value, e.g., V_π̂0(s0) < V_π̂1(s0) < V_π̂2(s0) < ..., where the π̂i are complete policies constructed by adding reflexes to the partial policies generated by our algorithm. The best we can hope for is that the algorithm produces a sequence of policies whose values increase in expectation, e.g., E[V_π̂0(s0)] < E[V_π̂1(s0)] < E[V_π̂2(s0)] < ..., where here the initial state s0 is considered a random variable. In allocating processor time, we are concerned with the expected improvement, E[V_π̂i+1(s0) - V_π̂i(s0)], relative to a given allocation of processor time.
If envelope extension did not make use of the current policy, we could just partition the deliberation interval into two subintervals, the first spent in envelope extension and the second in policy generation. However, since the two stages are mutually dependent, we have to consider performing multiple rounds where each round involves some amount of envelope extension followed by some amount of policy generation. Let t_EEi (t_PGi) be the time allocated to envelope extension (policy generation) in the ith round of the algorithm and E_i be the envelope following the ith round of envelope extension. To obtain an optimal deliberation schedule, we would have to consider the expected value of the final policy given k = 1, 2, ... rounds and all possible allocations to t_EEi and t_PGi for 1 ≤ i ≤ k. We suspect that finding the optimal deliberation schedule is NP-hard. To expedite deliberation scheduling, we use a greedy algorithm based on the following statistics:

1. The expected improvement starting with an envelope of size m and adding n states: E[V_π̂i+1(s0) - V_π̂i(s0) | m = |E_i|, m + n = |E_i+1|];

2. The expected time required to extend by n states an envelope of size m and compute the optimal policy for the resulting restricted automaton: E[t_EEi+PGi | m = |E_i|, m + n = |E_i+1|].

[Figure 1: Example of policy change for different envelopes near a complete sink. Each location shows one small square per heading, with actions GO, TURN-RIGHT, TURN-LEFT, TURN-ABOUT; the direction of the arrow indicates the current policy for that state. (a) Sink not in the envelope: the policy chooses the straightforward shortest path. (b) Sink included: the policy skirts north around it. (c) All states surrounding the stairwell included: the barriers on the south, east and west sides allow the policy to take a longer but safer path. For this run γ = 0.999999 and V(OUT) = -4000.]
After each round of envelope extension followed by policy generation we have an envelope of some size m; we find the n maximizing the ratio of (1) to (2), add n states, and perform another round, time permitting. If the deadline occurs during envelope extension, then the algorithm returns the policy from the last round. If the deadline occurs during policy generation, then the algorithm returns the policy from the last iteration of policy iteration.

Results

In this section, we present results from the iterative refinement algorithm using the table-lookup deliberation scheduling strategy and statistics described in the previous sections. We generated 1.6 million data points to compute the required statistics for the same robot-path-planning domain. The start and goal states were chosen randomly for executions of the planning algorithm using a greedy deliberation strategy, where N, the number of fringe states added for each phase of envelope extension, was determined from the deliberation scheduling statistics. We compared the performance of (1) our planning algorithm using the greedy deliberation strategy to (2) policy iteration optimizing the policy for the whole domain. Our results show that the planning algorithm using the greedy deliberation strategy supplies a good policy early, and typically converges to a policy that is close to optimal before the whole-domain policy iteration method does. Figure 2 shows average results from 620 runs, where a single run involves a particular start state and goal state. The graph shows the average improvement of the start state under the policy available at time t, V_π̂(s0), as a function of time. In order to compare results from different start/goal runs, we show the average ratio of the value of the current policy to the value of the optimal policy for the whole domain, plotted against the ratio of actual time to the time, T_opt, that policy iteration takes to reach that optimal value.
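The greedy step, picking the extension size n that maximizes expected improvement (statistic 1) per unit of expected time (statistic 2), can be sketched as a table lookup; the numbers below are hypothetical, not the paper's compiled statistics.

```python
def pick_extension(m, improvement, time_cost, choices):
    """improvement[(m, n)] ~ E[value gain | envelope grows m -> m+n];
    time_cost[(m, n)]   ~ E[time for extension + policy generation].
    Return the n with the best gain-per-time ratio."""
    return max(choices, key=lambda n: improvement[(m, n)] / time_cost[(m, n)])

# Hypothetical compiled statistics for an envelope of size m = 10.
improvement = {(10, 5): 2.0, (10, 10): 3.0, (10, 20): 3.5}
time_cost   = {(10, 5): 1.0, (10, 10): 2.5, (10, 20): 6.0}
n = pick_extension(10, improvement, time_cost, [5, 10, 20])
```

Here adding 5 states wins (ratio 2.0) even though larger extensions yield bigger absolute gains, which is the essence of the greedy deliberation strategy.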
The greedy deliberation strategy performs significantly better than the standard optimization method. We also considered very simple strategies such as adding a small fixed N each iteration, and adding the whole fringe each iteration, which performed fairly well for this domain, but not as well as the greedy policy. Further experimentation is required to draw definitive conclusions about the comparative performance of these deliberation strategies for particular domains.

[Figure 2: Comparison of planning algorithm using greedy deliberation strategy (dashed line) with the policy iteration optimization method (solid line). Value ratio (0.0 to 0.9) plotted against normalized time (0.00 to 1.00).]

Related Work and Conclusions

Our primary interest is in applying the sequential decision making techniques of Bellman [Bellman, 1957] and Howard [Howard, 1960] in time-critical applications. Our initial motivation for the methods discussed here came from the 'anytime synthetic projection' work of Drummond and Bresina [Drummond and Bresina, 1990]. We improve on the Drummond and Bresina work by providing (i) coherent semantics for goals in stochastic domains, (ii) theoretically sound probabilistic foundations, and (iii) decision-theoretic methods for controlling inference. The approach described in this paper represents a particular instance of time-dependent planning [Dean and Boddy, 1988] and borrows from, among others, Horvitz's [Horvitz, 1988] approach to flexible computation. Boddy [Boddy, 1991] describes solutions to related problems involving dynamic programming. Hansson and Mayer's BPS (Bayesian Problem Solver) [Hansson and Mayer, 1989] supports general state-space search with decision-theoretic control of inference; it may be that BPS could be used as the basis for envelope extension, thus providing more fine-grained decision-theoretic control.
Christiansen and Goldberg [Christiansen and Goldberg, 1990] also address the problem of planning in stochastic domains. The approach is applicable to stochastic domains with certain characteristics; typically there are multiple paths to the goal and the domain is relatively benign. If there is only one path to the goal, all the work will be done by the procedure finding the initial envelope, and extending the envelope only improves the policy if the new states can be recovered from. Our future research plans involve extending the approach in several directions: allowing more complex goals; performing more complicated deliberation scheduling, such as integrating online deliberation in parallel with the execution of policies; and relaxing the assumption of observation certainty to handle sensor error.

Acknowledgements. Thomas Dean's work was supported in part by a National Science Foundation Presidential Young Investigator Award IRI-8957601, by the Advanced Research Projects Agency of the Department of Defense monitored by the Air Force under Contract No. F30602-91-C-0041, and by the National Science Foundation in conjunction with the Advanced Research Projects Agency of the Department of Defense under Contract No. IRI-8905436. Leslie Kaelbling's work was supported in part by a National Science Foundation National Young Investigator Award IRI-9257592 and in part by ONR Contract N00014-91-4052, ARPA Order 8225.

References

Bellman, R. 1957. Dynamic Programming. Princeton University Press.
Boddy, M. 1991. Anytime problem solving using dynamic programming. In Proceedings AAAI-91. AAAI. 738-743.
Christiansen, A., and Goldberg, K. 1990. Robotic manipulation planning with stochastic actions. In DARPA Workshop on Innovative Approaches to Planning, Scheduling and Control. San Diego, California.
Dean, T., and Boddy, M. 1988. An analysis of time-dependent planning. In Proceedings AAAI-88. AAAI. 49-54.
Dean, T.; Kaelbling, L.; Kirman, J.; and Nicholson, A.
1993. Deliberation scheduling for time-critical sequential decision making. Submitted to Ninth Conference on Uncertainty in Artificial Intelligence.
Drummond, M., and Bresina, J. 1990. Anytime synthetic projection: Maximizing the probability of goal satisfaction. In Proceedings AAAI-90. AAAI. 138-144.
Fikes, R. E.; Hart, P. E.; and Nilsson, N. J. 1972. Learning and executing generalized robot plans. Artificial Intelligence 3:251-288.
Hansson, O., and Mayer, A. 1989. Heuristic search as evidential reasoning. In Proceedings of the Fifth Workshop on Uncertainty in AI. 152-161.
Horvitz, E. J. 1988. Reasoning under varying and uncertain resource constraints. In Proceedings AAAI-88. AAAI. 111-116.
Howard, R. A. 1960. Dynamic Programming and Markov Processes. MIT Press, Cambridge, Massachusetts.
Kemeny, J. G. and Snell, J. L. 1960. Finite Markov Chains. D. Van Nostrand, New York.
Schoppers, M. J. 1987. Universal plans for reactive robots in unpredictable environments. In Proceedings IJCAI 10. IJCAII. 1039-1046.
Wilkins, D. E. 1988. Practical Planning: Extending the Classical AI Planning Paradigm. Morgan-Kaufmann, Los Altos, California.

Real-Time Planning & Simulation 579 | 1993 | 86 |
1,416 | Computer Science Department University of Massachusetts Amherst, MA 01003 EMAIL: garvey@cs.umass.edu

Abstract

Design-to-time is an approach to real-time scheduling in situations where multiple methods exist for many tasks that the system needs to solve. Often these methods will have relationships with one another, such as the execution of one method enabling the execution of another, or the use of a rough approximation by one method affecting the performance of a method that uses its result. Most previous work in the scheduling of real-time AI tasks has ignored these relationships. This paper presents an optimal design-to-time scheduler for particular kinds of relationships that occur in an actual AI application, and examines the performance of that scheduler in a simulation environment that models the tasks of that application.

Introduction

One of the major difficulties in the real-time scheduling of AI tasks is their lack of predictable durations. This difficulty occurs in non-AI systems, but it is especially prominent in AI problem-solving because of the inherent nondeterminism of most AI problem-solving techniques due to their extensive use of search. For this reason, most AI systems use some form of approximation to reduce the nondeterminism and make system performance more predictable. At least two broadly different kinds of approximation algorithms have been examined. They are:

• Iterative refinement, where an imprecise answer is generated quickly and refined through some number of iterations. There are several variations, including milestone methods, where a procedure explicitly generates intermediate results as often as is deemed useful, and sieve functions, where intermediate results are refined by running them through a series of functions (known as sieves) that improve the results [Liu et al., 1991].
- Multiple methods, where a number of different algorithms are available for a task, each of which is capable of generating a solution. These algorithms emphasize different characteristics of the problem, which might be applicable in different situations. These algorithms also make tradeoffs of solution quality versus time.

*This work was partly supported by NSF contract CDA 8922572, ARPA under ONR contract N00014-92-J-1698 and ONR contract N00014-92-J-1450. The content of the information does not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred.

580 Garvey

The scheduling problem for approximate algorithms is to decide how to allocate processing time among approximations for different tasks so as to optimize the total performance of the system. Several approaches to this scheduling problem have been described in the literature [Dean and Boddy, 1988; Liu et al., 1991; Russell and Zilberstein, 1991]. Nearly all of these approaches assume that tasks are either totally independent or have only hard precedence constraints between them. However, AI applications often do not consist of independent tasks, but rather of a series of interrelated subproblems whose consistent solution is required for an acceptable answer. The importance of taking these relationships into account in scheduling decisions has been observed in our work in sensor interpretation [Garvey and Lesser, 1993; Lesser and Corkill, 1983]. One important reason why other work has not focused on relationships is undoubtedly the difficulty of scheduling related tasks efficiently. While we don't offer a proof of it here due to space limitations, it is evident that the scheduling problems we are investigating fall into the class of NP-hard problems, as others have shown for similar problems not involving task interrelationships [Graham et al., 1979; Liu et al., 1991].
As we will discuss below, we have developed a scheduling algorithm for a specific class of approximation algorithms and task structures that in the worst case has exponential performance, but, in practice, is able to schedule tasks effectively.

Our new scheduling algorithm that exploits task interrelationships is appropriate for what we have called the design-to-time approach to real-time problem-solving [Decker et al., 1990; Garvey and Lesser, 1993]. Design-to-time (a generalization of what we have previously called approximate processing [Lesser et al., 1988]) is an approach to solving problems in domains where multiple methods are available for many tasks and satisficing solutions are acceptable. These methods make tradeoffs in solution quality versus execution time, and may only be applicable in particular environmental situations.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

The methodology is known as design-to-time because it advocates the use of all available time to generate the best solutions possible. It is a problem-solving method of the type described by D'Ambrosio [D'Ambrosio, 1989] as those which "given a time bound, dynamically construct and execute a problem solving procedure which will (probably) produce a reasonable answer within (approximately) the time available." Design-to-time can only be successful if the duration and quality associated with methods is fairly predictable. The predictability issue was investigated in detail in a previous paper [Garvey and Lesser, 1993], with the result that the predictability necessary for execution times is based on a complex set of factors that include how busy the agent is and how difficult it is for the agent to determine when a method is not performing as expected.
An agent can tolerate uncertainty in its predictions if:

- monitoring can be done quickly and accurately, so that when a task will not meet its deadline enough time remains to execute a faster method, or
- intermediate results can be shared among methods, so that when it is necessary to switch to a faster method the intermediate results generated by the previous method can be used, or
- there exists a fallback method that quickly generates a minimally acceptable solution.

The next section presents a model of task structures that supports satisficing real-time tasks. The following section describes a particular class of task structure, then presents an algorithm for scheduling that class and gives an example of that algorithm scheduling a set of task groups. That is followed by a section that examines the performance of that algorithm in a simulation environment. Finally, we summarize our results and discuss future directions.

Task Structures

This section defines a model of task structures that has the complexity necessary to describe the unpredictability of tasks and the interactions among tasks¹. In this model a problem consists of a set of independent task groups. Each task group contains a structured set of dependent tasks. Task groups 𝒯 occur in the environment at some frequency, and induce tasks T to be executed by the agent under study. Each task group has a deadline D(𝒯). In this model the value of performing a task is known as the quality of the task. The term quality summarizes several possible properties of actions or results in a real system: certainty, precision, and completeness of a result, for example [Decker et al., 1990]. Task group quality (Q(𝒯)) is based on the subtask relationship. In the experiments described in this paper, tasks accumulate quality using minimum or maximum functions, i.e., a task's quality at time t is either the minimum or maximum of the qualities of each of its subtasks at time t.
This quality function is constructed recursively: each task group consists of tasks, each of which consists of subtasks, etc., until individual executable tasks (known as executable methods) are reached. Executable methods have a base level quality and duration, which in this work are generated randomly for the experimental evaluation, but are correlated with one another (i.e., higher quality methods tend to take longer than lower quality methods).

¹A more detailed mathematical description of this model can be found in [Decker et al., 1992].

Besides task/subtask relationships, tasks can have other relationships to tasks in their task group. Many such relationships are possible, including:

- enables constraints: Task A must be executed before Task B. This is usually a hard constraint.
- facilitates relationships [Decker and Lesser, 1991]: If Task A is executed before Task B, then Task B will have increased quality and/or decreased duration. This could result, for example, from Task A performing part of the work that would have been done by Task B.
- hinders relationships: If Task A is executed before Task B, then Task B will have decreased quality and/or increased duration. This could result, for example, from Task A using an approximation that reduces the precision with which Task B can be performed.

These relationships can affect the base level quality and duration of affected methods. In this work we have examined task structures that have acyclic enables and hinders constraints. For each task in a task structure there may be multiple sets of subtasks that can be combined to solve the task, although a particular scheduling algorithm may not enumerate all such combinations. Each of these sets is known as a method for solving the task. At least some of these methods may involve approximations and thus be satisficing.
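The recursive min/max quality accumulation described above can be sketched as follows (a minimal illustration; the dictionary-based representation and the function name are my own, not the authors' implementation):

```python
def task_quality(task, t):
    """Quality of a task at time t: an executable method contributes its
    base quality once it has finished; an interior task combines its
    subtasks' qualities with min (AND) or max (OR)."""
    if "subtasks" not in task:  # executable method (a leaf)
        return task["quality"] if t >= task["finish_time"] else 0
    combine = min if task["accumulate"] == "min" else max
    return combine(task_quality(s, t) for s in task["subtasks"])

# A tiny task group: a min (AND) root over one method and one max (OR) task.
group = {"accumulate": "min", "subtasks": [
    {"quality": 5, "finish_time": 5},
    {"accumulate": "max", "subtasks": [
        {"quality": 8, "finish_time": 9},
        {"quality": 12, "finish_time": 21}]}]}

print(task_quality(group, 10))  # min(5, max(8, 0)) = 5
```

At t = 10 the slower (quality 12) method has not finished, so the OR task reports 8 and the AND root reports min(5, 8) = 5.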
The scheduling problem for sets of task groups is to find an ordered set of executable methods that:

- generate non-zero quality for each task group 𝒯,
- maximize the total quality, Q(𝒯), of all task groups added together (possibly weighted by the importance of the task group, although that is not examined in this paper), and
- do not execute any executable methods after the deadline of their task group, D(𝒯).

Figure 1: An example task group. The dark lines indicate subtask relationships. The thin gray lines represent enables constraints. The thick gray lines represent hindering constraints. The standard notation of and for minimum and or for maximum is used.

Figure 1 is an example of a simple task group. In this task group Composed Task is solved by solving each of Task A, Task B and Task C in order. (In order because of the enables constraints.) Each of these tasks has multiple solution methods available for solving it, where an increasing method number means a longer, more complete method. The thick gray lines represent hindering constraints from Method A1 to each of Task B and Task C. This means that if Method A1 (presumably a fast, imprecise method) is used to solve Task A, then Tasks B and C will take longer to complete and/or produce lower quality results.

Real-Time Planning & Simulation 581

A Design-to-time Scheduler

This section describes an algorithm for scheduling the execution of executable methods in environments where:

- The task/subtask relationship forms a tree with a single root for each task group. This means that each task and method has exactly one supertask.
- Tasks generate quality using one of minimum (AND) or maximum (OR).
- Enables relationships may exist among the subtasks of tasks that accumulate quality using minimum. The enables relationships are mutually consistent (i.e., there are no cycles). This corresponds to the situation where there is a body of work that must be completed to satisfy a task and this work must be done in a particular order.
- Hinders relationships may exist in situations where enables may exist and an enabling subtask has a maximum quality accumulation function. In this situation there may be a hinders relationship from the lowest quality method for solving the subtask to the tasks that the subtask enables. This corresponds to the situation where using a crude approximation for a task can have negative effects on the behavior of tasks that use the result of the approximated task.

These environmental characteristics closely model characteristics seen in a sensor interpretation application. In particular, the enables relationships appear as requirements that low level data be processed before high level interpretations of that data are made, and the hinders relationships appear in the situation where fast, imprecise approximations of low level data processing can both increase the duration and decrease the precision of high level results [Garvey and Lesser, 1993; Lesser and Corkill, 1983].

The Algorithm

Briefly, this algorithm recursively finds all methods for executing each task in the task structure, pruning those methods that are superseded by other methods that generate greater or equal quality in equal or less time. In calculating the expected quality of a method it takes enables and hinders constraints into account. When it has found all unpruned methods for every task group, it orders the task groups by deadline and finds the combination of methods for the task groups that generates the highest total quality while meeting all deadlines. It then schedules the execution of each individual executable method using a simple algorithm that ensures that no enables constraints are violated and avoids hinders constraints if possible. If no schedule can be found that generates quality for all task groups, the scheduler returns a schedule that generates some quality for as many task groups as possible. This algorithm works its way up from the leaves of the tree.
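The pruning rule (discard an alternative if another achieves greater or equal quality in equal or less time) amounts to keeping only the quality/duration frontier. A hedged sketch, with the tuple layout assumed for illustration (the quality/duration pairs happen to be those of the paper's later Task Group 1 example):

```python
def prune_dominated(alts):
    """Return the alternatives that survive pruning: one is discarded if
    another alternative has >= quality and <= duration (one copy of exact
    ties is kept).  alts is a list of (label, quality, duration) tuples."""
    kept, best_quality = [], float("-inf")
    # Sweep in order of increasing duration; break duration ties by taking
    # the higher-quality alternative first so it shadows the others.
    for alt in sorted(alts, key=lambda a: (a[2], -a[1])):
        if alt[1] > best_quality:
            kept.append(alt)
            best_quality = alt[1]
    return kept

# The six Task Group 1 combinations from the example in the next section:
tg1 = [("A1+B1", 4, 16.25), ("A1+B2", 5, 20), ("A2+B1", 7, 16),
       ("A2+B2", 7, 19), ("A3+B1", 8, 19), ("A3+B2", 9, 22)]
print([a[0] for a in prune_dominated(tg1)])  # ['A2+B1', 'A3+B1', 'A3+B2']
```

Note that sorting makes dominance a single linear sweep: anything whose quality does not exceed the best quality seen at a shorter-or-equal duration is dominated.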
In all examples in this paper, those leaves are executable methods; however, there is no reason why they could not be higher level tasks (with an estimated duration and quality) whose detailed execution is scheduled at a later time.

The optimality of this algorithm follows from the fact that it effectively generates all possible alternatives, then chooses the ones that generate the maximum quality possible without missing any deadlines. As the alternatives for each task are generated, ones that could never be chosen because other alternatives exist that are always better are pruned. This pruning is effective only because we can make this determination locally, because of the constraints on where relationships can occur.

In the worst case this algorithm takes time exponential in the number of tasks, but, in practice, pruning and a clustering effect (to be described) usually make it much more efficient. The pruning reduces the number of alternatives that need to be considered for each task. In fact, pruning can reduce the number of alternatives for a task to no more than the number of distinct quality values possible. In the case where quality is a real number, this is not particularly helpful. However, if quality values are symbols from a small set of possible values or are somehow otherwise limited, this can significantly reduce the number of alternatives considered. As illustrated by the example in the next section, our experiments achieved a clustering effect by using small integer values for quality and combining them using minimum and maximum, resulting in a small set of possible quality values.

Figure 2: An example problem consisting of two task groups. The dark lines indicate subtask relationships. The thin gray lines represent enables constraints. The thick gray line represents a hinders constraint.
  Method      Duration   Quality
  Method A1   5          5
  Method A2   7          7
  Method A3   10         9
  Method B1   9          8
  Method B2   12         12
  Method C1   4          4
  Method C2   7          6
  Method D1   6          5
  Method D2   5          5

Table 1: Duration and quality for the executable methods in the example problem.

An Example

To describe the details of the algorithm more specifically, we now show how it would schedule the two task groups shown in Figure 2 with associated durations and qualities shown in Table 1. Task group 1 (TG1) has both a hinders and an enables relationship, while TG2 has only an enables relationship. TG1 and TG2 have deadlines of 25 and 35 respectively. The hinders relationship has the effect of reducing quality by 50% and increasing duration by 25%.

First the algorithm recursively finds all alternatives for each element of the task structure. Each executable method has exactly one alternative, the method itself. Tasks A, B, and C each accumulate quality using maximum, so they only need to execute one of their subtasks, giving them 3, 2, and 2 alternatives respectively². Task D accumulates quality using minimum, so it has only one alternative, that which executes both of its subtasks. No pruning is possible in any of these situations. Finally, the algorithm finds the alternatives for each task group by combining alternatives from the associated subtasks. The possible alternatives for TG1 are shown in Table 2. In this case alternatives 1, 2 and 4 can be pruned (as indicated in Table 2) because other alternatives exist that can generate equal or higher quality in equal or shorter time. Note that the effects of the hinders relationship from Method A1 to Task B are shown in the reduced qualities and increased durations of Methods B1 and B2 in alternatives 1 and 2. Similarly the possible alternatives for TG2 (neither of which can be pruned) are shown in Table 3. Finally, the alternatives for the entire set of task groups are shown in Table 4.
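The whole worked example can be recomputed in a few lines, under the stated hinders effect (quality halved, duration increased 25% when Method A1 runs first) and assuming TG1's methods run before TG2's since TG1 has the earlier deadline; the method data comes from Table 1, while the representation and names are my own sketch, not the authors' code:

```python
from itertools import product

# (duration, quality) per executable method, from Table 1.
M = {"A1": (5, 5), "A2": (7, 7), "A3": (10, 9),
     "B1": (9, 8), "B2": (12, 12), "C1": (4, 4),
     "C2": (7, 6), "D1": (6, 5), "D2": (5, 5)}

def tg1_alternatives():
    """TG1 = min(Task A, Task B); Method A1 hinders Task B."""
    alts = []
    for a, b in product(("A1", "A2", "A3"), ("B1", "B2")):
        (da, qa), (db, qb) = M[a], M[b]
        if a == "A1":                      # hinders: -50% quality, +25% duration
            qb, db = qb * 0.5, db * 1.25
        alts.append((frozenset((a, b)), min(qa, qb), da + db))
    return alts

tg2 = [(frozenset(("C1", "D1", "D2")), 4, 15),
       (frozenset(("C2", "D1", "D2")), 5, 18)]

# Pick the combination with maximum total quality that meets both deadlines,
# with TG1's methods scheduled first (TG1 has the earlier deadline, 25).
best = max(((q1 + q2, m1 | m2)
            for (m1, q1, d1), (m2, q2, d2) in product(tg1_alternatives(), tg2)
            if d1 <= 25 and d1 + d2 <= 35), key=lambda x: x[0])
print(best[0], sorted(best[1]))  # 12 ['A2', 'B1', 'C2', 'D1', 'D2']
```

The six TG1 tuples reproduce Table 2 (e.g., {A3, B2} gives min(9, 12) = 9 at duration 22), and the selected combination matches the paper's chosen Alternative 2.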
Alternatives 4, 5, and 6 can be pruned because TG2 does not meet its deadline of time 35. Alternative 3 can be pruned because it is redundant with Alternative 2. The scheduler chooses Alternative 2, which generates the maximum possible quality while meeting all deadlines. It then finds an ordering for the chosen alternative that meets all enables constraints; for example it could choose the schedule: A2, B1, C2, D1, D2.

²There is no need to consider alternatives that involve executing more than one of these subtasks, because no possible gain could result. However, in cases where such gain could result, for example when quality is accumulated in an additive fashion, all possible subgroupings must be considered.

Figure 3: Maximum number of possible quality values versus the average runtime of the scheduler.

Figure 4: Number of executable methods versus the average runtime of the scheduler. The middle line is the median runtime. The upper and lower lines are the 95th and 5th percentiles respectively.

  ID   Set of methods            Expected quality   Expected duration
  1    {Method A1, Method B1}    min(5, 4) = 4      5 + 11.25 = 16.25   (pruned)
  2    {Method A1, Method B2}    min(5, 6) = 5      5 + 15 = 20         (pruned)
  3    {Method A2, Method B1}    min(7, 8) = 7      7 + 9 = 16
  4    {Method A2, Method B2}    min(7, 12) = 7     7 + 12 = 19         (pruned)
  5    {Method A3, Method B1}    min(9, 8) = 8      10 + 9 = 19
  6    {Method A3, Method B2}    min(9, 12) = 9     10 + 12 = 22

Table 2: Alternatives for Task Group 1.

  ID   Set of methods   Expected quality    Expected duration
  1    {C1, D1, D2}     min(4, 5, 5) = 4    4 + 6 + 5 = 15
  2    {C2, D1, D2}     min(6, 5, 5) = 5    7 + 6 + 5 = 18

Table 3: Alternatives for Task Group 2.

Experimental Results

In order to be practically useful, a design-to-time scheduling algorithm needs to be subject to the same kind of controls that it expects from domain level tasks. In particular, it needs to be able to trade off the quality of its schedules as a function of the time devoted to scheduling. This section describes two measures of the performance of our scheduling algorithm as a function of the task structures it is scheduling.
The first experiment measures the effect of the number of distinct possible quality values on the performance of the scheduler. The second experiment measures the effect of the size of the task structure (as reflected in the number of executable methods) on the performance of the scheduler.

Our experiments were conducted on randomly generated sets of task groups with enables and hinders relationships of the form described above. In the first experiment the number of task groups varied from 1 to 4 (to vary the size of the problems significantly); in the second experiment there was always 1 task group (to isolate the effect of the number of methods on scheduler performance). We controlled the size of the trees generated by having a maximum branching factor and a maximum depth; in these experiments the maximums were set to 5. We also controlled the likelihood that enables and hinders relationships would appear in situations where they were possible; these values were 50% and 100% respectively.

Figure 3 shows the effect of the maximum number of distinct quality values on the performance of the scheduler. This experiment was conducted by generating a task structure, then randomly assigning quality values to the executable methods by choosing them uniformly from the set of possible quality values. As this graph shows, the runtime of the scheduler appears to increase in a logarithmic fashion as the number of possible quality values increases.

Figure 4 shows the effect of the number of methods in the task structure on the performance of the scheduler. This experiment was conducted by generating random task structures, scheduling them using the design-to-time scheduling algorithm, and recording a number of statistics including both the runtime of the scheduler and the number of executable methods in the task structure.
We then collected together all of the data from each of several thousand runs and found the average runtime for each distinct number of executable methods. This suggests that the performance of the scheduler is polynomial in the number of executable methods, and that performance becomes significantly less predictable as the number of methods increases.

The results of these experiments suggest that a design-to-time scheduler could control its own performance by dynamically modifying the task structures it is scheduling. The result relating to the number of possible quality values suggests that a scheduler could reduce its runtime by reducing the number of distinct quality values in the task structure it is scheduling. It could do this by bucketizing the quality values into a smaller set of buckets and treating all quality values in the same bucket as identical. This approximation will have the effect of reducing the precision of the final schedule, because the scheduler will not consider fine-grained distinctions among methods. However, because it does not throw away any methods, the scheduler will always find a schedule in those situations where it would have found a schedule originally; it just might not be as good a schedule.

The result concerning the number of methods suggests that if a scheduler could reduce the number of methods it had to consider it could reduce its runtime. It could do this by reducing the number of methods considered for tasks that generate quality in a maximum fashion. It is probably best not to remove the fastest method or the highest quality method, but methods in between can be ignored. This approximation will have the effect of reducing the completeness of the schedule. Not all possible schedules will have been considered, so the best schedule may not be found. However, if the scheduler does not throw away the fastest methods, it will always be able to find a schedule in those situations where it could find one originally.
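The bucketizing approximation can be sketched in a few lines (the bucket scheme and names are my own; the paper does not specify a particular one):

```python
def bucketize(quality, n_buckets, q_max):
    """Map a quality in [0, q_max] onto one of n_buckets coarse levels.
    Alternatives whose qualities fall in the same bucket become
    indistinguishable to the scheduler, shrinking the set of distinct
    quality values it must consider."""
    if q_max <= 0:
        return 0
    return min(int(quality * n_buckets / q_max), n_buckets - 1)

# With 3 buckets over qualities 0..12, the nine method qualities of Table 1
# collapse to two distinct levels:
qualities = [5, 7, 9, 8, 12, 4, 6, 5, 5]
print(sorted({bucketize(q, 3, 12) for q in qualities}))  # [1, 2]
```

Since bucketizing only coarsens comparisons and never discards a method, it preserves the property noted above: a schedule is still found whenever one would have been found originally.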
  ID   Set of methods           Expected quality   Expected finish times
  1    {A2, B1, C1, D1, D2}     7 + 4 = 11         16, 31
  2    {A2, B1, C2, D1, D2}     7 + 5 = 12         16, 34
  3    {A3, B1, C1, D1, D2}     8 + 4 = 12         19, 34
  4    {A3, B1, C2, D1, D2}     8 + 5 = 13         19, 37
  5    {A3, B2, C1, D1, D2}     9 + 4 = 13         22, 37
  6    {A3, B2, C2, D1, D2}     9 + 5 = 14         22, 40

Table 4: Alternatives for the set of task groups.

Another approximation that we have thought of, but not yet investigated carefully, is to schedule without considering hinders relationships. Preliminary investigation suggests that this has the positive effect of reducing the runtime of the scheduler, but the negative effect of having the scheduler occasionally produce schedules that do not meet deadlines (because the scheduler mis-estimates the duration of executable methods). One approach to this problem is to monitor the execution of methods. For a more detailed discussion of monitoring see [Garvey and Lesser, 1993]. We intend to investigate these issues and build schedulers that take their own performance into account when scheduling. This should result in schedulers for design-to-time tasks that are themselves design-to-time in character.

Conclusions and Future Work

Previously we have examined the scheduling of tasks with multiple methods, but few task interdependencies, in both the Distributed Vehicle Monitoring Testbed (DVMT) and in a simulation environment [Garvey and Lesser, 1993]. Currently we are working on developing a more sophisticated scheduler that efficiently schedules more complex task structures that include additional types of relationships between tasks such as facilitates, another relationship that occurs in the DVMT environment. We are also looking at scheduling for distributed agents that are cooperating to solve complex, real-time problems. Finally, we intend to study this scheduler in a sound understanding application [Lesser et al., 1993].

More generally, we would like to investigate the issues raised in the Experimental Results section by moving in the direction of building design-to-time schedulers that can control their own performance. These schedulers should be able to trade off the quality of the schedules they produce with the time it takes to produce them.
This will have the effect of creating schedulers for design-to-time tasks that have a design-to-time character.

References

D'Ambrosio, Bruce 1989. Resource bounded-agents in an uncertain world. In Proceedings of the Workshop on Real-Time Artificial Intelligence Problems, IJCAI-89, Detroit.

Dean, T. and Boddy, M. 1988. An analysis of time-dependent planning. In Proceedings of the Seventh National Conference on Artificial Intelligence, St. Paul, Minnesota. 49-54.

Decker, Keith S. and Lesser, Victor R. 1991. Analyzing a quantitative coordination relationship. COINS Technical Report 91-83, University of Massachusetts. To appear in the journal Group Decision and Negotiation, 1993.

Decker, Keith S.; Lesser, Victor R.; and Whitehair, Robert C. 1990. Extending a blackboard architecture for approximate processing. The Journal of Real-Time Systems 2(1/2):47-79.

Decker, Keith S.; Garvey, Alan J.; Lesser, Victor R.; and Humphrey, Marty A. 1992. An approach to modeling environment and task characteristics for coordination. In Petrie, Charles J. Jr., editor, Enterprise Integration Modeling: Proceedings of the First International Conference. MIT Press.

Garvey, Alan and Lesser, Victor 1993. Design-to-time real-time scheduling. IEEE Transactions on Systems, Man and Cybernetics 23(6). To appear.

Graham, R. L.; Lawler, E. L.; Lenstra, J. K.; and Rinnooy Kan, A. H. G. 1979. Optimization and approximation in deterministic sequencing and scheduling: A survey. In Hammer, P. L.; Johnson, E. L.; and Korte, B. H., editors, Discrete Optimization II. North-Holland Publishing Company.

Lesser, Victor R. and Corkill, Daniel D. 1983. The distributed vehicle monitoring testbed. AI Magazine 4(3):63-109.

Lesser, Victor R.; Pavlin, Jasmina; and Durfee, Edmund 1988. Approximate processing in real-time problem solving. AI Magazine 9(1):49-61.

Lesser, Victor; Nawab, Hamid; Gallastegi, Izaskun; and Klassner, Frank 1993.
IPUS: An architecture for integrated signal processing and signal interpretation in complex environments. In Proceedings of the Eleventh National Conference on Artificial Intelligence.

Liu, J. W. S.; Lin, K. J.; Shih, W. K.; Yu, A. C.; Chung, J. Y.; and Zhao, W. 1991. Algorithms for scheduling imprecise computations. IEEE Computer 24(5):58-68.

Russell, Stuart J. and Zilberstein, Shlomo 1991. Composing real-time systems. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, Sydney, Australia. 212-217.
Visual Understanding

Matthew Brand
Northwestern University, The Institute for the Learning Sciences
1890 Maple Avenue, Evanston IL 60201
brand@ils.nwu.edu

Abstract

An important result of visual understanding is an explanation of a scene's causal structure: how action, usually motion, is originated, constrained, and prevented, and how this determines what will happen in the immediate future. To be useful for a purposeful agent, these explanations must also capture the scene in terms of the functional properties of its objects: their purposes, uses, and affordances for manipulation. Design knowledge describes how the world is organized to suit these functions, and causal knowledge describes how these arrangements work. We have been exploring the hypothesis that vision is an explanatory process in which causal and functional reasoning plays an intimate role in mediating the activity of low-level visual processes. In particular, we have explored two of the consequences of this view for the construction of purposeful vision systems: causal and design knowledge can be used to (1) drive focus of attention, and (2) choose between ambiguous image interpretations. Both principles are at work in SPROCKET, a system which visually explores simple machines, integrating diverse visual clues into an explanation of a machine's design and function.

Visual understanding

A fundamental purpose of vision is to relate a scene to the viewer's beliefs about how the world ought to be, that is, to "make sense" of the scene. Understanding is the preparation we make for acting; hence our beliefs are fundamentally causal in nature: they describe the world's capacity for action and change. "Making sense" of a scene means assessing its potential for action, whether instigated by the agent, or set in motion by forces already present in the world.

*This work was supported in part by the National Science Foundation, under grant number IR19110482.
The Institute for the Learning Sciences was established in 1989 with the support of Andersen Consulting, part of The Arthur Andersen Worldwide Organization. The Institute receives additional support from Ameritech, North West Water Plc, Institute Partners, and from IBM.

588 Brand

Figure 1: A scene explored by SPROCKET, and a schematic representation of its explanation.

In the physical world of everyday experience, action is usually motion. One way scenes make sense is that they organize potential motions in a way that addresses some function. At the very least, scene elements must preserve their integrity: plants, mountains, buildings, and furniture are all structured to address the forces of gravity. Often, scenes are structured for motion: mechanical devices are structured to contain, direct, and transform motion. Ritual spaces, control panels, and computer interfaces are structured to organize users' gestures into meaningful messages. In every case, the way we visually make sense of these scenes is by explaining their configuration relative to a theory of how the world works. Because we are purposeful, we organize that causality into theories of how scenes are designed, that is, how structures are organized to address functions.

We are investigating the role that knowledge of causality and design plays in the perception of scenes. In particular, we are building systems for explanation-mediated vision: vision systems whose output is not feature lists, shape descriptions, or model classifications, but high-level explanations of why the scene makes sense: how it addresses its function, how it might causally unfold in the future, how it could be manipulated, or how it might have come to be. In previous papers, we have shown that the explanation of static scenes in terms of their stability and structural integrity is an important aspect of
In particular, causal analysis of this sort can be used to address such issues as image segmentation [Cooper et ab. 19931 and focus of visual attention [Birnbaum et ad. 19931. Previous systems have worked in the domains of children’s blocks and tinkertoys. In this paper, we extend the scope of the a.pproach to encompass more complex mechanical structures. This paper describes a system currently under development-SPROCKET-which explores and explains the design of simple machines. Our previ- ous systems-BUSTER, which explains the stability of stacked blocks towers, BANDU, a variant which plays a rudimentary game of blocks, and FIDO, which resolves occlusions in tinkertoy structures by causal analysis- are discussed briefly at, the end of the paper. These vision systems have several interesting prop- erties. First, they are vertically integrated, i.e. they encompass both low-level and high-level visual pro- cessing. Second, the output of these systems is in the form of meaningful explanations about the internal dynamics of the scene. These explanations could potentially be used to make predictions, to plan to enter and manipulate the scene, or to reason about how the scene came to be. Third, these systems employ causal theories which, although relatively small, are sufficiently powerful to generate explanations of highly complex structures. Fourth, they share an explicit model of what is interesting in a scene: They use func- tional and causal anomalies in ongoing explanations to genera.te visual queries, thus providing control of focus of attention in low-level visual processing. Fifth, causal and functional explanation forms a framework for the integration of evidence from disparate low-level visual processes. We suggest that these properties are natural consequences of building perception systems around explanation tasks. Explanation-mediated vision The goal of explanation-mediated vision is. 
to assign each part in the scene a function, such that all spatial relations between parts in the scene are consistent with all causal relations between functions in the explanation [Brand et al., 1992; Birnbaum et al., 1993]. In previous papers, we have shown how this constraint can guide the visual perception and interpretation of complex static structures: structures where the possible motions of each part must be arrested. The same basic insight applies to structures in which possible motions are merely constrained. Thus a logical extension for explanation-mediated vision is the visual perception and interpretation of complex dynamical structures.

Visual cognition of machines

The visual understanding of simple machines is an approachable task because (1) causal relations are mediated by proximal spatial relations, and (2) the domain supports a very rich causal/functional semantics. This extends from simple principles of structural integrity to sophisticated knowledge about the design and layout of drivetrains. Design principles for kinematic machines are abundant generators of scene expectations (see [Brand 1992; Brand & Birnbaum 1992]). For example, a function of a gear is to transmit and alter rotational motion. To find a gear in the image is to find evidence of a design strategy which requires other torque-mediating parts in certain adjacencies and orientations, specifically an axle and one or more toothed objects (e.g., gears or racks). Understanding machines is a matter of apprehending their design and function. Designs are strategies for decomposing functions into arrangements of parts; thus the overall strategy of SPROCKET is to reconstruct the designer's intent by applying the principles and practicum of machine design to hypotheses about the configuration of parts.
As they are discovered, each part is assigned a function, such that (1) causal relationships are consistent with spatial relationships, and (2) the functions of individual parts combine to give the machine a sensible overall purpose. To bridge the gap between structure and function, we have developed a surprisingly small qualitative design theory for gearboxes (18 rules, not including spatial reasoning and stability rules). It begins with axioms describing the overall functional mission of a machine, and progresses through design rules to describe the way in which gears, axles, rods, hubs, and frames may be assembled to make workable subassemblies. The rules ground out in the predicates of rigid-body physics and spatial relations: adjacent parts restrict each other's translational motion; varieties of containment limit rotational degrees of freedom; etc. The design theory, adequate for most conventional spur-gear gearboxes, incorporates the following knowledge:

1. Knowledge about explanations:
(a) An explanation describes a structure of gears, rods, frame blocks, and a ground plane.
(b) A structure is explained if all parts are shown to be constrained from falling out and are functionally labelled.
(c) A scene is considered explained if all structures in it are explained and all generated regions of interest have been explored.

2. Knowledge about function:
(a) A moving part must transduce motion between two or more other parts.
(b) A singly-connected moving part may serve as input or output.
(c) A machine has one input and one output.
(d) A fixed part serves to prevent other parts from disengaging (either by force of gravity or by action of the machine).

3. Knowledge about gears:
(a) In order to mesh, two gears must be coplanar and touching at their circumferences.
(b) Meshed gears are connected, and restrict their rotation to opposite directions and speeds inversely proportional to their radii.

Reasoning about Physical Systems 589
(c) A gear may transduce motion from a fixed axle (rod) to a meshing gear.
(d) A gear may transduce motion between two meshing gears.

4. Knowledge about axles and hubs:
(a) If a rod intersects an object but does not pass through, then the object (implicitly) contains a hub, which penetrates it. The rod penetrates the hub, restricting the rotational axis of the object. The object and the rod restrict each other's axial translation.
(b) If an object penetrates another object, it eliminates freedom of non-axial rotation, and freedom of translation perpendicular to its axis.
(c) If a rod goes through an object, then it passes through an implicit hub.
(d) If a non-round rod penetrates a non-round hub and the inscribed circle of the hub isn't larger than the circumscribed circle of the rod, then the hub and the object share the same axis and rate of rotation.
(e) If a hub penetrates a rod and either is round or the circumscribed circle exceeds the inscribed circle, then the rod restricts the hub to its principal axis of rotation.

5. Knowledge about frames and stability:
(a) Frames are stable by virtue of attachment to the table or to other frame pieces.
(b) Objects are stable if all of their motions downward are arrested.

Perception and pathologies

Beyond the design theory, much of the domain knowledge necessary to properly interpret mechanical devices resides in the practicum of mechanical design: common part configurations, design pathologies, and knowledge of typical shapes and textures. This knowledge forms the basis for strategies for scene inspection, hypothesis formation, anomaly detection, and hypothesis revision. Design pathologies are particularly important in our approach, since evidence gathering and hypothesis revision are driven by anomalies in the explanation [Schank 1986; Ram 1989; Kass 1989]. Anomalies are manifest as gaps or inconsistencies in the explanation.
In machine explanations they take the form of inexplicable gears, assemblies that appear to have no function, or assemblies that appear to defeat their own function. These anomalies reflect underlying design pathologies in the system's current model of the machine, if not in the machine itself. We are currently developing a catalogue of gearbox design pathologies.

590 Brand

Each pathology is indexed to a set of repair hypotheses. A repair hypothesis describes previous assumptions that may be suspect, and proposes scene inspections that may obtain conclusive evidence. The example below illustrates at length a repair hypothesis which compensates for a known weakness of one of our visual specialists: gear inspections will sometimes construe two meshed gears as a single large gear. This generally leads to design pathologies which make the perceived structure dysfunctional or physically impossible.

Visual specialists

Figure 2a shows the output of a visual specialist built to look for groupings of short parallel contrast lines. This "tooth specialist" is used to look for mechanically textured surfaces, which are usually gears. The specialist uses a simple syntactic measure for gear candidates: groups of four or more short parallel contrast edges in the image. Like most specialists, it is meant to be deployed in limited regions of interest in the image to answer spatially specific queries such as, "Is there a gear oriented vertically and centered on this axle?" These queries may include initial guesses as to position, orientation, and scale hints. If a specialist succeeds, it will return more precise information on the location of a part, along with a confidence measure. Other specialists scan regions of interest for shaped areas of consistent color, for example rectangles, parallelograms, and ellipses (see [Brand 1993] for details).

Example

For the purposes of this exposition, the "tooth specialist" has been applied to the entire image (figure 2a).
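The specialist's syntactic measure (four or more short, parallel contrast edges lying close together) can be sketched roughly as follows; the edge format (x, y, orientation in degrees, length) and all thresholds are illustrative assumptions, not the actual implementation.

```python
import math

# Rough sketch of the "tooth specialist": report a gear candidate
# wherever four or more short, parallel contrast edges lie close
# together. The edge tuples (x, y, orientation_degrees, length) and
# the threshold values are illustrative assumptions.

def gear_candidates(edges, max_len=10.0, angle_tol=15.0, radius=25.0,
                    min_group=4):
    short = [e for e in edges if e[3] <= max_len]     # keep short edges
    used, groups = set(), []
    for i, (x, y, ang, _l) in enumerate(short):
        if i in used:
            continue
        group = [i]
        for j, (x2, y2, ang2, _l2) in enumerate(short):
            if j == i or j in used:
                continue
            # Orientation difference folded into [0, 90] degrees.
            parallel = abs((ang - ang2 + 90) % 180 - 90) <= angle_tol
            near = math.hypot(x - x2, y - y2) <= radius
            if parallel and near:
                group.append(j)
        if len(group) >= min_group:
            groups.append([short[k] for k in group])
            used.update(group)
    return groups
```

In the real system such a detector would be run only inside regions of interest posed by the ongoing explanation, returning a location and confidence rather than raw groups.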
The specialist correctly identified 4 gears, made two spurious reports (that there is a horizontal gear and that there is a small vertical gear in the lower right corner), fused the two middle gears into one large one, and made a partially correct report of a small gear in the lower right. To simplify this example, we will ignore reports to the right of the middle and concentrate on the diagnosis and correction of the fused gears. Figure 2b illustrates the state of the explanation after a first-pass application of the design rules to the candidate gears found by the "tooth specialist." This preliminary model assumes that:

1. An axle A1 has been surmised to support and connect gear G1 and gear G2 from frame piece F1.
2. Gear G1 must be fixed to axle A1 and apparently lacks a meshing gear.
3. Gear G2 meshes with gear G3 and must be fixed to axle A1 (otherwise it rotates freely on A1 and a gear must be found below it).
4. An axle A2 has been surmised to support gear G3 from frame piece F1.
5. An axle A* has been surmised to support the large gear G*. The axle must either connect to frame piece F1, in which case it runs in front of or behind gears G1 and G2, or to frame piece F2, or to both.
6. Gear G* must be meshed with two gears, above and below, or one gear above or below plus a fixed axle A* which must then connect to some other part of the drivetrain, or gear G* may also be for interfacing.
7. Frame pieces F1, F2, F3, and F4 are all attached.

Figure 2: Three successively more sensible interpretations of the gear specialist's reports. (a) gear candidates; (b) initial hypothesis; (c) first repair; (d) second repair; (e) full model.

These hypotheses are established by explaining how each part satisfies a mechanical function, using the above-described rules about function and configuration. The most productive constraint is that every moving part must have a source and a sink for its motion.
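This source-and-sink constraint, together with the meshing rule stated earlier (meshed gears counter-rotate at speeds inversely proportional to their radii) and the rule that parts fixed to one axle rotate together, supports a simple constraint-propagation sketch; a gear assigned two different speeds signals a locked train. Gear names, radii, and couplings below are illustrative assumptions.

```python
# Propagate rotation through gear couplings. "mesh": opposite
# direction, speed inversely proportional to radius; "axle": parts
# fixed to one axle rotate identically. A conflict (one gear required
# to spin at two different speeds) means the train is locked.
# Radii, names, and couplings are illustrative assumptions.

def propagate(radii, couplings, drive, speed):
    speeds = {drive: float(speed)}        # signed speed encodes direction
    frontier = [drive]
    while frontier:
        g = frontier.pop()
        for kind, a, b in couplings:
            if g not in (a, b):
                continue
            other = b if g == a else a
            if kind == "mesh":
                implied = -speeds[g] * radii[g] / radii[other]
            else:                         # shared fixed axle
                implied = speeds[g]
            if other not in speeds:
                speeds[other] = implied
                frontier.append(other)
            elif abs(speeds[other] - implied) > 1e-9:
                raise ValueError("locked train: %s needs two speeds" % other)
    return speeds

radii = {"G2": 1.0, "G3": 2.0, "G4": 1.5, "G5": 1.0}
couplings = [("mesh", "G2", "G3"), ("axle", "G3", "G4"),
             ("mesh", "G4", "G5"), ("axle", "G5", "G2")]
# With the final axle closing the loop, propagate(...) raises: locked.
```

Dropping the loop-closing coupling yields a consistent assignment of speeds throughout the train.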
Gears, for example, must transduce rotational motion, either between two meshing gears, between one meshing gear and a fixed axle, or between the outside world and a meshing gear or fixed axle (as an interface).¹ Given this constraint, two anomalies stand out immediately: gears G1 and G* are not adequately connected to be useful. Visual inspection determines that G1 is indeed unmeshed, and spatial reasoning determines that it lies outside of the frame. As a matter of practicum, this suffices to support the hypothesis that G1 is indeed for interfacing, and the anomaly is resolved. To resolve the anomaly in hypothesis number 6, the system must find at least one meshing gear above or below G*. However, there is no room between G* and frame piece F4 for such a gear, and there is no room for an axle to support a gear above G*. Thus G* is reclassified as an inexplicable gear anomaly.² Since we know that fusion is one of the typical errors made by the gear-finding specialist, we have written a repair method for this anomaly which attempts to split fused gears. The method looks for nearby axles to support the gears resulting from the split, and if present, uses these axles as guides for the division. In this case, G* is split into gear G4, which is put on axle A2, and gear G5, which is put on axle A1. Conjectured axle A* is discarded at the same time. This state of affairs is illustrated in figure 2c. The new gears mesh with each other, and in order to transmit motion, both are assumed to have fixed axles. This is necessary because there is still no room to place additional meshing gears.

¹ Currently SPROCKET is ignorant of chains, racks, ratchets, and other more sophisticated uses of gears.
² Ostensibly, meshing gears could be behind G*. This is beyond SPROCKET's abilities, as it is limited to machines where the drivetrain is laid out in a small number of visually accessible planes.
It also makes all four gears G2-5 appear functionally viable; each gear will transmit motion to and from another part. Unfortunately, there is an anomaly in this configuration: gears G2-5 are now in a locked cycle, and will not turn. This is detected when propagation of constraints reveals that each gear is required to spin at two different speeds simultaneously. The three elements of the new hypothesis, namely that (1) G* is split into G4 and G5, (2) G4 is fixed on axle A2, and (3) G5 is fixed on axle A1, must be re-examined. Retracting (1) returns to the original anomaly, so this option is deferred. Retracting (2) deprives G4 of its axle and leaves gear G2 dangling without a sink for its motion. Retracting (3) merely leaves G5 without an axle. This is the cheapest alternative, since, as a matter of practicum, it requires only a single alteration and axle additions are generally low-overhead alterations. The required axle (A3) can be surmised coming from frame piece F2. This leaves gears G2-4 properly explained, unlocks the drivetrain, and eliminates all design pathologies except for a yet undiscovered sink for G5, which will lead to the discovery of the rest of the mechanism. The repair produces the model illustrated in figure 2d.

Final analysis

Through this process of exploration, hypothesis, anomaly, evidence procurement, and reformulation, SPROCKET develops a functionally and causally consistent explanation of how the individual elements of the mechanism work together to conduct and modify motion. Our goal is to have SPROCKET analyze the resultant model of the drivetrain to provide a functional assessment of the entire device, e.g.: the machine is for the conversion of high-speed low-torque rotation into high-torque low-speed reversed rotation (figure 2e), or vice versa.

Precursor systems: Seeing stability

The most ubiquitous aspect of our causal world, at least physically, is the force of gravity.
Nearly everything in visual experience is structured to suit the twin constraints of stability and integrity. To understand a static scene such as a bridge or building, one explains how it meets these constraints. We have built a number of vertically integrated end-to-end systems that do this sort of explanation, perusing objects that stand up, asking (and answering) the question, "Why doesn't this fall down?"

Figure 3: Snapshots of BUSTER's analysis of a three-block cantilever. Regions of interest are highlighted; the rest is faded. See the text for explanation.

Understanding blocks structures

BUSTER [Birnbaum et al. 1993] does exactly this sort of visual explanation for structures made out of children's blocks. BUSTER can explain a wide variety of blocks structures, noting the role of each part in the stability of the whole, and identifying functionally significant substructures such as architraves, cantilevers, and balances. In static stability scenes, the internal function of each part is to arrest the possible motions of its neighbors. BUSTER's treatment of cantilevers provides a simple illustration of this constraint. In figure 3a BUSTER has just discovered the large middle block, and noticed a stability anomaly: it should roll to the left and fall off of its supporting block. To resolve this anomaly, BUSTER hypothesizes an additional support under the left end of the block, but finds nothing in that area (3b). A counterweight is then hypothesized above the block and to the right of the roll point (3c), and a search in that area succeeds, resulting in the discovery of a new block (3d). BUSTER thus assesses the structure as a cantilever. Figure 3e shows the attentional trace for the entire scene analysis.

Playing with blocks

One of the immediate uses of causal explanations is in reasoning about actions to take within the scene.
Depending on the goals of the vision system, an explanation might also answer such questions as, "How did this scene come to be?" or "Is there a safe path to navigate through the scene?" With a child's goals, the robot may also want to know, "Where can I add blocks to make the tower more precarious?" or "What is the best way to knock it down?" Given that children play with blocks partly to learn about structural integrity in the world, these are probably fruitful explanation tasks. BANDU, an augmented version of BUSTER, answers the latter two questions in order to play a rudimentary competitive block-stacking game. The aim is to pile a high and precarious tower, making it difficult or impossible for the opponent to add a block without destabilizing the whole structure. Figure 3f shows an addition that BANDU proposes for the cantilever structure.

Figure 4: Stages in the explanation of the tinkertoy scene (a). (b) Boundary contours are extracted from stereo pairs. (c) Potential rod segments are extracted from the contours. (d) Rodlets are merged and extended to conform to knowledge about stable structures.

Interpreting Tinkertoy assemblies

FIDO [Cooper et al. 1993] is a vertically integrated end-to-end vision system that uses knowledge of static stability to segment occluded scenes of three-dimensional link-and-junction objects. Link-and-junction domains are a nice composable abstraction for any complex rigid object. Occlusions tend to be both common and serious in link-and-junction scenes; FIDO uses naive physical knowledge about three-dimensional stability to resolve such problems. Visually, FIDO works with scenes of Tinkertoy assemblies, though its algorithms are generally applicable to a wide variety of shapes. FIDO's input from early processing is a piecemeal description of the visible scene parts. The footprint of each object is determined to be the convex hull of places where the object touches the ground.
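This footprint test (an object is stable when its center of mass, projected to the ground plane, falls inside the convex hull of its ground-contact points) can be sketched in 2D as follows; the hull code is a standard monotone-chain construction, and the geometry helpers and coordinates are illustrative, not FIDO's actual code.

```python
# Sketch of the footprint stability test: compute the convex hull of
# an object's ground-contact points, then check whether the projected
# center of mass lies inside it. 2D ground-plane coordinates assumed.

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns the hull counterclockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def stable(contacts, center_of_mass):
    hull = convex_hull(contacts)
    if len(hull) < 3:
        return False          # degenerate footprint: treat as unstable
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], center_of_mass) >= 0
               for i in range(n))
```

When this test fails, FIDO's rules kick in to hypothesize unseen connections or extensions that would restore a stable subassembly.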
If the object's center of mass is in the footprint, it is stable. If a part's object is not stable in this sense, FIDO invokes a series of rules whereby parts try to connect to each other. In this way, invisible connections are hypothesized, parts are extended through unseen regions to touch the ground plane, and entire parts may be completely hallucinated, in order to generate stable subassemblies. FIDO can successfully process relatively complex scenes like that shown in figure 4a; images b-c show the data this image yielded and the stable structure that was inferred by causal analysis.

Related work

Causal and functional knowledge are enjoying a renaissance in computer vision. Researchers such as [Pentland 1986] and [Terzopoulos & Metaxas 1990] have pointed out that the normal causal processes in the world have consequences for the way objects are shaped. On the functional side, [Stark & Bowyer 1993] have used structural concomitants of function (containment for cups, stability for chairs) to verify category decisions for CAD representations of objects. [Rimey 1992] has developed a system which visually explores table settings, using models of typical arrangements to identify, say, a formal dinner. Although these systems do not have an explicit and generative notion of function (e.g., what cups are for and why they should be cylindrical, or why formal settings have two forks), they do serve as impressive demonstrations of the value of high-level knowledge in visual paradigms. There is an extensive literature on qualitative causal reasoning about kinematics. [Forbus et al. 1987; Faltings 1992] have produced kinematic analyses of ratchets, escapements, and even an entire clock using qualitative reasoning methods.
[Joskowicz & Sacks 1991] have developed an algorithm for the kinematic analysis of machines that breaks the devices down into subassemblies, analyzes their configuration spaces, and combines the results into a description of the mechanism's overall kinematic behavior. These approaches feature quantitative shape analysis and rigid-body modelling algorithms that are quite a bit more extensive and more general than we use. Rather than concentrate on the universality of our kinematic models, we have chosen to focus on their compatibility with perceptual and teleological representations. SPROCKET is limited to machines built of rectangular and cylindrical shapes, with smooth and toothed surfaces, and conjoined by means of attachment, containment, friction, and compression, e.g., Lego machines.

Conclusion

SPROCKET and its sister programs are vertically integrated vision systems that achieve consistent explanations of complex scenes through the application of causal and functional semantics. Using modest generative theories of design and naive physics, these systems purposefully explore scenes of complex structures, gathering evidence to explain the stability, integrity, and functional coherence of what they see. Anomalies in the ongoing explanation drive hypothesis formation, visual exploration, and hypothesis reformulation. Considered as vision systems, they demonstrate the surprising leverage that high-level semantics provide in the control of visual attention, and in the interpretation of noisy and occasionally erroneous information from low-level visual processes. Considered as evidential reasoning systems, they highlight the importance of building content theories that describe not just the possibilities of a domain, but the domain's most likely configurations, the way in which the domain is manifest in perception, and the characteristic errors and confusions of the perceptual system itself.
Acknowledgments

Thanks to Ken Forbus, Dan Halabe, and Pete Prokopowicz for many helpful and insightful comments.

References

[Birnbaum et al. 1993] Lawrence Birnbaum, Matthew Brand, & Paul Cooper. Looking for trouble: Using causal semantics to direct focus of attention. To appear in Proceedings of the Seventh International Conference on Computer Vision, 1993. Berlin.
[Brand & Birnbaum 1992] Matthew Brand & Lawrence Birnbaum. Perception as a matter of design. In Working Notes of the AAAI Spring Symposium on Control of Selective Perception, pages 12-16, 1992.
[Brand et al. 1992] Matthew Brand, Lawrence Birnbaum, & Paul Cooper. Seeing is believing: why vision needs semantics. In Proceedings of the Fourteenth Meeting of the Cognitive Science Society, pages 720-725, 1992.
[Brand 1992] Matthew Brand. An eye for design: Why, where, & how to look for causal structure in visual scenes. In Proceedings of the SPIE Workshop on Intelligent Vision, 1992. Cambridge, MA.
[Brand 1993] Matthew Brand. A short note on region growing by pseudophysical simulation. To appear in Proceedings of Computer Vision and Pattern Recognition, 1993. New York.
[Cooper et al. 1993] Paul Cooper, Lawrence Birnbaum, & Daniel Halabe. Causal reasoning about scenes with occlusion. 1993. To appear.
[Faltings 1992] Boi Faltings. A symbolic approach to qualitative kinematics. Artificial Intelligence, 56(2-3):139-170, 1992.
[Forbus et al. 1987] Ken Forbus, Paul Nielsen, & Boi Faltings. Qualitative kinematics: a framework. In Proceedings of IJCAI-87, 1987.
[Joskowicz & Sacks 1991] L. Joskowicz & E.P. Sacks. Computational kinematics. Artificial Intelligence, 51(1-3):381-416, 1991.
[Kass 1989] Alex Kass. Adaptation-based explanation. In Proceedings of IJCAI-89, pages 141-147, 1989.
[Pentland 1986] A.P. Pentland. Perceptual organization and the representation of natural form. Artificial Intelligence, 28(3):293-332, 1986.
[Ram 1989] Ashwin Ram. Question-driven understanding. Department of Computer Science, Yale University, 1989.
[Rimey 1992] Ray Rimey. Where to look next using a bayes net: The TEA-1 system and future directions. In Working Notes of the AAAI Spring Symposium on Control of Selective Perception, 1992. Stanford, CA.
[Schank 1986] Roger Schank. Explanation Patterns. L. Erlbaum Associates, NJ, 1986.
[Stark & Bowyer 1993] L. Stark & K. Bowyer. Function-based generic recognition for multiple object categories. To appear in CVGIP: Image Understanding, 1993.
[Terzopoulos & Metaxas 1990] Demetri Terzopoulos & Dimitri Metaxas. Dynamic 3D models with local and global deformations: Deformable superquadrics. In Proceedings of the Fourth International Conference on Computer Vision, pages 606-615, 1990.
Design

Thomas Ellman, John Keane, Mark Schwabacher
Department of Computer Science, Hill Center for Mathematical Sciences
Rutgers University, New Brunswick, NJ 08903
{ellman,keane,schwabac}@cs.rutgers.edu

Abstract

Models of physical systems can differ according to computational cost, accuracy and precision, among other things. Depending on the problem solving task at hand, different models will be appropriate. Several investigators have recently developed methods of automatically selecting among multiple models of physical systems. Our research is novel in that we are developing model selection techniques specifically suited to computer-aided design. Our approach is based on the idea that artifact performance models for computer-aided design should be chosen in light of the design decisions they are required to support. We have developed a technique called "Gradient Magnitude Model Selection" (GMMS), which embodies this principle. GMMS operates in the context of a hillclimbing search process. It selects the simplest model that meets the needs of the hillclimbing algorithm in which it operates. We are using the domain of sailing yacht design as a testbed for this research. We have implemented GMMS and used it in hillclimbing search to decide between a computationally expensive potential-flow program and an algebraic approximation to analyze the performance of sailing yachts. Experimental tests show that GMMS makes the design process faster than it would be if the most expensive model were used for all design evaluations. GMMS achieves this performance improvement with little or no sacrifice in the quality of the resulting design.

1. Introduction

Models of a given physical system can differ along several dimensions, including the cost of using the model, the accuracy and precision of the results, the scope of applicability of the model and the data required to execute the model, among others.
More than one model is often needed because different tasks require different tradeoffs among these dimensions. A variety of criteria and techniques have been proposed for selecting among various alternative models of physical systems. For example, some techniques select appropriate models by analyzing the structure of the query the model is intended to answer [Falkenhainer and Forbus, 1991], [Ling and Steinberg, 1992], [Weld and Addanki, 1991]. Another approach selects an appropriate model by reasoning about the simplifying assumptions underlying the available models [Addanki et al., 1991]. Yet another approach reasons about the accuracy of the results the model must produce [Weld, 1991], [Falkenhainer, 1992]. We are developing model selection techniques specifically suited to computer-aided design. Our approach is based on the idea that artifact performance models for computer-aided design should be chosen in light of the design decisions they are required to support. We have developed a technique called "Gradient Magnitude Model Selection" (GMMS), which embodies this principle. GMMS operates in the context of a hillclimbing search process. It selects the computationally cheapest model that meets the needs of the hillclimbing algorithm in which it operates. Intelligent model selection is crucial for the overall performance of computer-aided design systems. The selected models must be accurate enough to ensure that the final artifact design is optimal with respect to some performance criterion, or else satisfactory with respect to specific performance objectives. The selected models must also be as computationally inexpensive as possible. Cheaper models enable a design system to spend less time on evaluation and more time on search. Broader search typically leads in turn to superior designs. These facts will remain true, even with the widespread use of supercomputers.
The combinatorics of most realistic design problems are such that exhaustive search will probably never be feasible. There will always be an advantage in using the cheapest model that supports the necessary design decisions. Model selection is a task that arises often in the day to day work of human design engineers. A human engineer's expertise consists, in part, of the ability to intelligently choose among various exact or approximate models of a physical system. In particular, as an engineer accumulates experience over his career, he learns which models are best suited to each modeling task he typically encounters in his work. This knowledge is one of the things that makes him an expert. Therefore, to the extent that GMMS successfully solves the model selection task, it automates a component of the computer-aided design process that is currently handled by human experts. GMMS may also be seen as a technique for attacking a standard AI problem: using knowledge to guide search. In particular, GMMS uses knowledge in the form of exact and approximate models to guide hillclimbing design optimization. Related knowledge-based techniques for controlling numerical design optimization are described in [Cerbone and Dietterich, 1991] and [Tcheng et al., 1991].

594 Ellman
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Figure 1: The Stars and Stripes '87 Hull

2. Yacht Design: A Testbed Domain

The GMMS technique has been developed and tested in the domain of 12-meter racing yachts, the class of yachts that race in the America's Cup competition. An example of a 12-meter yacht, the Stars and Stripes '87, is shown in Figure 1. This yacht won the America's Cup back from Australia in 1987 [Letcher et al., 1987]. Racing yachts can be designed to meet a variety of objectives. Possible yacht design goals include: Course Time Goals, Rating Goals and Cost Goals.
In our research we have chosen to focus on a course time goal, i.e., minimizing the time it takes for a yacht to traverse a given race course under given wind conditions. Our system evaluates CourseTime using a "Velocity Prediction Program", called "VPP". The organization of VPP is described in Figure 2. VPP takes as input a set of B-Spline surfaces representing the geometry of the yacht hull. Each surface is itself represented as a matrix of "control points" that define its shape. VPP begins by using the "hull processing models" to determine physically meaningful quantities impacting on the performance of the yacht, e.g., wave resistance (Rw), friction resistance (Rf), effective draft (Teff), vertical center of gravity (Vcg) and vertical center of pressure (Zcp), among others. These quantities are then used in the "velocity prediction model" to set up non-linear equations describing the balance of forces and torques on the yacht. The velocity prediction model uses an iterative method to solve these equations and thereby determine the "velocity polar", i.e., a table giving the velocity of the yacht under various wind speeds and directions of heading. Finally, the "race model" uses the velocity polar to determine the total time to traverse the given course, assuming the given wind speed.

3. Hillclimbing for Design Optimization

Hillclimbing search is useful for attacking design optimization when the number of parameters is so large that exhaustive search methods are not practical. Our system uses steepest-descent as our basic hillclimbing method [Press et al., 1986]. The steepest-descent algorithm operates by repeatedly computing the gradient of the evaluation function. (In the yacht domain, this requires computing the partial derivatives of CourseTime with respect to each operator parameter.) The algorithm then takes a step in the direction of the gradient, and evaluates the resulting point.
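The basic steepest-descent loop just described can be sketched as follows; a toy quadratic stands in for the expensive VPP-based CourseTime evaluation, and the parameter names and step size are illustrative assumptions.

```python
# Sketch of a basic steepest-descent loop. A toy quadratic stands in
# for the expensive VPP-based CourseTime evaluation; the step size and
# parameter names are illustrative assumptions.

def gradient(f, x, h=1e-6):
    """Finite-difference partial derivatives of f at point x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def steepest_descent(f, x, step=0.1, max_iters=1000):
    fx = f(x)
    for _ in range(max_iters):
        g = gradient(f, x)
        candidate = [xi - step * gi for xi, gi in zip(x, g)]  # downhill step
        fc = f(candidate)
        if fc < fx:                 # accept only improving points
            x, fx = candidate, fc
        else:                       # no improvement at this step size
            break
    return x, fx

# Stand-in for CourseTime: minimized at hull parameters (3, -1).
course_time = lambda u: (u[0] - 3.0) ** 2 + (u[1] + 1.0) ** 2
best, val = steepest_descent(course_time, [0.0, 0.0])
```

The robustness features the paper goes on to describe (a range of step sizes and a tolerance for evaluation noise) would be layered on top of this basic loop.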
If the new point is bet- ter than the old one, the new point becomes the current Course Time Design Goal: Race Course Velocity(Wind-Speed,Heading) (Velocity Prediction Model) (Hull Processing Models) 77=- Yacht Geometxy /I \ OPl OP2 *a* OPN Shape Modification Operators Figure 2: Velocity Prediction Program one, and the algorithm iterates. The algorithm termi- nates if the gradient is zero, or if a step in the direction of the gradient fails to improve the design. A number of enhancements to the hillclimbing algo- rithm have been adopted to deal with practical difficul- ties arising in the yacht design domain. The program we use to compute Cour seTime a commercial software product. (VPP) is Nevertheless, it sugers from a num- ber of deficiencies that make hillclimbing difficult. For example, it may return a spurious root of the balance of force equations that it solves. It may also exhibit discontinuities, due to numerical round-off error, or due to discretization of the (theoretically) continuous yacht hull surface. These deficiencies can produce “noise” in the evaluation function surface over which the hillclimb- ing algorithm is moving. The algorithm can easily get stuck at a point that appears to be a local optimum, but is nevertheless not locally optimal in terms of the true physics of the yacht design space. To overcome these difficulties, we have endowed the l~illclimbing al- gorithm with some special features. To begin withy we arrange for the algorithm to use a range of different step sizes. The algorithm does not terminate until all of the step sizes fail to improve the design. The algorithm can therefore jump over hills of width less than the maxi- mum step size. In addition, we provide the algorithm with an estimate of the magnitude of the noise in the evaluation function. The algorithm attempts to climb over any hills with height equal to the noise magnitude or lower. The resulting algorithm is more robust than the original algorithm. 4. 
Modeling Choices in Yacht esign A number of modeling choices arise in the context of sail- ing yacht design. These choices are outlined in Figure 3. Reassning abouk Physical Systems 595 e Algebraic Approximations v. Computational Fluid- Dynamics: The effective draft !Z’=rr of a yacht can be es- timated using an algebraic approximation or by using a potential flow code called “PMARC”. e Reuse of Prior Results v. Recomputation of Results: Some physical quantities may not change significantly when a design is modified. For a given physical quantity, its value may be retrieved from a prior candidate design, or its value may be recomputed from scratch. o Linear Approximations v. Non-Linear Models: Velocity polars can be computed as linear functions of resistances and geometric quantities or by directly solving non-linear force and torque balancing equations. Figure 3: Modeling Choices in Yacht Design Probably the most important is the choice of models for estimating the effective draft (Z’,ff) of a yacht. Effec- tive draft is a measure of the amount of drag produced by the keel as a result of the lift it generates. An ac- curate estimate of this quantity is quite important for analyzing the performance of a sailing yacht. Unfor- tunately, the most accurate way to estimate effective draft is to run a highly expensive potential flow code called PMARC. (This code takes approximately one hour when running on a Sun Microsystems Sparcstation 2 Workstation.) Effective draft can also be estimated using an algebraic approximation with the general form outlined below: Teff = Q/D2 - 2A,-,& D = Maximum Keel Draft A = Midship Hull Cross Section Area This angdebraic model is based on an approximation that treats a sailing yacht hull as an infinitely long cylinder and treats the keel of the yacht as an infinitely thin fin protruding from the cylinder. 
The constant K is chosen to fit the algebraic model to data obtained from wave tank tests, or from sample runs using the PMARC potential flow code. Although the algebraic approximation is comparatively easy to use, its results are not as accurate as those produced by the PMARC potential flow code.

Another important modeling choice involves the decision of when to reuse the results of a prior computation. The importance of this type of decision is illustrated by Figure 4. Suppose one is systematically exploring combinations of canoe-bodies and keels of a sailing yacht. In order to evaluate the performance of a yacht, one must evaluate the yacht's wave resistance Rw as well as its effective draft Teff. Wave resistance depends mainly on the canoe-body of the yacht and is not significantly influenced by the keel. When only the keel is modified, wave resistance will not significantly change. Instead of recomputing wave resistance for the new yacht, the system can reuse the prior value. On the other hand, effective draft depends mainly on the keel of the yacht and is not significantly influenced by the canoe-body. When only the canoe-body is modified, effective draft will not significantly change. Instead of recomputing effective draft for the new yacht, the system can reuse the prior value.

596 Ellman

Figure 4: Reuse of Prior Results

In fact, the entire matrix of yachts can be evaluated by computing wave resistance for a single row, and computing effective draft for a single column. By intelligently deciding when to reuse prior evaluation results, one can significantly lower the computational costs of design.

5. Gradient Magnitude Model Selection

Gradient Magnitude Model Selection (GMMS) is a technique used in the Design Associate for selecting evaluation models in the context of a hillclimbing search procedure. The key idea behind this technique is illustrated by Figure 5.
Suppose the system is running a hillclimbing algorithm to minimize CourseTime as estimated by some approximate model. The values of CourseTime returned by this approximate model are indicated by the curved line. Suppose further that the system is considering the hillclimbing step illustrated in the figure. If the error bars shown with solid lines reflect the uncertainty of the approximate model, the system can be sure that the proposed step will diminish the value of CourseTime. On the other hand, using the error bars shown with dotted lines, the system would be uncertain as to whether the true value of CourseTime would improve after taking the proposed hillclimbing step. In the first case, the system could safely use the approximate model to decide whether to take the proposed hillclimbing step, while in the second case, the approximate model would not be safe to use for that decision. Thus GMMS evaluates the suitability of an approximate model by comparing error estimates to the magnitude of the change in the optimization criterion as measured by the approximate model.

[Figure 5: Gradient Magnitude Model Selection — approximate CourseTime plotted against iteration, with solid and dotted error bars.]

GMMS actually operates in a manner that is slightly more general than outlined above. In particular, GMMS is implemented in the form of a function: ModelSelect(p1, p2, K, M1, ..., Mn). The parameters p1 and p2 represent artifacts under consideration during the design process (e.g., two different sailing yachts). The parameters M1, ..., Mn are an ordered list of the available models for evaluating artifact performance, where M1 is the cheapest, and Mn is the most expensive. The ModelSelect routine returns the cheapest model that is sufficient for evaluating the following inequality:

M(p1) - M(p2) >= K

Thus the selected model is sufficient for determining whether the performance of p1 and p2 differ by at least K. In order to evaluate forward progress in steepest-descent hillclimbing, as illustrated in Figure 5, the constant K is chosen to be zero. Our robustness-improving enhancements to steepest-descent hillclimbing occasionally require comparing artifacts using a non-zero tolerance level. In such cases, the ModelSelect routine takes a parameter K not equal to zero. GMMS can, in principle, be applied to any search algorithm that needs only to access the physical models in order to evaluate inequalities of the form shown above. Likewise, GMMS can in principle be applied to any of the modeling choices outlined in Figure 3.

6. Model Fitting and Error Estimation

We have experimented with GMMS using the choice of models for effective draft, Teff, as a test case. Thus GMMS chooses between the algebraic approximate model and the PMARC potential flow model described above. The accuracy of the algebraic approximation (relative to the PMARC model) can be optimized by adjusting the value of the coefficient K. Our system fits the algebraic model and obtains an error estimate using the procedure outlined in Figure 6. The procedure takes as input two sets, A and B, of sample points in the space of candidate yacht designs:

1. Let A be a sparse point set in the design space (u1, ..., un).
   (a) Run PMARC to find Teff(u) for each point in set A.
   (b) Fit coefficients in Alg(A) to minimize average error over set A.
2. Let B be a dense point set in the design space (u1, ..., un).
   (a) Run PMARC to find Teff(u) for each point in set B.
   (b) Fit coefficients of Alg(B) to minimize average error over set B.
3. Estimate the error of Alg(A) using PMARC as the "gold standard":
   - Absolute-Error(Alg(A)) = average error in Teff over all points in B - A.
   - Difference-Error(Alg(A)) = average error in |Teff(u) - Teff(u')| as a function of Δu1, ..., Δun, over all pairs (u, u') of points in B - A.

Figure 6: Error Estimation Technique
The set A is a small, sparsely distributed point set, while set B is a larger, more densely distributed point set. The system constructs two versions of the algebraic model, by choosing values for the fitting coefficient K. Alg(A) is fitted against the "true" values from the sparse point set A. Alg(B) is fitted against the "true" values from the dense point set B. In each case the "true" values are determined using PMARC as the "gold standard". Since Alg(B) is fitted against the denser point set, this model is actually used during hillclimbing search; however, its error is estimated using Alg(A), which was fitted against the sparser point set. In particular, the error in Alg(B) is estimated by comparing Alg(A) to PMARC for all points in the set B - A. Two different error estimates result from this procedure: Absolute-Error is based on the assumption that errors in the algebraic model at nearby points in the design space are independent of each other. Difference-Error takes into account the possibility that errors for nearby points may be correlated.

GMMS operates in a slightly different manner depending on which type of error estimate is available. Consider first how Absolute-Error estimates are used. Given two candidate yacht designs Di and Di+1, the system first evaluates the effective draft Teff of each candidate using the algebraic approximation. The estimate of Absolute-Error is then used to find upper and lower bounds on the Teff of each candidate. Each pair of bounds is then propagated through the rest of the velocity prediction program (Figure 2) to obtain an upper and lower bound on the CourseTime of each candidate. If the CourseTime intervals do not overlap, then the system knows that the step from Di to Di+1 can be taken using the algebraic model. If the intervals do overlap, then the system must use PMARC to obtain a better estimate of effective draft Teff for each candidate.
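The ModelSelect routine and the interval test just described can be sketched together as below. This is an illustrative sketch, not the system's code: each model is represented as an (estimator, Absolute-Error) pair, and for brevity the error is applied directly to the model's output rather than being propagated through the velocity prediction program.

```python
# Sketch of ModelSelect with the Absolute-Error interval test described
# above.  Each model is a pair (evaluate, error): `evaluate(p)` returns
# an estimated CourseTime and `error` is its Absolute-Error estimate.
# The model representation and names are illustrative assumptions.

def model_select(p1, p2, K, models):
    """Return the cheapest model able to decide M(p1) - M(p2) >= K."""
    for evaluate, error in models:          # ordered cheapest -> most expensive
        lo = (evaluate(p1) - error) - (evaluate(p2) + error)
        hi = (evaluate(p1) + error) - (evaluate(p2) - error)
        # Decisive only if K lies outside the interval of possible differences:
        # then the inequality is definitely true (lo >= K) or false (hi < K).
        if lo >= K or hi < K:
            return evaluate, error
    return models[-1]                       # fall back to the exact model
```

With K = 0, a hillclimbing step from Di to Di+1 is taken with the cheap model only when its error interval makes the comparison decisive.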
When Difference-Error estimates are available, GMMS operates differently. After computing the effective draft of each candidate, the system considers two scenarios: (1) all of the Difference-Error occurs in the Teff of Di, and none occurs in the Teff of Di+1; (2) all of the Difference-Error occurs in the Teff of Di+1, and none occurs in the Teff of Di. In each case the system propagates the Difference-Error through the rest of the velocity prediction program, to obtain bounds on the CourseTime of each candidate. The algebraic model is considered acceptable only if the CourseTime intervals are disjoint under both scenarios. Similar methods can be used to apply Gradient Magnitude Model Selection to the other modeling choices shown in Figure 3.

Since GMMS is a heuristic method, it is not necessary that the error estimate be exact for each point in the design space. Overestimating the error will result in too little use of the approximate model, raising the cost of evaluation. Underestimating the error will result in overuse of the approximate model, leading the optimization along (possibly) less direct paths to the solution. Nevertheless, hillclimbing with GMMS should lead to a nearly optimal solution even when the approximate model is over-used. Recall that hillclimbing only terminates at local minimum points of the search space. Before stopping, the hillclimber usually encounters a region that is sufficiently flat to require use of the exact model in order to distinguish the performance of candidate designs. GMMS thus forces the hillclimber to switch to the expensive, but exact model in order to verify the presence of a local optimum.

The performance of GMMS can be enhanced by dynamically re-fitting the approximate model during the optimization process.
This method proceeds from the observation that the optimal value of the fitting coefficient K in the Teff formula will generally depend on the region of the design space in which the formula is applied. Suppose the algebraic model is periodically recalibrated during the search process, by adjusting this coefficient. The resulting approximate model will be more accurate in evaluation of designs near the latest recalibration point. It can therefore be used more often and provide greater savings over PMARC than is possible with a fixed approximation.

We have implemented a "recalibrating" version of GMMS, "Recal-GMMS", to test out this strategy. Recal-GMMS operates as follows: Whenever ModelSelect indicates that the current algebraic model cannot be used, the system runs PMARC on the current design. The computed value of Teff is then used to recalibrate the coefficient of the algebraic model. Two different algebraic models are fit to the current region of the design space. In one model, the fitting coefficient K is treated as a constant. In the other model, K is expressed as a linear function of the parameters of the design space. This linear function is fit using d + 1 PMARC evaluations for a d dimensional design parameter space. The required PMARC evaluations are obtained by selecting the d + 1 most recent PMARC evaluations that yield a non-degenerate fitting problem. (Degeneracy is detected using a standard numerical singular value decomposition code.) In case no such non-degenerate set can be found, the system generates additional design parameter points that yield a non-degenerate set, and then evaluates them in order to carry out the fitting process. Of the two recalibrated algebraic models, the linear model is actually used to compute effective draft Teff. The error of this linear model is estimated to be the absolute value of the difference between the linear model and the constant model.
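The linear recalibration step can be sketched as a small least-squares problem with a degeneracy check. This is a sketch under assumptions: `numpy.linalg.matrix_rank` (itself SVD-based) stands in for the paper's singular value decomposition test, and all names are illustrative.

```python
import numpy as np

# Sketch of the Recal-GMMS recalibration described above: K is fit as a
# linear function K(u) = c0 + c1*u1 + ... + cd*ud of the d design
# parameters, using d+1 (or more) recent PMARC evaluations.  Degeneracy
# of the fitting system is detected via an SVD-based rank check.

def fit_linear_K(points, K_values):
    """points: list of d-dimensional design points; K_values: the K
    implied by PMARC at each point.  Returns coefficients
    (c0, c1, ..., cd), or None if the point set is degenerate
    (does not span the design parameter space)."""
    X = np.column_stack([np.ones(len(points)), np.asarray(points)])
    if np.linalg.matrix_rank(X) < X.shape[1]:
        return None                       # degenerate: need more points
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(K_values), rcond=None)
    return coeffs
```

When `None` is returned, the system would generate additional design points until a non-degenerate set is obtained, as described above.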
7. Experimental Results

We have tested our approach to model selection in a series of experiments comparing various model selection strategies. In particular, we investigated the five model selection strategies listed in Figure 7. The five strategies were each run on four separate design optimization problems. The problems differed in both the initial yacht prototypes, and in the yacht design goals. Two shape modification operators were used for the optimizations, i.e., Scale-Keel, which changes the depth of the keel, and Invert-Keel, which alters the ratio between the lengths of the top and bottom edges of the keel. The results of these runs are summarized by the table in Figure 8. For each model selection strategy, the table gives a measure of the quality of the final design, and a measure of the computational cost needed to find the final design, each averaged over all four test problems. The quality of a design D is measured as the difference between the CourseTime of D and the CourseTime of the "optimal" design, i.e. the best design found by any of the five strategies. The computational cost of finding a design is measured by counting the number of PMARC evaluations needed to carry out the design optimization, since PMARC is by far the most expensive part of the design process.

- Alg-Only: Only the algebraic model is used for evaluation of effective draft.
- PMARC-Only: Only the PMARC potential flow code is used for evaluation of effective draft.
- GMMS: Gradient magnitude model selection is used to select between the algebraic and PMARC models for effective draft. Errors are estimated using the Difference-Error formula. Errors are propagated through VPP using the difference error propagation method.
- Recal-GMMS: GMMS, with the addition that the algebraic model is recalibrated according to PMARC data collected during the optimization. Errors are estimated by comparing locally fit constant and linear models.
Errors are propagated through VPP using the absolute error propagation method.
- CTO (Cheap-to-Optimal): Only the algebraic model is used until an initial optimum is reached. Then only the PMARC model is used until a final optimum is reached.

Figure 7: Model Selection Strategies

The results in Figure 8 illustrate a tradeoff between computational cost and the quality of the optimization. In terms of computational expense, measured by the number of PMARC evaluations, the strategies can be ranked in the order shown, with "Alg-Only" being the cheapest and "PMARC-Only" being the most expensive. "Alg-Only" comes out being the cheapest because it never invokes the PMARC potential flow code. "PMARC-Only" is the most expensive because it always invokes the PMARC potential flow code. Notice that each of the three non-trivial model selection strategies ("Recal-GMMS", "GMMS" and "CTO") is cheaper than the "PMARC-Only" strategy. Each avoids some of the PMARC runs that occur under the "PMARC-Only" strategy. In fact, the computationally cheapest of the three, "Recal-GMMS", incurs only about 59% of the computational expense of the "PMARC-Only" strategy. Notice that the "Alg-Only" strategy yields yacht designs of lower quality than those produced using the other strategies. Lower quality designs are obtained because the algebraic model causes the hillclimber to terminate at a point that is not a local optimum in terms of the more accurate "PMARC" model. In contrast to this, all of the other four strategies achieve the same quality levels. Higher quality designs are obtained because these strategies cause the hillclimber to terminate at points that really are locally optimal in terms of the PMARC model.

[Figure 8: Comparison of Model Selection Strategies — a table of Compute Cost (PMARC Evals) and Design Quality (Lag in Seconds) for the strategies Alg-Only, Recal-GMMS, CTO, GMMS, and PMARC-Only.]

The "Alg-Only" and "Recal-GMMS" strategies are Pareto optimal.
Neither of these two strategies is dominated in quality and computation cost by any of the other three strategies. In contrast, none of the other three strategies ("PMARC-Only", "GMMS" and "CTO") is Pareto optimal. Each is dominated by the "Recal-GMMS" strategy, since the "Recal-GMMS" strategy achieves the same quality as each at a lower computational cost. In order to choose between "Alg-Only" and "Recal-GMMS" one must supply some criterion for balancing the quality of a design against the amount of computation cost expended during the design process. In the yacht design domain, the choice is fairly easy. America's Cup yacht races are often won and lost by a few seconds. Considerations of quality therefore tend to outweigh considerations of computation cost. In this application domain, our results indicate that "Recal-GMMS" is the best model selection strategy.

8. Ongoing Research

Ongoing research is aimed at applying our GMMS techniques to other model selection choices that arise during hillclimbing search in the yacht design domain, as described in Figure 3. We are especially interested in using GMMS to decide when to reuse prior evaluation results, and when to use linear approximation models. These two types of approximation are very general and can be applied to a wide variety of design problems. If GMMS can be shown useful for these decisions, it will be established as a widely applicable model selection technique. We also plan to test our GMMS techniques in domains other than yacht design. Longer term research is aimed at investigating model selection problems that arise in parts of the design process other than hillclimbing search. Models of physical systems can be used to support computer-aided design in a variety of ways other than direct evaluation of candidate designs. For example, physical models can be used in sensitivity analyses that enable engineers to decide which design parameters to include in the search space.
Each design task that depends on a physical model will lead to a distinct model selection problem. We are therefore attempting to classify the modeling tasks that arise in computer-aided design and to develop model selection methods for each of them.

9. Acknowledgments

This research was supported by the Defense Advanced Research Projects Agency (DARPA) and the National Aeronautics and Space Administration (NASA). (DARPA-funded NASA grant NAG 2-645.) It has benefited from discussions with Saul Amarel, Martin Fritts, Andrew Gelsey, Haym Hirsh, John Letcher, Chun Liew, Ringo Ling, Gerry Richter, Nils Salvesen, Louis Steinberg, Chris Tong, Tim Weinrich, and Ke-Thia Yao.
Projective Visualization

Marc Goodman*
Cognitive Systems, Inc.
234 Church Street
New Haven, CT 06510

Abstract

This paper describes Projective Visualization, which uses previous observation of a process or activity to project the results of an agent's actions into the future. Actions which seem likely to succeed are selected and applied. Actions which seem likely to fail are rejected, and other actions can be generated and evaluated. This paper presents a description of the architecture for Projective Visualization, preliminary results on learning to act from observations of a reactive system, and a comparison of two types of Case Projection (how situations are projected into the future).

Introduction

An agent must balance a variety of competing goals. An action which satisfies one goal may cause other goals to become unsatisfiable. For example, while standing on a street corner I may have two competing goals: 1) cross the street, and 2) avoid getting hit by a car. If I step out into the street towards the other side, an oncoming car might hit me. On the other hand, if I stand on the corner, I'm in little risk of getting hit, but I won't be getting to the other side of the street. One method of selecting an action is to project the situation into the future. If I project to a state where both goals are satisfied (I've crossed the street without getting hit), then I know the action is appropriate. On the other hand, if one or more goals are defeated (I get hit by a car, or I fail to cross the street within a certain time period), then I know the action is unlikely to succeed. [Goodman, 1989] uses a simple version of projection where a battlefield commander can evaluate the effectiveness of battle plans by projecting the outcome of the battle. The focus in the battle planning example, as in this paper, is on how observation of previous experience can be used to project and to guide action.

*Thanks to David Waltz and Richard Alterman for useful discussion.
This work was supported in part by DARPA under contract no. DAAH01-92-C-R376.

Computer Science Department
Brandeis University
Waltham, MA 02254

Two approaches to representing experience are: 1) to track perceptual observations of an environment (a concrete approach), and 2) to extract semantically or causally relevant features from the experience and represent those (an abstract approach). For example, troop movements could be represented either as a series of observations of the positions and orientations of individual soldiers (the concrete approach) or as summary information about the number of soldiers, the type of maneuver (i.e. frontal, enveloping, etc.), and their overall distance to the front (an abstract approach). Previous CBR work has focused on applying abstract representations of experience to planning [Hammond, 1986; Alterman, 1988; Alterman et al., 1991].

A concrete approach to representation is preferable because of:

Knowledge Engineering Difficulty. An abstract approach requires that an expert interpret each situation to extract the relevant semantic and causal features. A concrete approach stores experience directly from sensor readings or a perceptual system without human intervention. As the number of cases in a system rises, the bottleneck created by an abstract representation becomes severe.

Information Loss. An abstract representation discards features which are deemed irrelevant by a knowledge engineer or domain expert. Unfortunately, what the knowledge engineer or domain expert decide is irrelevant may turn out to be quite relevant. A concrete representation tracks all available information, lessening the possibility of information loss.

Psychological Evidence. [Gentner, 1989] suggests that the most common type of reminding is based on surface-level features rather than abstract features.
A representation which facilitates surface-level remindings at the expense of additional required work for abstract remindings is, therefore, plausible. A concrete representation satisfies these requirements since surface-level features are immediately available for retrieval, but abstract features must be extracted from the representation for abstract reminding.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Figure 1: A picture from The Bilestoad. Should the agent (light grey) continue fighting or disengage from its opponent?

[Waltz, 1989] points out that since the perception-cognition-action loop typically takes 100ms or less, not much time is available for extracting abstract features from the perceptual world. A concrete representation avoids this problem by dealing directly with perceptual features.

The method of projection presented in this paper uses a concrete representation of a process or activity to create concrete projections into the future. Since these projections have the same structure and content as perceptual observations of the world, we consider this a form of imagery and call the technique Projective Visualization by analogy to a form of spatial, visual imagery in humans.

This paper will present results which indicate that Projective Visualization can be used to learn to act through observation. This paper also presents some experiments on evaluating different techniques for Case Projection, the underlying engine through which imagery is achieved.

A new version of the video game entitled The Bilestoad, which was published by Datamost Software in 1983, serves as the test-bed for the projective visualizer (a picture of a game in progress appears in Figure 1). The game is a simulation of combat between two gladiators armed with battle axes and shields on islands which contain strategic devices.
A reactive system was designed and built as an opponent for human players, and serves as a standard of performance for the projective system.

At each frame of the game, approximately 200 pieces of perceptual information about the agent and its environment are collected and processed. This information includes the Cartesian coordinates of each joint in the bodies of the agents, absolute angles of those joints, state information about the joints and the agent overall, the location of devices in the simulation, motor control of the agents, etc.

The agent must satisfy conflicting goals to succeed. Consider a scenario where the agent wishes to engage a fleeing opponent. One course of action is to chase the opponent. Another course of action is to navigate to a transportation device, and then chase the opponent. Projection can be used in this situation to determine that approaching the opponent directly will fail to cause engagement, since the opponent is moving away from the agent at the same rate of speed as the agent is approaching. The agent visualizes itself chasing the opponent until the opponent reaches its goal. Navigating to the transportation device will result in engagement, and the agent visualizes itself catching its opponent. Action must be initiated quickly, since the longer the agent spends deciding what to do, the more of a lead the opponent will have.

The agent must also successfully anticipate the actions and objectives of its opponent. Consider a situation where the agent is pursuing its opponent and the opponent begins to alter its bearing. If the agent reacts to this by approaching its opponent directly, the agent may reduce the distance to its opponent but allow the opponent to circle around the agent.
On the other hand, if the agent projects the effects of the opponent's bearing change, visualizes the opponent circling around, and maintains a position between the opponent and the goal device, the agent will prevent its opponent's escape. Hence, reacting to the opponent's actions is not enough; the agent must anticipate as well.

In the Battle Planner, abstract representations of historical battles were used to project the outcome of a new battle. The system induced a discrimination tree where features of the historical cases (such as ratio of attacking and defending troops, air superiority, amount of artillery, etc.) which were good predictors of the winner of the battle served as indices for case retrieval. There are several techniques for inducing such a discrimination tree, including ID3 [Quinlan, 1986], CART [Breiman et al., 1984], and Automatic Interaction Detection [Hartigan, 1975]. Though each algorithm has slightly different characteristics with respect to convergence, noise sensitivity, and sensitivity to representation, they all share the characteristic of asymptotically approaching an accuracy limit as the number of examples increases, and they can be automatically reapplied as more experience is gathered without additional knowledge engineering.

Projecting a single result (e.g. the outcome of a battle) from a situation description in one step is inappropriate for the following reasons:

Case-Based Reasoning 55

Sensitivity to Initial Conditions. Consider Ben Franklin's old adage, "For want of a nail, the shoe was lost; For want of the shoe, the horse was lost..." which demonstrates that minor differences between situations can compound as the situation evolves. Capturing this sensitivity to initial conditions in a single discrimination tree requires a fairly exhaustive set of examples. The world, being a complex place, makes having an exhaustive set of examples impractical.

Interactions between Features.
In battle planning, having more tanks than your opponent is generally beneficial. However, if you are fighting in marshy, heavily wooded, or urban terrain, your tanks will get bogged down, and having more tanks may actually hurt you. To learn this interaction with a single discrimination tree requires that you see cases with many tanks in urban terrain, few tanks in urban terrain, many tanks in flat terrain, few tanks in flat terrain, etc. Once again, the system requires a fairly exhaustive set of examples.

A Projective Visualizer avoids these problems by creating a projected situation which is temporally (and causally) near to the current situation. Instead of simply jumping to the outcome of the battle from the initial conditions, the system simulates how the situation evolves over time. Each step of projection lessens the causal distance between a situation and its conclusion. In The Bilestoad, the agent might have 2 hits of damage to the shoulder of its arm which holds the axe, while its opponent only has 1 hit of damage. Even if we know the relative locations and orientations of the agent and the opponent, it may be hard to say which will win the fight. It's easier to say that since the opponent's axe is in contact with the agent's shoulder, on the next time step the agent's shoulder will have 3 hits of damage. Meanwhile, the opponent continues to have only 1 hit of damage, since the agent's axe is not in contact with its shoulder. Continuing to project, we visualize the agent with 4 hits, 5 hits, 6 hits, 7 hits, until the agent's axe arm is severed at the shoulder. At this point, it's much easier to say that the agent will lose its battle. We have reduced the causal distance between a situation and its conclusion by projecting the situation forward in time.

Projective Visualization uses Case Projection as an underlying engine for imagery. Case Projection is the process of creating a projected situation from a current situation.
A Case Projector is built from a library of experiences by inducing a separate discrimination tree for each feature of a situation. The decisions in the tree correspond to features of the current situation which are good predictors of the next value of the feature we wish to predict. Given k concrete features which represent a situation, we induce k discrimination trees. Projection consists of traversing these k discrimination trees with a case representing the current situation, making a prediction on the next value of each feature, and storing these predictions into a new case. This projected case then serves as a basis for further projection. Hence, a current situation can be driven forward arbitrarily far, at a cost of compounding errors from earlier retrieval. This process continues until the system is able to make an evaluation of the projected situation (is the projected result good or bad), or until some pre-defined limit on projection is reached.

Cases include observations of the world as well as "operators." Each case in the system represents a fine-grained approximation of a continuously evolving situation, in the same way as a motion picture is a fine-grained approximation of the recorded experience. In any particular case, agents are in the process of carrying out actions. For example, in a situation representing a quarterback throwing a football, in one case the quarterback might be moving his left leg backwards, moving his right hand forward, and squeezing with the fingers on his right hand. "Operator" refers to this pattern of control signals. There is, therefore, a one-to-one mapping between operators and cases. Note that a linguistic term like "throw" or "dodge" actually corresponds to a sequence of cases and their corresponding operators.
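The per-feature projection loop can be sketched as follows. In this sketch each induced discrimination tree is abstracted as a callable predictor mapping the current situation to one feature's next value; the feature names in the test below (shoulder damage accruing while the opponent's axe is in contact, echoing the Bilestoad example above) are illustrative.

```python
# Sketch of Case Projection as described above: one learned predictor
# per feature (the paper induces a discrimination tree per feature).
# Here a situation is a dict of feature -> value, and each predictor is
# any callable from the whole current situation to that feature's next
# value.  Names are illustrative.

def project(situation, predictors, steps=1):
    """Drive a situation `steps` ticks into the future by predicting
    every feature's next value from the current (whole) situation."""
    for _ in range(steps):
        situation = {feature: predict(situation)
                     for feature, predict in predictors.items()}
    return situation
```

Because every predictor reads the old situation and writes into a fresh dict, all k features are updated simultaneously, and the projected case can itself be projected again.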
The indices used for Projective Visualization in The Bilestoad and in comparisons of techniques for Case Projection are generated inductively from the case library with an extended version of the Automatic Interaction Detection algorithm [Hartigan, 1975]. Induction is guided and enhanced in a variety of ways, including methods for enriching the case representation and methods for preselecting good discriminators. Each of these enhancements reduces the amount of experience needed to reach a given level of accuracy, but does not change the property that accuracy will asymptotically approach a fixed limit. Therefore, the same ultimate effects as reported in this work can be reproduced using off-the-shelf versions of CART, ID3, AID, or other learning algorithms, even without these enhancements, but a larger base of examples may be needed.

Dynamics of Projective Visualization

Projective Visualization layers on top of Case Projection to provide a framework for controlling action. The basic idea of Projective Visualization is to pick a likely operator to perform and run the situation forward into the future until one of three things happens: 1) the system will run into an obviously bad situation, in which case the operator should be avoided and another operator tried, 2) the system will run into an obviously good situation, in which case the operator should be applied and real action taken in the world, 3) the system projects the case farther and farther into the future, without conclusive evidence one way or another. If the system is unable to reach a conclusion by the time it's projected a prespecified amount of time, it can make a guess as to whether the operator is good or bad by retrieving the closest case and chasing pointers until the case was resolved either positively or negatively.

56 Goodman

Case Retrieval can be used to select likely operators.
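The three-outcome control loop just described can be written down directly. The sketch below is illustrative only: `evaluate` returns +1 (obviously good), -1 (obviously bad), or 0 (inconclusive), `guess` stands in for retrieving the closest resolved case when the projection window is exhausted, and the toy combat domain and all names are invented.

```python
# Sketch of the Projective Visualization control loop.

def choose_operator(case, operators, step, evaluate, guess, window):
    for op in operators:
        projected = step(case, op)
        for _ in range(window):
            verdict = evaluate(projected)
            if verdict < 0:
                break                 # obviously bad: try the next operator
            if verdict > 0:
                return op             # obviously good: act in the world
            projected = step(projected, None)
        else:
            if guess(projected) > 0:  # window exhausted: fall back on a guess
                return op
    return None

# Toy domain: "attack" starts a chain of damage to the opponent.
def step(c, op):
    nxt = dict(c)
    if op == "attack" or c.get("attacking"):
        nxt["damage_opp"] = c.get("damage_opp", 0) + 1
        nxt["attacking"] = True
    return nxt

def evaluate(c):
    return 1 if c.get("damage_opp", 0) >= 3 else 0

chosen = choose_operator({"damage_opp": 0}, ["retreat", "attack"],
                         step, evaluate, lambda c: -1, window=5)
```

Here "retreat" projects inconclusively and is rejected by the fallback guess, while "attack" reaches an obviously good projected situation within the window.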
Given all the cases which lead to the successful satisfaction of a goal, we build a discrimination tree where the discriminations in the tree are features of the current situation which are good indicators of the type of operation. Suggesting an operation to apply becomes a matter of traversing this discrimination tree and collecting operations that were applied in the retrieved cases (we refer to this process as Action Generation). Different discrimination trees built from cases where different goals were satisfied can be used to suggest different operators. For example, in The Bilestoad, one action generator may consist of all the actions which led directly to using a transportation device, another action generator may consist of actions which led to killing the opponent, a third action generator may consist of actions which prevented the agent from taking damage, etc. Evaluating whether a situation is good or bad can be treated as a case retrieval task. For example, in battle planning, a projected situation where 90% of your soldiers have been killed might retrieve a different battle where 90% of the soldiers were killed, and the mission failed. Since the mission failed in this retrieved case, we conclude that the mission will fail in the current (projected) situation as well, a bad thing. In a process control domain, where continuous feed roasters are being used to roast coffee beans, evaluation might be based on Neuhaus color readings on the ground beans as well as by moisture content of the beans. If the projected coffee beans retrieved cases where the color and moisture content differed from ideals specified in the roasting recipe, then the evaluation would be negative. In The Bilestoad, retrieving cases where the opponent is dead indicates a positive outcome, and retrieving cases where the agent is dead indicates a negative outcome.
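Evaluation-as-retrieval can be sketched with a flat nearest-neighbor lookup in place of the paper's discrimination trees; the battle-planning feature and outcome labels below are invented for illustration.

```python
# Evaluation treated as case retrieval: return the outcome eventually
# observed for the most similar stored case.

def evaluate_by_retrieval(situation, case_library):
    """case_library: list of (feature_dict, outcome) pairs."""
    def distance(a, b):
        return sum((a[k] - b[k]) ** 2 for k in a)
    features, outcome = min(case_library,
                            key=lambda case: distance(situation, case[0]))
    return outcome

library = [
    ({"pct_soldiers_lost": 0.9}, "mission-failed"),
    ({"pct_soldiers_lost": 0.1}, "mission-succeeded"),
]
verdict = evaluate_by_retrieval({"pct_soldiers_lost": 0.85}, library)
```

A projected situation with 85% losses retrieves the 90%-losses case, so the projection is judged bad.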
The "branching factor," or number of different paths the system pursues in projection, can depend on both the number of different operators which are suggested by a set of retrieved cases and the number of possible projected values for key features of the case. [Goodman, 1989] indicates that ignoring predictions where only a few examples are retrieved yields a higher overall accuracy. This suggests that a measurement of confidence can be generated from the number and distribution of retrieved cases, and this confidence measure can be used to prune the search tree so that only the most likely paths are explored. When the system is in a time-critical situation, response time can be improved by reducing the window of projection (how far the situation is projected into the future) as well as by considering fewer alternative actions (reducing the branching factor). Such a system behaves more and more as a purely situated or reactive system would [Suchman, 1987; Agre and Chapman, 1987; Chapman, 1990]. On the other hand, when more time resources are available, the system can project the effects of its actions farther and farther, and consider a greater number of alternatives, causing the system to exhibit more and more planful behavior. This type of system can learn in several ways. First, through observation of an agent performing a task, it can learn to suggest new candidate actions. It can also refine its ability to suggest actions based on improvements in indexing as more experience is gathered. Next, by carrying along projections and matching them against the world as the situation plays out, it can detect where projection has broken down. Projection can then be improved by storing these experiences into the appropriate projectors and generating new indices to explain these failures. Finally, it can improve its ability to evaluate situations by noting whether its evaluations match outcomes in the real world.
Through observation, it can both add new evaluations as well as refine its ability to evaluate by improving evaluation indices.

Evaluation of Projective Visualization

Approximately 28,000 frames of two reactive agents competing in hand-to-hand combat were captured as raw data from The Bilestoad. This represents approximately 1 hour and 18 minutes of continuous play. A case projector was built using this data which predicts whether the agent will cause damage to its opponent in the next frame. Cases representing situations where the agent caused damage to its opponent in the next frame were selected (approximately 3,300 such cases existed in the 28,000 frames), and used to build an action generator. The action generator consists of eight sets of indices, one for each control signal for the agent. These control signals include moving the axe clockwise, moving the axe counterclockwise, moving the shield clockwise or counterclockwise, turning the torso clockwise or counterclockwise, and walking forward or backward. Given these controls, there are 3^4 = 81 significant patterns of control signals which the agent can receive. Cases representing situations where the agent successfully avoided damage (as indicated by a frame where damage was taken followed by a frame where no damage was taken) were selected (amounting to 1,100 cases out of the original 28,000 frames) and used to build an additional action generator. Projection was used to mediate between these two action generators, such that if the agent could cause damage to its opponent in the next frame it would; otherwise it would use the avoidance action generator. The mean difference in score between the projective agent and the reactive agent favored the projective agent by 18.01 points, with a standard error of 12.93 for 1000 games.
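The significance claim drawn from these numbers can be checked with one line of arithmetic: under a normal approximation, the 95% confidence half-width is 1.96 times the standard error.

```python
# Checking the reported test: mean score difference 18.01, standard
# error 12.93 over 1000 games (figures from the text above).
mean_diff, se = 18.01, 12.93
half_width = 1.96 * se              # normal-approximation 95% half-width
significant = abs(mean_diff) > half_width
```

This reproduces the 25.34 interval quoted next, and since 18.01 < 25.34 the difference is not statistically significant.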
Since the mean difference of 18.01 is within the 95% confidence interval of 25.34, we cannot reject the null hypothesis that there is no significant difference between the reactive and projective agents. We have, therefore, successfully learned to act as well as the reactive agent through observation of the reactive agent, a result which Chapman was unable to achieve in [Chapman, 1990].

Case-Based Reasoning 57

Evaluation of Case Projection

If we wish to project a situation k steps into the future (which we refer to as a Projection Window of k steps), two contrasting techniques exist for performing this projection. The first technique, called Projective Simulation, projects the situation forward 1 step, then projects the projection 1 step, and so on, until the situation has been projected k steps. The second technique, termed One-Step Projection, performs one retrieval for each feature of the situation, to predict what the value of that feature will be k steps in the future. For example, if we wish to project a situation 5 steps into the future we can perform 5 steps of projection of 1 time unit each (Projective Simulation), or perform 1 step of projection of 5 time units (One-Step Projection). One of the exhibits at the Boston Museum of Science is a big enclosed table with a circular hyperbolic slope in it, leading to a hole. Steel balls are released from the edge of the table and go around and around the slope until their orbits decay enough that they fall into the hole. One of the things that makes this exhibit fun to watch is that when the ball is heading toward the hole it accelerates, and if it misses the hole it "sling-shots" around, leading to much visual surprise and merriment. After watching, however, one quickly learns to predict how the steel ball will move around the track, a form of case projection.
One-Step Projection and Projective Simulation were compared in the domain of prediction of the position of an object in Newtonian orbit around a gravity source (which is what the exhibit represents). For a full treatment of the physics involved, see [Halliday and Resnick, 1978]. For the following tests, the Gravitational Constant, the mass of the gravity source, the mass of the object in orbit and the initial velocity of the object were chosen to yield a reasonable level of granularity in the orbit. The initial position of the object was varied randomly within a fixed range. For the values chosen, the average number of time steps for an object to complete an orbit around the gravity source was 44.75. A training set of 20 orbits was created by randomly selecting the initial position and deriving subsequent positions from a quantitative model, yielding 895 cases. The set of features on each case was limited to the current position of the object in two-dimensional Cartesian coordinates, the change in position from the previous observation for each coordinate, and the previous change in position for each coordinate. No reference to the position of the gravity source, the masses of the object and source, or the laws of physics, were given to the system for use in projection. Figure 2 shows the percentage of orbits where One-Step Projection was more accurate than Projective Simulation, for Projection Windows of varying size, given a training set of 20 orbits.

Figure 2: Percentage of Trials where One-Step Projection is More Accurate than Projective Simulation, Size of Training Set = 20 Orbits.

When the Projection Window is equal to 1 time step, Projective Simulation and One-Step Projection are functionally equivalent.
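Training cases of this shape can be generated from a quantitative model along the lines below. The constants (GM, time step, initial conditions) are illustrative, not the paper's values, which were chosen to give about 45 steps per orbit; as in the paper, each case records only position, change in position, and previous change, with no reference to the gravity source or the physics.

```python
# Generate orbit cases from a quantitative model (semi-implicit Euler).

def make_orbit_cases(x, y, vx, vy, steps, GM=1.0, dt=0.1):
    cases, (pdx, pdy) = [], (0.0, 0.0)
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        vx -= GM * x / r3 * dt        # acceleration toward the source
        vy -= GM * y / r3 * dt
        dx, dy = vx * dt, vy * dt
        x, y = x + dx, y + dy
        cases.append({"x": x, "y": y, "dx": dx, "dy": dy,
                      "pdx": pdx, "pdy": pdy})
        pdx, pdy = dx, dy
    return cases

# A roughly circular unit orbit: |r| = 1, |v| = 1, GM = 1.
cases = make_orbit_cases(1.0, 0.0, 0.0, 1.0, steps=100)
```

Sampling several random initial positions and concatenating the resulting case lists would reproduce the structure (though not the exact contents) of the 895-case training set.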
For Projection Windows between 2 and 20, 300 initial starting positions were randomly selected and the mean error per orbit was determined and compared. The resulting percentages are accurate to within 4.8%, given a 95% confidence interval based on a one-tailed distribution. For Projection Windows between 21 and 33, 100 initial starting positions were used, yielding a 95% confidence interval of at most 8.3%, based on a one-tailed distribution. Projective Simulation was more accurate than One-Step Projection for small Projection Windows. Specifically, for Projection Windows between 2 and 11, the null hypothesis that the two methods are equivalent could be rejected for 8 of the 10 points with 95% confidence. On the other hand, One-Step Projection was more accurate than Projective Simulation for large Projection Windows. For Projection Windows between 22 and 33, the null hypothesis could be rejected for all points with 95% confidence. Figure 3 shows the mean of the mean error per orbit for Projective Simulation and One-Step Projection with training sets of size 5 Orbits and 20 Orbits, as the size of the Projection Window is varied. These data points are based on a random sample of 200 initial positions. The error rate is roughly linear in the size of the Projection Window for small Projection Windows. The slope of these linear components of the errors is slightly smaller for Projective Simulation, hence Projective Simulation is more accurate for small Projection Windows. For larger Projection Windows, the change in error rate for One-Step Projection begins to decrease, and One-Step Projection becomes more accurate than Projective Simulation.

Figure 3: Error vs. Projection Window for Training Sets of Size Five Orbits (TS=5) and Twenty Orbits (TS=20) for One-Step Projection (OSP) and Projective Simulation (PS).

Discussion

The central difference between One-Step Projection and Projective Simulation explains these results. At each step in Projective Simulation, the system is free to select new previous situations for subsequent projection, where One-Step Projection is forced to follow one set of previous situations to their conclusion. This benefits Projective Simulation in the short term, since Projective Simulation can account for cascading differences that would render any one previous situation obsolete. In other words, as minor initial differences between the current situation and a retrieved situation begin to compound, Projective Simulation automatically chooses better predictors of future values. One-Step Projection does not have this flexibility, and must follow one set of precedents no matter how much the current situation begins to deviate. Hence, the slope of the error rate for Projective Simulation will be less than the slope of the error rate for One-Step Projection. On the other hand, One-Step Projection offers a benefit which is lacking in Projective Simulation. One-Step Projection guarantees that the projected situation will be internally consistent. Specifically, the space of possible projections is bounded by the case base, and since One-Step Projection relies on a single set of precedents, the projection can never "break out" of these bounds. For domains where there are inherent limitations on values of features, this imposes a maximum error rate for any particular orbit. For example, in the orbit domain, given a range of initial positions and a fixed initial velocity, the minimum and maximum Cartesian coordinates of an object are bounded in a fixed space. As the size of the Projection Window grows, the mean error rate will approach the error resulting from a random selection of points from the fixed space for projection, which is constant for sufficiently large samples.
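The mechanical difference between the two schemes can be shown on a toy one-dimensional series (everything below is invented for illustration): Projective Simulation chains one-step retrievals, re-retrieving a precedent at every step, while One-Step Projection makes a single k-steps-ahead retrieval, so its predictions can never leave the span of the stored outcomes.

```python
# Contrasting the two projection schemes with nearest-neighbor retrieval.

def nearest(library, x):
    """library: list of (state, later_state) pairs."""
    return min(library, key=lambda pair: abs(pair[0] - x))[1]

def projective_simulation(step_lib, x, k):
    for _ in range(k):
        x = nearest(step_lib, x)   # free to pick a new precedent each step
    return x

def one_step_projection(k_lib, x):
    return nearest(k_lib, x)       # one retrieval, bounded by stored outcomes

series = [i * 0.5 for i in range(20)]      # a "training orbit"
step_lib = list(zip(series, series[1:]))   # (state, state 1 step later)
k = 4
k_lib = list(zip(series, series[k:]))      # (state, state k steps later)
```

Inside the training range the two agree (projecting 3.0 forward 4 steps gives 5.0 either way); they diverge when small retrieval errors start to compound or when the query drifts outside the stored precedents.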
Hence, the error rate for One-Step Projection will asymptotically approach a constant limit. The constant will, of course, depend on the particular domain. The flexibility inherent in Projective Simulation allows it to break out of these bounds, and the error rate retains its linear characteristic.

Conclusions

We have demonstrated that systems can learn to project the effects of their actions through observation of processes and activities. We have demonstrated two techniques for projection, One-Step Projection and Projective Simulation, and have indicated that Projective Simulation is appropriate for projecting effects in the short-term and that One-Step Projection is appropriate for projecting effects in the long-term. Finally, we have demonstrated that a system which controls an agent acting in an environment can benefit from projection, even when projection is not 100% accurate.

References

Philip E. Agre and David Chapman. Pengi: An implementation of a theory of activity. In Proceedings of the Sixth National Conference on Artificial Intelligence, pages 268-272, 1987.
Richard Alterman, Roland Zito-Wolf, and Tamitha Carpenter. Interaction, comprehension, and instruction usage. Journal of the Learning Sciences, 1(4), 1991.
Richard Alterman. Adaptive planning. Cognitive Science, 12:393-421, 1988.
L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth, 1984.
David Chapman. Vision, Instruction, and Action. PhD thesis, Massachusetts Institute of Technology, Cambridge, Mass., April 1990.
Dedre Gentner. Finding the Needle: Accessing and Reasoning from Prior Cases. In Proceedings of the Second DARPA Workshop on Case-Based Reasoning, pages 137-143, 1989.
Marc Goodman. CBR in Battle Planning. In Proceedings of the Second DARPA Workshop on Case-Based Reasoning, pages 312-326, 1989.
David Halliday and Robert Resnick. Physics, Parts 1 and 2. John Wiley and Sons, 1978.
Kristian J. Hammond.
CHEF: A model of case-based planning. In Proceedings of the Fifth National Conference on Artificial Intelligence, pages 267-271, 1986.
J. Hartigan. Clustering Algorithms. John Wiley and Sons, 1975.
J. Ross Quinlan. Induction of decision trees. Machine Learning, 1:81-106, 1986.
Lucy A. Suchman. Plans and Situated Actions. Cambridge University Press, Cambridge, 1987.
David Waltz. Is Indexing Used for Retrieval? In Proceedings of the Second DARPA Workshop on Case-Based Reasoning, pages 41-44, 1989.
Ideal

Brian Falkenhainer
Xerox Corporate Research & Technology
Modeling & Simulation Environment Technologies
801-27C, 1350 Jefferson Road, Henrietta, NY 14623

Abstract

Accuracy plays a central role in developing models of continuous physical systems, both in the context of developing a new model to fit observation or approximating an existing model to make analysis faster. The need for simple, yet sufficiently accurate, models pervades engineering analysis, design, and diagnosis tasks. This paper focuses on two issues related to this topic. First, it examines the process by which idealized models are derived. Second, it examines the problem of determining when an idealized model will be sufficiently accurate for a given task in a way that is simple and doesn't overwhelm the benefits of having a simple model. It describes IDEAL, a system which generates idealized versions of a given model and specifies each idealized model's credibility domain. This allows valid future use of the model without resorting to more expensive measures such as search or empirical confirmation. The technique is illustrated on an implemented example.

Introduction

Idealizations enable construction of comprehensible and tractable models of physical phenomena by ignoring insignificant influences on behavior. Idealized models pervade engineering textbooks. Examples include frictionless motion, rigid bodies, as well as entire disciplines like the mechanics of materials. Because idealizations introduce approximation errors, they are not credible representations of behavior in all circumstances. In better textbooks, their use is typically restricted by a vague set of conditions and tacit experience.
Consider the following from the standard reference for stress/strain equations [18, page 93], which is more precise than most texts:

7.1 Straight Beams (Common Case) Elastically Stressed. The formulas of this article are based on the following assumptions: (1) The beam is of homogeneous material that has the same modulus of elasticity in tension and compression. (2) The beam is straight or nearly so; if it is slightly curved, the curvature is in the plane of bending and the radius of curvature is at least 10 times the depth. (3) The cross section is uniform. (4) The beam has at least one longitudinal plane of symmetry. (5) All loads and reactions are perpendicular to the axis of the beam and lie in the same plane, which is a longitudinal plane of symmetry. (6) The beam is long in proportion to its depth, the span/depth ratio being 8 or more for metal beams of compact section, 15 or more for beams with relatively thin webs, and 24 or more for rectangular timber beams. (7) The beam is not disproportionately wide. (8) The maximum stress does not exceed the proportional limit. ...The limitations stated here with respect to straightness and proportions of the beam correspond to a maximum error in calculated results of about 5%.

600 Falkenhainer

Our goal in this research is to provide answers to the following questions:

1. How are these conditions derived? What is the process by which a model is converted to a simpler, idealized version? What are the principles behind the form and content of the standard textbook rules of thumb?

2. What do these conditions mean? For what "nearly straight", "disproportionately wide" beams will error begin to exceed 5%? How can the conditions be relaxed if only 20% accuracy is needed? What if 1% accuracy is needed?

3. What is the best method by which an automated modeling system should determine when an approximate model is credible? The answer to this may not necessarily be the same as the answer to question 1.
This paper examines these issues for algebraic and ordinary differential equation models of up to second order. It describes IDEAL, a system which generates idealized versions of a given model and provides measurable information about the model's error. The key enabler is recognizing the centrality of context in the idealization process - the idealizations that are generated and the limits that are placed on their use reflect the (intended) user's typical cases. We begin by describing how idealized models are derived. A later section examines how approximation error should be managed in an automated modeling setting, while another describes the principles behind the kinds of conditions stated above and a technique, called credibility domain synthesis, for generating them. It closes with a discussion of how the same functionality might be achieved for more complex systems. In particular, our ultimate goal is to be able to reproduce the above passage, which requires the analysis of 3-dimensional, 4th-order partial differential equations.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Idealizations

A model M contains a set of (algebraic and ordinary differential) equations E describing the behavior of some physical system in terms of variables V = {t, y_1, ..., y_k, p_1, ..., p_n}, where y_i represents a dependent variable, and p_i represents a constant model parameter (i.e., p_i is a function of elements external to the model).¹ At most one varying independent variable t is allowed (which typically denotes time). Each model also has an associated set of logical preconditions, as described in [6]. We make the simplification that all analyses occur in a single operating region (i.e., the status of the preconditions does not change and thus can be ignored for the purposes of this paper).
Given that M* is an idealization of M, the error function e of M*'s approximate behavior v* is measured with respect to M's predicted behavior v and couched in an appropriate scalar norm e = ||v* - v||, the standard measure of goodness of an approximation [3]. The results are independent of the particular norm. In the examples we will use the maximum (L∞) norm for the relative error

e_i(v_i) = max over t ∈ [0, t_f] of |v_i*(t) - v_i(t)| / |v_i(t)|

where e = [e_1, ..., e_n] and e_i is the error norm for variable v_i. At times, the instantaneous error value as a function of t will be used in place of the absolute value norm. A behavior is a vector v = [v_1, ..., v_n] of assignments to V as a function of t over the interval t ∈ [0, t_f]. A set of boundary conditions B specify values for t_f, the model parameters, and y_i(0) for all y_i, such that B and M uniquely specify a behavior v = BEHAVIOR(M, B). A model is credible with respect to error tolerance τ = [τ_1, ..., τ_n] if e_i(v_i) ≤ τ_i for every variable v_i for which a tolerance has been specified. Not all variables need have an associated tolerance; the error of these variables is unmonitored and unconstrained. The credibility domain of an idealized model is the range of model parameters and t for which the simplified model is a credible representation of the original model [2].

Idealization process

An idealized model M* arises from the detection of order of magnitude relationships, such as those described in [10; 7], which enable the elimination of negligible terms. This can produce significant simplifications by reducing simultaneities and nonlinearities, and enabling closed-form, analytic solutions. In this paper, we consider the use of the two most common idealization rules:

DOMINANCE-REDUCTION: A + B ≈ A given |A| >> |B|
ISO-REDUCTION: x = x_0 given dx/dt ≈ 0

Dominance-reduction ignores negligible influences on a quantity and is the basis for idealizations like frictionless motion. When applied to derivative pairs, it offers one approach to time-scale approximation:

TIME-SCALE-REDUCTION: dy/dt = 0 given |dy/dt| << |dx/dt|

Iso-reduction assumes constancy and is the basis for idealizations like quasi-statics, homogeneous materials, and orthogonal geometries. It is often the key enabler to obtaining analytic solutions. In general, order of magnitude reasoning requires a carefully designed set of inference rules (e.g., approximate equality is not transitive [10]). For the class of ODEs currently being studied, algebraic operations across a set of equations are unnecessary and these issues do not arise. Thus, IDEAL currently uses only the two idealization rules without the associated machinery to propagate their consequences.²

A model may be idealized in at least two settings. In the on-demand setting, idealizations are made during the course of using a model to analyze a specific system. In the compilation setting, a model is idealized into a set of simpler models a priori by assuming the different potential relationships that enable use of the idealization assumptions. Because much of the on-demand setting is a special case of the compilation setting, we will focus solely on the latter. The key then is to pre-identify the set of enabling relationships that might arise. One straw-man approach would be to systematically explore all algebraic combinations and assume hypothetical situations in which the idealizations' conditions would hold. For example, for every pattern A + B, we could make one reduction based on A >> B and another based on A << B (when consistent with the given equations). To make useful idealizations, we must have information about what relationships are possible or likely in practice. This is critical both in guiding the idealization process and in characterizing each idealized model's credibility domain (as discussed in the section on credibility domain synthesis).
The more specific the information about what is likely, the more the idealized models may be tuned for one's specific future needs. Information about the population of analytic tasks is represented as a set of distributions over parameter values across problems and their variability within each problem (see Table 1).

Table 1: Some distributions characterizing an analyst's typical problem set.

Distribution of parameter values:
  Simple Ranges - p ∈ [0.1..1.5]
  Independent - uniform, normal, truncated
  Joint - (e.g., A may never be small when B is large)
Distribution of function types:
  Constant - dy/dx = 0
  Nearly Constant - dy/dx ≈ 0
  Dependent - y = y(x), |dy/dx| > 0

Distributions on parameter values indicate which inequalities are likely or possible. They provide information about the population of tasks as a whole. Parameters specified by simple ranges are treated as having uniform distributions over their range. Distributions on function types provide information about the per-task behavior of parameters. For example, material densities may have wide variance across different analyses, but are normally constant throughout the material during any single analysis. These distributions are currently given as input; in the context of a CAD environment, they could easily be obtained by saving information about each analytic session.

¹This is also known as an exogenous variable in the economics and AI literature. Throughout, we will try to use standard engineering terminology and indicate synonyms.
²The current implementation is in Mathematica, which is a hindrance to implementing the kinds of order of magnitude systems described in [10; 7].

Reasoning about Physical Systems 601

IDEAL is guided by two factors - the form of the original equations and the problems to which they are typically applied. Given a model and associated distributions, it proceeds as follows:³

1. Syntactically identify candidates for reduction.
Based on the two reduction rules, a candidate is either a sum or a derivative.

2. For sums, cluster the addenda into all possible dominant/negligible equivalence classes based on the given distributions. Each parameter's possible range is truncated at three standard deviations (otherwise it could be infinite and lead to spurious order of magnitude relationships).

3. For each possible reduction rule application, derive an idealized model under the assumption of the rule's applicability condition.

4. Repeat for consistent combinations of idealization assumptions (cf. assumption combination in an ATMS [4]).

Example (sliding motion). Figure 1 illustrates the problem of determining the velocity and position of a block as it slides down an incline. The given model considers the influences of gravity, sliding friction, and air resistance. Due to the nonlinear response to air resistance, the model has no analytic solution.⁴

Model: dv/dt = a_g + a_f + a_d, dx/dt = v
Gravity: a_g = g sin θ
Sliding Friction: a_f = -μ_k g cos θ sgn(v)
Air Resistance: a_d = -C_d ρ_air L² v² sgn(v) / 2M

Distributions:
  t - truncated normal (t ∈ [0..∞])
  θ - uniform [30°..60°]
  μ_k - truncated, skewed (μ_k ∈ [0.2..0.55])
  dv/dt - dependent
  dx/dt - dependent

Figure 1: A block slides down an inclined plane. Need we model sliding friction, air drag, or both? In the table, pdf = probability density function.

The methods apply to higher dimensions, but to enable 3-dimensional visualization and simplify the presentation, the initial velocity, v_0 = 0, and the air resistance coefficient (C_d ρ_air L²) will be treated as constants. IDEAL begins by identifying patterns for which the idealization rules may apply.

³This is a more complete version of the algorithm described in [13]. For example, the earlier algorithm did not consider the ISO-REDUCTION rule.
⁴Well, at least not one that Mathematica can find.
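Step 2's clustering of addenda into dominant/negligible classes can be approximated by a simple feasibility check: an ordering |A| >> |B| is worth assuming only if |A/B| ≥ 10 is achievable somewhere in the parameter ranges. The sketch below and its magnitude ranges are illustrative, not IDEAL's actual code.

```python
# Which dominance orderings are plausible, given bounds on each term's
# magnitude?

def possible_dominances(ranges, factor=10.0):
    """ranges: dict term -> (lo, hi) bounds on |term|. A may dominate B
    when max|A| / min|B| >= factor."""
    out = []
    for a, (_, a_hi) in ranges.items():
        for b, (b_lo, _) in ranges.items():
            if a != b and b_lo > 0 and a_hi / b_lo >= factor:
                out.append((a, b))
    return out

# Rough magnitudes (m/s^2) consistent with the sliding-block example:
# gravity-plus-friction can reach ~8 while drag stays small, but not
# the reverse, so only one ordering survives.
orderings = possible_dominances({"a_g+a_f": (2.0, 8.0), "a_d": (0.05, 0.5)})
```

Each surviving ordering seeds one idealized model in step 3.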
In this case, there is the single sum a_g + a_f + a_d. The assumption of |A| >> |B| is limited by requiring that at least |A/B| ≥ 10 must be possible. Using this constraint and the given distributions, only one partial ordering is possible: |a_g + a_f| >> |a_d|. This enables, via a dominance-reduction, the derivation of a simple linear approximation M_gf:

M_gf: dv/dt = A_gf = a_g + a_f, dx/dt = v, assuming A_gf >> a_d

from which we can derive v(t) = A_gf t, x(t) = A_gf t²/2 + x_0.

Had the distributions covered wider ranges for angle and time, and allowed air resistance to vary, a space of possible models, each with its own assumptions and credibility domain, would be derived. For example, high viscosity, long duration, and low friction would make the friction term insignificant with respect to the drag terms, resulting in another idealized model:

dv/dt = g sin θ - C_d ρ_air L² v² sgn(v) / 2M, assuming A_gd >> a_f

Error management for automated modeling

The idealized model M_gf derived in the example offers a considerable computational savings over its more detailed counterpart. Unfortunately, it is also quite non-operational as stated. What does A_gf >> a_d mean? When should one expect 5%, 10%, or 50% error from the model? What we would like is a mechanism for bounding the model's error that is (1) easy to compute at problem solving time - it should require much less time than the time savings gained by making the idealization, and (2) reliable - failure, and subsequent search for a more accurate model, should be the exception rather than the rule. One appealing approach lay in the kinds of thresholds illustrated in the introduction, but augmented with some clarifying quantitative information. However, it is not as simple as deriving e as a function of A_gf/a_d or sampling different values for A_gf/a_d and computing the corresponding error.
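The gap between M_gf and the full model can be measured numerically. The parameter values below are illustrative choices inside the distributions of Figure 1, and the integration scheme is a plain Euler step, not anything from the paper.

```python
# Compare the full sliding-block model against the idealized M_gf
# (gravity and friction only). The block starts at rest and slides
# downhill, so v >= 0 throughout and sgn(v) terms stay positive.
import math

def slide(theta, mu_k, drag_coeff, t_f, with_drag, dt=0.01, g=9.8, M=1.0):
    v = x = t = 0.0
    while t < t_f:
        a = g * math.sin(theta) - mu_k * g * math.cos(theta)
        if with_drag:
            a -= drag_coeff * v * v / (2 * M)   # air resistance term
        v += a * dt
        x += v * dt
        t += dt
    return x

theta, mu_k, drag_coeff, t_f = math.radians(45), 0.3, 0.005, 4.0
x_full = slide(theta, mu_k, drag_coeff, t_f, with_drag=True)
x_ideal = slide(theta, mu_k, drag_coeff, t_f, with_drag=False)
rel_err = (x_ideal - x_full) / x_full   # M_gf overshoots by ignoring drag
```

With a small drag coefficient the relative position error stays in the few-percent range over this interval, which is exactly the kind of quantity the error-management machinery of the next section estimates a priori.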
For a specified error threshold of 5%, the meaning of $A_{gf} \gg a_d$ is strongly influenced by the model parameters and the independent variable's interval. Figure 2 illustrates the error in position as a function of $A_{gf} = a_g + a_f$ and time $t$.

The problem is further compounded in the context of differential equations. Not only does $a_d$ change with time, the influence of error accumulation as time progresses can dominate that of the simple $A_{gf} \gg a_d$ relationship. Second, much of the requisite information cannot be obtained analytically (e.g., $e(A_{gf}, t)$). For each value of $A_{gf}$ and $t_i$, we must numerically integrate out to $t_f$. Thus, any mechanism for a priori bounding the model's error presupposes a solution to a difficult, N-dimensional error analysis problem.

The key lies in the following observation: only an approximate view of the error's behavior is needed - unlike the original approximation, this "meta-approximation" need not be very accurate. For example, a 5% error estimate that is potentially off by 20% means that the error may actually be as much as 6%. This enables the use of the following simple procedure:

1. Sample the error's behavior over the specified distributions to obtain a set of datapoints.

2. Derive an approximate equation for each error $e_x$ as a function of the independent variable and model parameters by fitting a polynomial to the N-dimensional surface of datapoints.

If the error is moderately smooth, this will provide a very reliable estimate of the model's error.5

[Figure 2: Percentage error in position $x$ produced by $M_{gf}$ over the ranges of $A_{gf} = a_g + a_f$ and time $t$.]

For $M_{gf}$'s error in position $x$ (shown in Figure 2), the resulting approximating polynomial is

$e_x = 3.52524 \times 10^{-5} + 2.20497 \times 10^{-6} A_{gf} - 3.18142 \times 10^{-5} t - 4.02174 \times 10^{-7} A_{gf}^2 - 1.793972 \times 10^{-5} A_{gf}^3 - 1.91978 \times 10^{-5} A_{gf} t + 1.20102 \times 10^{-6} A_{gf}^2 t + 3.68432 \times 10^{-6} t^2 - 9.40654 \times 10^{-5} A_{gf} t^2 + 2.83115 \times 10^{-8} A_{gf}^2 t^2 - 1.1435 \times 10^{-7} t^3$

At this point, the specified requirements (easy to compute and reliable) have both been satisfied, without generating explicit thresholds! Although not as comprehensible, from an automated modeling perspective this approximate error equation is preferable because it provides two additional highly desirable features: (3) a continuous estimate of error that is better able to respond to differing accuracy requirements than a simple binary threshold, and (4) coverage of the entire problem distribution space by avoiding the rectangular discretization imposed by thresholds on individual dimensions.

Credibility domain synthesis

The question still remains - where do conditions like "the beam is not disproportionately wide" come from and what do they mean? They are clearly useful in providing intuitive, qualitative indications of a model's credibility domain. Further, for more complex systems, increased dimensionality may render the derivation of an explicit error function infeasible. The basic goal is to identify bounds on the independent variable and model parameters that specify a region within the model's credibility domain for a given error tolerance. This is the credibility domain synthesis problem: find $t_f$ and $p_n^-, p_n^+$ for every $p_n \in P$ such that

$0 \le t \le t_f \;\wedge\; [\forall (p_i \in P),\; p_i^- \le p_i \le p_i^+] \;\rightarrow\; e \le \tau$

5As one reviewer correctly noted, global polynomial approximations are sensitive to poles in the function being modeled. For the general case, a more reliable method is needed, such as local interpolation or regression on a more phenomena-specific basis function.
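Step 2 of the procedure - fitting a polynomial surface to the sampled error datapoints - can be sketched as an ordinary least-squares fit over monomial terms in $(A_{gf}, t)$. The error function below is a synthetic stand-in for the numerically sampled surface:

```python
import itertools
import numpy as np

def fit_poly2d(A, T, E, deg=3):
    """Least-squares fit of e(A, t) using all monomials A^i t^j with i+j <= deg."""
    terms = [(i, j) for i, j in itertools.product(range(deg + 1), repeat=2)
             if i + j <= deg]
    X = np.column_stack([A**i * T**j for i, j in terms])   # design matrix
    coeffs, *_ = np.linalg.lstsq(X, E, rcond=None)
    def model(a, t):
        return sum(c * a**i * t**j for c, (i, j) in zip(coeffs, terms))
    return model

# Synthetic, smooth stand-in for the sampled error surface.
rng = np.random.default_rng(0)
A = rng.uniform(4.0, 8.0, 200)
T = rng.uniform(0.0, 10.0, 200)
E = 0.01 * T**2 / (1.0 + A)
model = fit_poly2d(A, T, E)
```

Because only a "meta-approximation" is needed, a low-degree fit like this is adequate as long as the error surface is moderately smooth.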
There is nothing in the IDEAL algorithm which limits use of these methods.

Reasoning about Physical Systems 603

[Figure 3: The error function imposes conservation laws on the shape of the credibility domain.]

Unfortunately, these dimensions are interdependent. Increasing the allowable interval for $p_i$ decreases the corresponding interval for $p_j$. Figure 3 illustrates this for $M_{gf}$ subject to a 5% error threshold. A credibility domain that maximizes the latitude for $t$ also minimizes the latitude for $A_{gf}$. What criteria should be used to determine the shape of the hyperrectangle? Intuitively, the shape should be the one that maximizes the idealized model's expected future utility. We currently define future utility as its prior probability. Other influences on utility, when available, can be easily added to this definition. These include the cost of obtaining a value for $p_i$ and the likely measurement error of $p_i$.

Given distributions on parameter values and derivatives, the credibility domain synthesis problem can be precisely formulated as the following optimization problem:

minimize $F(t_f, p_1^-, p_1^+, \ldots, p_n^-, p_n^+) = 1 - P(0 \le t \le t_f,\; p_1^- \le p_1 \le p_1^+, \ldots, p_n^- \le p_n \le p_n^+)$
subject to $e \le \tau$

For the case of $M_{gf}$ and the distributions given in Figure 1, the optimal credibility domain is

$t < 8.35 \;\wedge\; \mu_k > 0.2 \;\wedge\; \theta < 60^\circ$

which has a prior probability of 0.975. This formulation has several beautiful properties:

1. The credibility domain is circumscribed by clear and easily computed conditions.

2. It maximizes the idealized model's future utility according to the user's typical needs.

3. It offers a precise explanation of the principles underlying the standard textbook rules of thumb.

In particular, it explains some very interesting aspects of the passage quoted in the introduction. For example, a careful examination of the theory of elasticity [14], from which the passage's corresponding formulas were derived, shows that several influences on the error are omitted. Why?
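For a single independent variable with a monotone error, the optimization reduces to a threshold search over sampled points: push $t_f$ out until the error constraint is violated, then read off the prior probability of the resulting region. A sketch under those assumptions (the duration distribution and error function are synthetic, not the paper's):

```python
import numpy as np

def synthesize_t_threshold(t_samples, err_fn, tau):
    """Largest t_f with err(t) <= tau for all sampled t < t_f, plus the
    prior probability P(t < t_f) under the sampled distribution."""
    ts = np.sort(t_samples)
    errs = err_fn(ts)
    bad = np.nonzero(errs > tau)[0]
    t_f = ts[-1] + 1.0 if len(bad) == 0 else ts[bad[0]]   # first violation
    prob = float(np.mean(t_samples < t_f))
    return float(t_f), prob

rng = np.random.default_rng(1)
t_samples = np.abs(rng.normal(4.5, 2.0, 5000))   # truncated-normal-like duration
err_fn = lambda ts: 0.0008 * ts**2               # synthetic monotone error
t_f, prob = synthesize_t_threshold(t_samples, err_fn, tau=0.05)
```

With several interdependent parameters this becomes a genuine constrained optimization over the hyperrectangle's faces, as in the formulation above.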
How is that (likely to be) sound? Consider the conditions synthesized for $M_{gf}$. The limits for $\mu_k$ and $\theta$ cover their entire distribution; they are irrelevant with respect to the anticipated analytic tasks and may be omitted.6 Only the independent variable's threshold imposes a real limit with respect to its distribution in practice.

Like the textbook conditions, credibility domain synthesis makes one assumption about the error's behavior - it must not exceed $\tau$ inside the bounding region. This is guaranteed if it is unimodal and concave between thresholds, or, if convex, strictly decreasing from the threshold. $M_{gf}$ satisfies the former condition. However, the current implementation lacks the ability to construct such proofs, beyond simply checking the derived datapoints.

Related Work

Our stance, starting with [13], has been that the traditional AI paradigm of search is both unnecessary and inappropriate for automated modeling because experienced engineers rarely search, typically selecting the appropriate model first. The key research question is then to identify the tacit knowledge such an engineer possesses. [5] explores the use of experiential knowledge at the level of individual cases. By observing over the course of time a model's credibility in different parts of its parameter space, credibility extrapolation can predict the model's credibility as it is applied to new problems. This paper adopts a generalization stance - computationally analyze the error's behavior a priori and then summarize it with an easy-to-compute mechanism for evaluating model credibility.

This is in contrast to much of the work on automated management of approximations. In the graph of models approach [1], the task is to find the model whose predictions are sufficiently close to a given observation.
Search begins with the simplest model, moves to a new model when prediction fails to match observation, and is guided by rules stating each approximation's qualitative effect on the model's predicted behavior. Weld's domain-independent formulation [15] uses the same basic architecture. Weld's derivation and use of bounding abstractions [16] has the potential to reduce this search significantly and shows great promise. Like our work, it attempts to determine when an approximation produces sound inferences. One exception to the search paradigm is Nayak [9], who performs a post-analysis validation for a system of algebraic equations using a mix of the accurate and approximate models. While promising, its soundness proof currently rests on overly-optimistic assumptions about the error's propagation through the system.

6This occurred because $\mu_k$ and $\theta$ were bound by fixed intervals, while $t$ had the $\infty$ tail of a normal distribution, which offers little probabilistic gain beyond 2-3 standard deviations.

Credibility domain synthesis most closely resembles methods for tolerance synthesis (e.g., [8]), which also typically use an optimization formulation. There, the objective function maximizes the allowable design tolerances subject to the design performance constraints.

Intriguing questions

Credibility domain synthesis suggests a model of the principles behind the form and content of the standard textbook rules of thumb. Their abstract, qualitative conditions, while seemingly vague, provide useful, general guidelines by identifying the important landmarks. Their exact values may then be ascertained with respect to the individual's personal typical problem solving context. This "typical" set of problems can be characterized by distributions on a model's parameters, which in turn can be used to automatically provide simplified models that are specialized to particular needs.
The least satisfying element is the rather brute-force way in which the error function is obtained. While it only takes a few seconds on the described examples, they are relatively simple examples (several permutations of the sliding block example described here and the more complex fluid flow / heat exchanger example described in [13]). The approach will likely be intractable for higher-dimensional systems over wider distributions, particularly the 2-dimensional PDE beam deflection problem. How else might the requisite information be acquired? What is needed to reduce sampling is a more qualitative picture of the error's behavior. This suggests a number of possible future directions. One approach would be to analyze the phase space of the system to identify critical points and obtain a qualitative picture of its asymptotic behavior, which can in turn suggest where to measure [11; 12; 17]. Alternatively, one could use qualitative envisioning techniques to map out the error's behavior. The uncertainty with that approach lies in the possibility of excessive ambiguity. For some systems, traditional functional approximation techniques might be used to represent the error's behavior.

Acknowledgments

Discussions with Colin Williams, both on the techniques and on programming Mathematica, were very valuable.

References

[1] Addanki, S., Cremonini, R., and Penberthy, J. S. Graphs of models. Artificial Intelligence, 51(1-3):145-177, October 1991.
[2] Brayton, R. K. and Spence, R. Sensitivity and Optimization. Elsevier, Amsterdam, 1980.
[3] Dahlquist, G., Björck, Å., and Anderson, N. Numerical Methods. Prentice-Hall, Inc., New Jersey, 1974.
[4] de Kleer, J. An assumption-based TMS. Artificial Intelligence, 28(2), March 1986.
[5] Falkenhainer, B. Modeling without amnesia: Making experience-sanctioned approximations. In The Sixth International Workshop on Qualitative Reasoning, Edinburgh, August 1992.
[6] Falkenhainer, B. and Forbus, K. D. Compositional modeling: Finding the right model for the job. Artificial Intelligence, 51(1-3):95-143, October 1991.
[7] Mavrovouniotis, M. and Stephanopoulos, G. Reasoning with orders of magnitude and approximate relations. In Proceedings of the Sixth National Conference on Artificial Intelligence, pages 626-630, Seattle, WA, July 1987. Morgan Kaufmann.
[8] Michael, W. and Siddall, J. N. The optimization problem with optimal tolerance assignment and full acceptance. Journal of Mechanical Design, 103:842-848, October 1981.
[9] Nayak, P. P. Validating approximate equilibrium models. In Proceedings of the AAAI-91 Workshop on Model-Based Reasoning, Anaheim, CA, July 1991. AAAI Press.
[10] Raiman, O. Order of magnitude reasoning. Artificial Intelligence, 51(1-3):11-38, October 1991.
[11] Sacks, E. Automatic qualitative analysis of dynamic systems using piecewise linear approximations. Artificial Intelligence, 41(3):313-364, 1989/90.
[12] Sacks, E. Automatic analysis of one-parameter planar ordinary differential equations by intelligent numerical simulation. Artificial Intelligence, 48(1):27-56, February 1991.
[13] Shirley, M. and Falkenhainer, B. Explicit reasoning about accuracy for approximating physical systems. In Working Notes of the Automatic Generation of Approximations and Abstractions Workshop, July 1990.
[14] Timoshenko, S. Theory of Elasticity. McGraw-Hill, New York, 1934.
[15] Weld, D. Approximation reformulations. In Proceedings of the Eighth National Conference on Artificial Intelligence, Boston, MA, July 1990. AAAI Press.
[16] Weld, D. S. Reasoning about model accuracy. Artificial Intelligence, 56(2-3):255-300, August 1992.
[17] Yip, K. M.-K. Understanding complex dynamics by visual and symbolic reasoning. Artificial Intelligence, 51(1-3):179-221, October 1991.
[18] Young, W. C. Roark's Formulas for Stress & Strain, Sixth Edition. McGraw-Hill, New York, NY, 1989.
Numerical Behavior Envelopes for Qualitative Models*

Herbert Kay and Benjamin Kuipers
Department of Computer Sciences
University of Texas at Austin
Austin, Texas 78712
bert@cs.utexas.edu and kuipers@cs.utexas.edu

Abstract

Semiquantitative models combine both qualitative and quantitative knowledge within a single semiquantitative qualitative differential equation (SQDE) representation. With current simulation methods, the quantitative knowledge is not exploited as fully as possible. This paper describes dynamic envelopes - a method to exploit quantitative knowledge more fully by deriving and numerically simulating an extremal system whose solution is guaranteed to bound all solutions of the SQDE. It is shown that such systems can be determined automatically given the SQDE and an initial condition. As model precision increases, the dynamic envelope bounds become more precise than those derived by other semiquantitative inference methods. We demonstrate the utility of our method by showing how it improves the dynamic monitoring and diagnosis of a vacuum pumpdown system.

Introduction

Many models of real systems are incompletely specified either because a precise model of the system does not exist or because the parameters of the model span some range of values. Qualitative simulation methods [de Kleer and Brown, 1984; Forbus, 1984; Kuipers, 1984; Kuipers, 1986] permit such systems to be simulated in the face of this incompleteness by transforming the system into a related system in a more abstract space of qualitative values where model imprecision can be dealt with by the rules of qualitative mathematics. Semiquantitative models [Kuipers and Berleant, 1988; Berleant and Kuipers, 1992] reduce model imprecision by adding numerical knowledge to the purely qualitative representation. Predictions from semiquantitative models are more precise (i.e., more tightly bounded), while still retaining the accuracy (i.e., all possible behaviors are found) provided by purely qualitative methods.

*This work has taken place in the Qualitative Reasoning Group at the Artificial Intelligence Laboratory, The University of Texas at Austin. Research of the Qualitative Reasoning Group is supported in part by NSF grants IRI-8905494, IRI-8904454, and IRI-9017047, by NASA contract NCC 2-760, and by the Jet Propulsion Laboratory.

This paper presents a new inference method called dynamic envelopes that more fully exploits the semiquantitative representation than existing methods. It works by numerically simulating a set of (typically nonlinear) differential equations whose solutions are guaranteed to bound all behaviors of the semiquantitative QDE. This approach captures the benefits of both qualitative and quantitative reasoning as all possible behaviors of the system are simulated [Kuipers, 1986], and tighter numerical bounds are deduced, yielding more precise predictions for each behavior. These benefits are especially important in monitoring tasks where early detection of deviations is vital.

We represent semiquantitative models as QSIM QDEs [Kuipers, 1986] augmented with envelopes for all monotonic functions and numeric ranges for all model variables. We call this representation an SQDE (for semiquantitative QDE). Our technique generates a bounding ordinary differential equation (ODE) system derived from the SQDE that is numerically simulated to yield bounds on all model variables. Note that since the ODE system is in general a non-linear vector function defined over a multidimensional state space, it has no closed-form solution and so the integration must be performed numerically. The resulting bounds on the SQDE as a function of $t$ are called the dynamic envelopes for the system.

The strength of this method is apparent when compared to other semiquantitative approaches such as FuSim [Shen and Leitch, 1991] and Q2 [Kuipers and Berleant, 1988]. These simulators also use SQDE models, but produce overly conservative bounds because they use a simulation time-step determined by qualitative distinctions. To better understand this, consider simulating the second order model of the two-tank cascade in Figure 1a using Q2, an extension to QSIM [Kuipers, 1986]. Assume that the partially known monotonic function $f \in M^+$ is bounded by the functions (static envelopes) shown in Figure 1b.

606 Kay
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

[Figure 1: A second order cascaded tank system and its behaviors.
(a) System definition ($c$ is constant): $A' = c - f(A)$, $B' = f(A) - f(B)$.
(b) Envelopes for $f(amt) \in M^+$.
(c) Q2 behavior for $B(t)$ with initial condition A = FULL, B = 0.
(d) Possible curves corresponding to (c).
(e) Dynamic envelopes defining the lower bound $\underline{B}(t)$ and the upper bound $\overline{B}(t)$ on $B(t)$. The rectangular Q2 range prediction is superimposed. Note that the dynamic envelopes are much tighter than the Q2 bound.]

Q2 produces predictions by first constructing a qualitative description of the behavior. This behavior consists of a series of time-points where one or more model variables change qualitative value, and the intervening time-intervals over which the behavior is qualitatively unchanging. It then applies a range propagation algorithm at each time-point which assigns interval values to each model variable at the time-point.
These intervals represent the possible range of values that the variable could have at that time-point. Figure 1c shows the Q2 plot of the amount in tank B with the given static functional envelopes and an initial state with tank A full and tank B empty. Note that there are only three time-points in the behavior description and, due to model imprecision, the possible range for each time-point (other than $T_0$) is infinite. There is therefore little that can be said about the possible trajectories described by the prediction. For example, any of the behaviors shown in Figure 1d are consistent with the prediction.

The problem is that the precision of a numerical simulation is directly related to the number and density of the time-points in the simulation, and a simulator whose time-step is based solely on the qualitative distinctions cannot adequately control these quantities. The dynamic envelope method avoids this problem by using a standard numerical method (such as Runge-Kutta) which chooses time-points based on local simulation error estimates. This method results in a much smaller time-step, and hence a more precise simulation. The remaining imprecision - the difference between the dynamic envelopes in Figure 1(e) - more closely reflects the incomplete knowledge in the model itself.

Dynamic Envelopes

To numerically simulate the bounds of an SQDE, bounding equations for each state variable must be generated. Our method attempts to find a set of extremal equations for a system. An extremal equation is a bound on the derivative of a state variable (as opposed to a bound on the value of the state variable). It may be either minimal or maximal.

Let $A : \mathbf{x}' = f(\mathbf{x})$ be an ODE system with state vector $\mathbf{x}$. For each $x_i \in \mathbf{x}$, let $x_i' = f_i(\mathbf{x}_i)$ be the equation for the derivative of $x_i$, where $\mathbf{x}_i \subseteq \mathbf{x}$ is the set of state variables that $f_i$ depends upon. For each $x_i$, let $\underline{x}_i$ and $\overline{x}_i$ denote the lower and upper bounds on $x_i$. We will use the term $y_i$ to refer to either $\underline{x}_i$ or $\overline{x}_i$.
We say that $y_i' = g_i(\mathbf{x}_i)$ is a minimal equation for $x_i$ if $y_i = x_i$ implies $y_i' \le x_i'$, and maximal if $y_i = x_i$ implies $y_i' \ge x_i'$. The function $g_i$ is called an extremal expression for $f_i$. A set of equations is an extremal system for the system $A$ if it consists of a minimal and a maximal equation for each $x_i \in \mathbf{x}$.

We can generate a set of extremal equations for any SQDE that is written as a system of equations of the form $x_i' = f(\mathbf{x}_i)$ where $f$ is an expression composed of addition, subtraction, multiplication, division, unary minus, and arbitrary monotonic functions. The algorithm uses the functions $L(e)$ and $U(e)$, which take an expression and return the corresponding minimal or maximal expression as defined in Table 1.

Table 1: Translation table for extremal expressions of the equation $x_i' = f_i(\mathbf{x}_i)$.

  $e$          | $L(e)$                      | $U(e)$
  $c$          | $\underline{c}$             | $\overline{c}$
  $x_j$        | $L(x_j)$                    | $U(x_j)$
  $x_i$        | $\beta(x_i)$†               | $\beta(x_i)$†
  $A + B$      | $L(A) + L(B)$               | $U(A) + U(B)$
  $A \times B$ | $L(A) \times L(B)$‡         | $U(A) \times U(B)$‡
  $A - B$      | $L(A) - U(B)$               | $U(A) - L(B)$
  $A \div B$   | $L(A) \div U(B)$‡           | $U(A) \div L(B)$‡
  $-A$         | $-U(A)$                     | $-L(A)$
  $M^+(A)$     | $\underline{M}^+(L(A))$     | $\overline{M}^+(U(A))$
  $M^-(A)$     | $\underline{M}^-(U(A))$     | $\overline{M}^-(L(A))$

Let $\beta(f_i)$ be the desired bound on $x_i'$ ($\beta = L$ or $\beta = U$). The table is applied recursively to the subexpressions of $f_i$. The symbol $x_j$ is any state variable other than $x_i$, $c$ is a constant, $M^+$ and $M^-$ are monotonic functions, $\underline{c}$ and $\overline{c}$ return the lower or upper range values of $c$, and $\underline{M}$ and $\overline{M}$ return the lower or upper functional envelope of the monotonic function. For state variables, $L(x)$ returns the variable $\underline{x}$ and $U(x)$ returns the variable $\overline{x}$.

The extremal equations are generated by computing for each $x_i$ the expressions $L(f_i)$ and $U(f_i)$ using Table 1. This yields a set of $2n$ equations which represent an ODE of order $2n$, which is the extremal system for the SQDE. Let the relation $R_i$ be $\le$ when $y_i \equiv \underline{x}_i$ and $\ge$ when $y_i \equiv \overline{x}_i$. In [Kay, 1991], the following theorem is proved:

Let $A : \mathbf{x}' = f(\mathbf{x})$ be an ODE system. Let $\alpha : \mathbf{y}' = g(\mathbf{y})$ be an extremal system for $A$. Assume that for all $i$, $y_i \; R_i \; x_i$ at $t = 0$.
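Table 1 amounts to a small recursive translator over expression trees. A sketch in that spirit (positive-quantity case only; the tuple encoding and envelope names are my own, not the paper's):

```python
def extremal(expr, bound, env, self_var=None, self_bound=None):
    """Translate an expression tree into its L ('lo') or U ('hi') version.
    env maps a monotonic-function name to the names of its (lower, upper)
    static envelopes.  Per the table's footnote, occurrences of the variable
    whose own derivative is being bounded (self_var) always take self_bound."""
    other = 'hi' if bound == 'lo' else 'lo'
    op = expr[0]
    if op == 'const':                       # ('const', lo, hi)
        return expr[1] if bound == 'lo' else expr[2]
    if op == 'var':                         # ('var', name)
        b = self_bound if expr[1] == self_var else bound
        return ('var', expr[1], b)
    if op in ('+', '*'):                    # both arguments take the same bound
        return (op, extremal(expr[1], bound, env, self_var, self_bound),
                    extremal(expr[2], bound, env, self_var, self_bound))
    if op == '-':                           # subtrahend takes the opposite bound
        return ('-', extremal(expr[1], bound, env, self_var, self_bound),
                     extremal(expr[2], other, env, self_var, self_bound))
    if op == 'M+':                          # ('M+', fname, arg)
        lo_f, hi_f = env[expr[1]]
        return ('apply', lo_f if bound == 'lo' else hi_f,
                extremal(expr[2], bound, env, self_var, self_bound))
    raise ValueError('unknown operator: %r' % op)

# A' = c - f(A) with c in [1, 2] and f in M+ with envelopes f_lo <= f <= f_hi.
rhs = ('-', ('const', 1.0, 2.0), ('M+', 'f', ('var', 'A')))
env = {'f': ('f_lo', 'f_hi')}
lo_eq = extremal(rhs, 'lo', env, self_var='A', self_bound='lo')
hi_eq = extremal(rhs, 'hi', env, self_var='A', self_bound='hi')
```

Applying it to $A' = c - f(A)$ yields the lower equation $\underline{c} - \overline{f}(\underline{A})$ and the upper equation $\overline{c} - \underline{f}(\overline{A})$, matching the cascade example below.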
Then for all $t$, $y_i(t) \; R_i \; x_i(t)$.

This states that if the state of the extremal system starts on the "correct side" of the SQDE, then it will remain on that side and hence bound the solution for all time. Once the extremal system has been found, it can be simulated by a standard numerical simulation technique such as Runge-Kutta. The complete simulation method is thus:

1. For each initial state of the SQDE, generate its extremal system.

2. Using a numerical simulator, simulate the extremal system for all initial states.

†The bound on $x_i$ is the same as that for $f_i$ regardless of whether an $L(x_i)$ or $U(x_i)$ is desired.
‡The expressions for multiplication and division are for the case where $A$ and $B$ are positive. For other cases, the expressions in the table for $L(e)$ and $U(e)$ are computed differently, using information about the signs of $A$ and $B$.

A simple example

To demonstrate the method, we apply it to the second-order model in Figure 1a. The qualitative equations of the system are

$A' = c - f(A)$
$B' = f(A) - f(B)$

where $c \in (0, \infty)$ and $f \in M^+$. The semiquantitative model also includes numerical bounds on $c$ such that $\underline{c} \le c \le \overline{c}$, and static envelope functions $\underline{f}$ and $\overline{f}$ such that $\underline{f} \le f \le \overline{f}$. The corresponding extremal system is:

$\underline{A}' = \underline{c} - \overline{f}(\underline{A})$
$\underline{B}' = \underline{f}(\underline{A}) - \overline{f}(\underline{B})$
$\overline{A}' = \overline{c} - \underline{f}(\overline{A})$
$\overline{B}' = \overline{f}(\overline{A}) - \underline{f}(\overline{B})$

Note that in this case, the extremal system partitions into two separate systems, one for $\underline{A}$ and $\underline{B}$, the other for $\overline{A}$ and $\overline{B}$. This is not the case in general. Figure 1e shows the behavior produced by the dynamic envelope method that corresponds to the Q2-produced behavior shown in Figure 1c. Note that the numerical bounds are much tighter than those of Q2.

Using dynamic envelopes to infer behavior characteristics

The dynamic envelope method bases its prediction on the ability to bound the first derivatives of the system. As a result, the extremal systems are not generally members of the class of ODEs represented by the SQDE.
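The extremal cascade above can be checked numerically with any standard integrator. This Euler sketch uses assumed envelopes $\underline{f}(x) = 0.08x$, $\overline{f}(x) = 0.12x$ and $c \in [9, 11]$ (values of my own choosing), and verifies that one "true" model drawn from inside the envelopes stays bracketed:

```python
def simulate_true(c, f, A0, B0, t_end=20.0, dt=0.01):
    """Euler-integrate A' = c - f(A), B' = f(A) - f(B)."""
    A, B = A0, B0
    for _ in range(int(t_end / dt)):
        A, B = A + (c - f(A)) * dt, B + (f(A) - f(B)) * dt
    return A, B

def simulate_extremal(c_lo, c_hi, f_lo, f_hi, A0, B0, t_end=20.0, dt=0.01):
    """Euler-integrate the lower and upper extremal systems of the cascade."""
    Al, Bl, Ah, Bh = A0, B0, A0, B0
    for _ in range(int(t_end / dt)):
        # lower system: A_' = c_ - f^(A_),  B_' = f_(A_) - f^(B_)
        # upper system: A^' = c^ - f_(A^),  B^' = f^(A^) - f_(B^)
        Al, Bl, Ah, Bh = (Al + (c_lo - f_hi(Al)) * dt,
                          Bl + (f_lo(Al) - f_hi(Bl)) * dt,
                          Ah + (c_hi - f_lo(Ah)) * dt,
                          Bh + (f_hi(Ah) - f_lo(Bh)) * dt)
    return (Al, Ah), (Bl, Bh)

f_lo = lambda x: 0.08 * x          # assumed static envelopes
f_hi = lambda x: 0.12 * x
f_true = lambda x: 0.10 * x        # one model inside the envelopes

(Alo, Ahi), (Blo, Bhi) = simulate_extremal(9.0, 11.0, f_lo, f_hi, 100.0, 0.0)
A, B = simulate_true(10.0, f_true, 100.0, 0.0)
```

Note how each lower equation mixes envelopes: outflow terms take the upper envelope and inflow terms the lower one, exactly as Table 1 dictates.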
Therefore, the dynamic envelopes do not necessarily have the same shape as the behaviors of the SQDE. This means that only "0th order" bounds are predicted. The width of the bounds will increase with increasing imprecision in the SQDE. The prediction may also become weak because the extremal system may be numerically unstable, which results in the envelopes diverging from each other with time. In such a case, the dynamic envelope bounds can eventually be worse than those from Q2.

To combat this effect, we combine the Q2 and dynamic envelope simulation methods and thus gain the benefits of both. Q2 describes behaviors as a series of time-points with intervening time intervals. It places ranges on the location of each time-point and the values of the model variables at the time-point. The time-intervals are defined simply by their adjacent time-points. We gain predictive precision by intersecting dynamic envelopes with Q2 in two ways:

- By intersecting over time-intervals, we improve the precision over the interval. The Q2 time-interval prediction is simply that each model variable is somewhere between the values that it has at the adjacent qualitative time-points. Because it uses a smaller time-step, dynamic envelopes can be more precise over such intervals (as seen in Figure 1).

- By intersecting at time-points, we potentially reduce the ranges of the model variables at the time-points. This not only improves the precision of the prediction, but may also open up gaps into which semiquantitative time-point interpolation methods such as Q3 [Berleant and Kuipers, 1992] can insert time-points.

Note that since both the dynamic envelope method and Q2 bound all real behaviors of the model, if an intersection is empty, the behavior is refuted. Hence, dynamic envelopes can be used as a behavior filter.

The Vacuum Chamber

In this section we model a complex system, the vacuum chamber, and use the dynamic envelope simulation method to improve the response time of a monitoring system based on the MIMIC system [Dvorak and Kuipers, 1989; Dvorak, 1992].

The production of high vacuum is of great importance to semiconductor fabrication, as many of the steps (such as sputtering and molecular beam epitaxy) cannot be performed if there are foreign particles in the process chamber. Unfortunately, creating such ultra-high vacua can be expensive and time-consuming. To reach ultimate pressures of $10^{-9}$ Torr can take several hours,3 and something as innocuous as a fingerprint left on the chamber during servicing can cause a huge performance loss.

Because of this risk, it is important to service vacuum equipment only when there is a problem. This suggests a need for a monitoring system that can detect when the system goes out of tolerance. The normal approach to monitoring is to run the pumpdown process until the chamber reaches a steady-state pressure and then to compare this pressure to the expected value. Unfortunately, it can take several hours to reach a steady-state pressure. If the monitoring method could detect failures before the chamber reaches a steady-state pressure, the time and expense of unnecessarily running the pumpdown procedure could be avoided. A model-based method that can track the state of the system while it is changing is one way to solve this problem.

In order to construct such a system, a model of the pumpdown process must be constructed. The difficulty in modeling this process numerically is that

3Atmospheric pressure is 760 Torr.
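The combination step reduces to interval intersection, with an empty result refuting the behavior. A minimal sketch (the ranges below are illustrative, not taken from the paper's figures):

```python
def intersect(iv1, iv2):
    """Intersect two closed intervals; None means the behavior is refuted."""
    lo, hi = max(iv1[0], iv2[0]), min(iv1[1], iv2[1])
    return (lo, hi) if lo <= hi else None

q2 = (4.0, 25.0)              # Q2 range at a qualitative time-point
dyn = (7.5, 12.0)             # dynamic-envelope range at the same time
both = intersect(q2, dyn)     # tighter combined prediction
refuted = intersect((0.0, 3.0), dyn)   # empty intersection: behavior filtered
```

Since both predictors are sound bounds on all real behaviors, taking the tighter of the two at every point loses nothing and can only sharpen the prediction.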
The Vacuum Chamber In this section we model a complex system, the vac- uum chamber, and use the dynamic envelope simula- tion method to improve the response time of a mon- itoring system based on the MIMIC system [Dvorak and Kuipers, 1989; Dvorak, 19921. The production of high vacuum is of great impor- tance to semiconductor fabrication as many of the steps (such as sputtering and molecular beam epitaxy) cannot be performed if there are foreign particles in the process chamber. Unfortunately, creating such ultra-high vacua can be expensive and time-consuming. To reach ultimate pressures of 10 -’ Torr can take several hours3 and something as innocuous as a fingerprint left on the chamber during servicing can cause a huge perfor- mance loss. Because of this risk, it is important to service vac- uum equipment only when there is a problem. This suggests a need for a monitoring system that can detect when the system goes out of tolerance. The normal ap- proach to monitoring is to run the pumpdown process until the chamber reaches a steady-state pressure and then to compare this pressure to the expected value. Unfortunately, it can take several hours to reach a steady-state pressure. If the monitoring method could detect failures before the chamber reaches a steady- state pressure, the time and expense of unnecessarily running the pumpdown procedure could be avoided. A model-based method that can track the state of the system while it is changing is one way to solve this problem. In order to construct such a system, a model of the pumpdown process must be constructed. The difficulty in modeling this process numerically is that 3Atmospheric pressure is 760 Torr. eassning about Physical Systems 609 there is no practical theory for the sorption 4 of gases. Therefore, any useful model must deal with uncertain- ties in the underlying modeling assumptions. Quali- tative modeling permits reasoning with these types of uncertainties. 
The pumpdown process is intuitively very simple. A chamber at atmospheric pressure initially contains some amount of gas. A pump, which can displace a certain amount of gas per unit time and pressure, removes gas from the chamber, hence lowering the pressure. For a simple vacuum pump, this process will continue until the pump reaches its cutoff pressure, at which point the minimum pressure within the pump is the same as the pressure within the chamber. For pumps that operate in the high vacuum range (between $10^{-3}$ and $10^{-5}$ Torr), there are additional effects to consider. The most significant of these is that of "outgassing" - a process where gas initially present in the walls of the chamber desorbs and thereby increases the chamber pressure.

Our model takes into account both the effects of the pump and outgassing. The system is described by the following equations:

$A' = -f_2(A, B) - ptp(A) + leak(A)$   (1)
$B' = f_2(A, B)$   (2)
$f_2(A, B) = area \cdot ads(A, B) - des(B)$   (3)
$ads(A, B) = k(pr(A)) \cdot sf(B)$   (4)
$ptp(A) = pr(A) \cdot speed(pr(A))$   (5)
$leak(A) = C_{leak} \cdot (p_{atm} - pr(A)) \cdot C_l$   (6)

where $A$ is the amount of gas in the chamber and $B$ is the amount of gas adsorbed in the chamber walls (all other terms are defined in Table 2 in the Appendix). For a working vacuum chamber, the leak rate is zero and hence $C_{leak} = 0$. For model-based diagnosis, however, fault models of the system must also be created. By setting $C_{leak}$ to a positive value, the above system models a chamber with a leak. The behavior of both the working and leaking models is for $A$ to decrease until it reaches a steady state. With $C_{leak} > 0$, the steady-state value of $A$ will be higher than when $C_{leak} = 0$.

[Figure 2: The predicted behaviors of the vacuum chamber $A$ variable as a function of time for both a normal and a leaking model are shown using dynamic envelopes (the dotted and short-dashed envelopes, respectively). The behaviors of the two hypotheses are clearly distinguished after t = 4 minutes. For comparison, the Q2 predictions for both hypotheses are also displayed, although since Q2 is unable to disambiguate the behaviors quantitatively, the Q2 results are represented by the same prediction (long-dashed box).]

Simulation results

The two systems were augmented with envelopes for the functions $speed(p)$, $des(B)$, $k(A)$, and $sf(B)$ and then simulated with both Q2 and the dynamic envelope method using the values described in Table 3. The resulting envelopes are shown in Figure 2 together with the corresponding Q2 range predictions. First, notice that Q2 predicts identical ranges for the normal and faulty model, whereas the dynamic envelope method predicts no overlap between the two models after t = 4 minutes. Second, notice that the dynamic envelope prediction for the lower envelope of the normal system is less precise than the Q2 prediction. This situation is not a problem since the diagnostic algorithm uses the intersection of the Q2 and dynamic envelope predictions.

Our diagnostic program is based on a simplified version of the MIMIC system [Dvorak and Kuipers, 1989]. We provided our own predefined fault models and used dynamic envelopes rather than Q2 to predict variable ranges. We then ran our system against a stream of pressure measurements (taken every minute) that simulated a gasket leak in our vacuum system. Our diagnostic system was able to detect the leak after four measurements, whereas the diagnostic system using only Q2 required nine measurements to detect the fault.5 Further improvements are possible by recomputing the envelopes of both models after every new measurement is taken. Note that leak model envelopes are predicted based on an assumed leak size range.
Because of this, when MIMIC refutes the leak model, it is really partitioning the space of possible leak sizes into three regions (those within the range, those bigger, and those smaller), with the first two regions refuted. This provides a method for converging on the precise leak size through successive partitions based on refining the leak size hypothesis.

4 Desorption is the process by which gases trapped on a substance are released. The reverse process is called adsorption. Adsorption is different from absorption in that the gases do not dissolve into the substance; they simply "stick" to its surface.

5 Q2 detected the fault because of a difference in the qualitative behavior of the two models that is detectable after the chamber pressure becomes constant.

610 Kay

Related Work

There has been considerable interest in the combination of qualitative and quantitative reasoning. This work includes the development of combined qualitative/quantitative representations (see [Williams, 1988; Kuipers and Berleant, 1988; Cheng and Stephanopoulos, 1988; Karp and Friedland, 1989; Simmons, 1986]) and the use of numerical and qualitative knowledge for process monitoring [Dvorak and Kuipers, 1989] and process planning [Fusillo and Powers, 1988; Lakshmanan and Stephanopoulos, 1988; LeClair and Abrams, 1988]. The methods and software described in [Kuipers and Berleant, 1988] and [Dvorak and Kuipers, 1989] (Q2 and MIMIC) are integral parts of this research. Recently Berleant and Kuipers have extended Q2 to provide a single representation for both qualitative and quantitative simulation [Berleant and Kuipers, 1992; Berleant, 1989]. In their method, called Q3, the range of a qualitative parameter is narrowed through an adaptive discretization technique that subdivides qualitative intervals. Q3 and the dynamic envelope method take different approaches to improving the precision of semiquantitative inference.
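The successive-partitioning idea from the diagnosis discussion, refuting sub-ranges of an assumed leak-size interval until the surviving range is tight, can be sketched as a bisection loop. This is a hypothetical illustration, not MIMIC's implementation:

```python
# Hypothetical sketch: converge on a leak size by refuting halves
# of an assumed leak-size range (not MIMIC's actual algorithm).
def refine_leak_size(lo, hi, consistent, tol=1e-3):
    """Narrow [lo, hi] around the true leak size, where
    consistent(a, b) reports whether envelopes predicted for leak
    sizes in [a, b] still overlap the measurements."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if consistent(lo, mid):   # lower half still consistent: keep it
            hi = mid
        else:                     # lower half refuted: keep upper half
            lo = mid
    return lo, hi

true_size = 0.0137                 # made-up "actual" leak size
overlaps = lambda a, b: a <= true_size <= b
lo, hi = refine_leak_size(0.0, 1.0, overlaps)
print(lo <= true_size <= hi and hi - lo <= 1e-3)  # → True
```

The invariant is that the true size always stays inside the surviving interval, since only provably inconsistent halves are discarded.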
Our experience suggests that the two methods may have complementary strengths, so we are exploring methods for coordinating their application.

The problem of predicting behavioral bounds on uncertain systems is also addressed in control theory and ecological system simulation. Sensitivity analysis [Deif, 1986] is used to investigate the effect of small-scale perturbations to a model. Tolerance banding [Ashworth, 1982; Lunze, 1989] is used to predict the effect of larger-scale model uncertainties. Both methods are normally restricted to linear models and hence permit uncertainty in parameter values or initial conditions only. The dynamic envelope method is not restricted by linearity assumptions, and so it can also handle models with uncertain (and possibly nonlinear) functional relations. Bounding techniques that do not rely on linearity assumptions have been developed for VLSI simulation [Zukowski, 1986]; however, they rely on domain-specific assumptions about MOS VLSI circuits.

Interval analysis [Moore, 1979] also provides methods for simulating SQDEs by recasting standard numerical ODE solvers to work with interval arithmetic [Markov and Angelov, 1986]. In contrast, the dynamic envelope method recasts the SQDE into an ODE of higher order and uses a standard numerical ODE solver directly. An advantage of this approach is that model imprecision is separated from the error introduced by the simulator. Another benefit is that we can directly take advantage of advances in the field of numerical analysis by switching to more powerful simulators as they are developed.

This research also relates to the measurement interpretation theories ATMI [Forbus, 1986] and DATMI [DeCoste, 1990]. Both of these methods abstract a measurement stream into qualitative values and then select possible behaviors by comparing measurement segments to states in the total envisionment graph.
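The interval-arithmetic recasting mentioned above can be illustrated on a one-line ODE. The toy below bounds Euler solutions of x' = -k·x when the rate constant k is only known to lie in an interval; a real validated solver in the style of [Moore, 1979] would also bound the discretization error, which this sketch ignores:

```python
# Toy interval-style bounding of x' = -k*x with k in [k_lo, k_hi].
def euler_interval(x0, k_lo, k_hi, dt, steps):
    lo = hi = x0
    for _ in range(steps):
        # For x >= 0 and k > 0, -k_hi*x is the most negative slope and
        # -k_lo*x the least negative, so the bounds stay ordered.
        lo, hi = lo + dt * (-k_hi * lo), hi + dt * (-k_lo * hi)
    return lo, hi

lo, hi = euler_interval(1.0, 0.9, 1.1, dt=0.01, steps=100)
print(lo < hi)             # → True
print(lo <= 0.3679 <= hi)  # e^-1 (the k = 1 solution at t = 1) → True
```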
By hypothesizing measurement errors, DATMI also manages to interpret noisy sensor data. By contrast, the dynamic envelope method augments the qualitative behavior with numerical envelopes that are guaranteed to bound any solution of the system and then compares the measurement data directly. This approach has the advantage that distinctions between models can be detected over intervals where their qualitative behaviors are identical. Furthermore, by recomputing the envelopes as new measurements are received, the bounding solutions can be further tightened. Measurement faults can also be modeled by assuming that the measurement data itself represents a range rather than a precise point.

The work on SIMGEN [Forbus and Falkenhainer, 1990] is also related to the work described in this paper. It, too, generates a standard numerical simulation by extracting the relevant information from a qualitative model. It differs in that it generates an exact numerical model based on a library of predefined functions rather than generating a bounded model expressing the inexactness of the qualitative model. Since it sacrifices accuracy for precision, it is not particularly suited to tasks such as process monitoring in which an exact numerical model cannot be found.

Conclusions

The dynamic envelope method combines qualitative and quantitative simulation so that both representations can be used in problem solving. QSIM produces all behaviors associated with a particular model, and dynamic envelopes provide detailed numerical ranges for each behavior. Because the generation of extremal systems is guided by the qualitative behaviors, the expense of needless numerical simulation is eliminated. Because the envelope systems are automatically generated from the SQDEs used by Q2, the method can be used with any existing Q2 model. The precision of the dynamic envelope predictions depends on the precision of the SQDE.
As model precision increases, dynamic envelope predictions become more precise than Q2 predictions. Even when the model is very imprecise, combining dynamic envelopes with other QSIM prediction techniques leads to improved precision.

In monitoring tasks, the dynamic envelope method improves the predictive power of SQDEs both in accuracy (meaning that fault hypotheses can be more easily eliminated) and failure detection time (meaning that there is more time to recover from failures). In cases where measurement acquisition is expensive, the increased accuracy of the predictions may allow fewer measurements to be made and errors to be detected sooner.

The ultimate goal of our research is to construct a self-calibrating monitoring and diagnosis system that can learn models directly from observations of a physical system. Part of this task involves developing a semiquantitative simulation method that produces precise predictions without excessive computation. The dynamic envelope method helps address this need by providing a new form of inference especially suited for high precision semiquantitative models.

Acknowledgments

The authors would like to thank Adam Farquhar for his comments on an earlier version of this paper.

Reasoning about Physical Systems 611
Vacuum system terms and SQDE quantitative knowledge

Table 2: Definition of terms used in equations 1 through 6.

Term        Definition (units)
A           amount of chamber gas (molec)
B           amount of gas in chamber walls (molec)
area        surface area of chamber (cm^2)
pr(A)       pressure exerted by A molecules [assuming fixed chamber volume V] (Torr)
ptp(A)      pump throughput (Torr-liters/min)
speed(p)    pump speed (liters/min)
ads(A,B)    rate: chamber gas -> walls (molec/cm^2-min)
des(B)      rate: chamber gas <- walls (molec/cm^2-min)
f2(A,B)     net flow of gas out of walls (molec/min)
mi(A)       # molecules incident on walls (molec/cm^2-min)
sf(B)       sticking factor: fraction of mi(A) that "stick" to walls
leak(A)     rate: room air -> chamber (molec/min)
Cleak       leak conductance (liters/min)
patm        atmospheric pressure (760 Torr)
C1          constant: Torr-liters -> molec

Table 3: Initial ranges and functional envelopes for the vacuum chamber model. These values are based on data from Duval [Duval, 1988].

Term        Value or envelope description
A           [2.34 x 10^24, 2.34 x 10^24] molec
B           [1.36 x 10^21, 1.50 x 10^21] molec
area        [13100, 14500] cm^2
Cleak       [0.001, 0.01] liters/min
speed(p)    M+ piecewise linear with narrowing envelope (about 90 liters/min)
des(B)      M+ linear with both envelopes equal
mi(p)       M+ linear with both envelopes equal
sf(B)       M- exponential envelope narrowing from [2, 0.5] to [0, 0] at B ~ 0.3

References

Ashworth, M. J. 1982. Feedback Design of Systems With Significant Uncertainty. John Wiley and Sons, New York.
Berleant, Daniel and Kuipers, Benjamin 1992. Combined qualitative and numerical simulation with Q3. In Faltings, Boi and Struss, Peter, editors 1992, Recent Advances in Qualitative Physics. MIT Press.
Berleant, Daniel 1989. A unification of numerical and qualitative model simulation. In Proceedings of the Model-based Reasoning Workshop.
Cheung, Jarvis Tat-Yin and Stephanopoulos, George 1990. Representation of process trends - part I. A formal representation framework.
Computers and Chemical Engineering 14(4/5):495-510.
de Kleer, Johan and Brown, John Seely 1984. A qualitative physics based on confluences. Artificial Intelligence 24:7-83.
DeCoste, Dennis 1990. Dynamic across-time measurement interpretation. In Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90). 373-379.
Deif, Assam 1986. Sensitivity Analysis in Linear Systems. Springer-Verlag, Berlin.
Duval, Pierre 1988. High Vacuum Production in the Microelectronics Industry. Elsevier Science Publishers, Amsterdam.
Dvorak, Daniel Louis and Kuipers, Benjamin 1989. Model-based monitoring of dynamic systems. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence. 1238-1243.
Dvorak, Daniel Louis 1992. Monitoring and diagnosis of continuous dynamic systems using semiquantitative simulation. Technical Report AI92-170, Artificial Intelligence Laboratory, University of Texas at Austin, Austin, Texas 78712.
Forbus, Kenneth D. and Falkenhainer, Brian 1990. Self-explanatory simulations: An integration of qualitative and quantitative knowledge. In Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90). 380-387.
Forbus, Kenneth 1984. Qualitative process theory. Artificial Intelligence 24:85-168.
Forbus, Kenneth D. 1986. Interpreting measurements of physical systems. In Proceedings of the Fifth National Conference on Artificial Intelligence (AAAI-86). 113-117.
Fusillo, R. H. and Powers, G. J. 1988. Operating procedure synthesis using local models and distributed goals. Computer and Chemical Engineering 12(9/10):1023-1034.
Karp, Peter D. and Friedland, Peter 1989. Coordinating the use of qualitative and quantitative knowledge in declarative device modeling. In Widman, Lawrence E.; Loparo, Kenneth A.; and Nielson, Norman R., editors 1989, Artificial Intelligence, Simulation and Modeling. John Wiley and Sons. Chapter 7.
Kay, Herbert 1991.
Monitoring and diagnosis of multi-tank flows using qualitative reasoning. Master's thesis, The University of Texas at Austin.
Kuipers, Benjamin and Berleant, Daniel 1988. Using incomplete quantitative knowledge in qualitative reasoning. In Proceedings of the Seventh National Conference on Artificial Intelligence. 324-329.
Kuipers, Benjamin 1984. Commonsense reasoning about causality: Deriving behavior from structure. Artificial Intelligence 24:169-204.
Kuipers, Benjamin 1986. Qualitative simulation. Artificial Intelligence 29:289-338.
Lakshmanan, R. and Stephanopoulos, G. 1988. Synthesis of operating procedures for complete chemical plants - I. Hierarchical, structured modelling for nonlinear planning. Computer and Chemical Engineering 12(9/10):985-1002.
LeClair, Steven R. and Abrams, Frances L. 1988. Qualitative process automation. In Proceedings of the 27th National Conference on Decision and Control. 558-563.
Lunze, Jan 1989. Robust Multivariable Feedback Control. Prentice Hall.
Markov, S. and Angelov, R. 1986. An interval method for systems of ODEs. In Lecture Notes in Computer Science #212 - Interval Mathematics 1985. Springer-Verlag, Berlin. 103-108.
Moore, Ramon E. 1979. Methods and Applications of Interval Analysis. SIAM, Philadelphia.
Shen, Qiang and Leitch, Roy 1991. Synchronized qualitative simulation in diagnosis. In Working Papers from the Fifth International Workshop on Qualitative Reasoning about Physical Systems. 171-185.
Simmons, Reid 1986. "Commonsense" arithmetic reasoning. In Proceedings of the Fifth National Conference on Artificial Intelligence (AAAI-86). 118-124.
Williams, Brian C. 1988. MINIMA: A symbolic approach to qualitative algebraic reasoning. In Proceedings of the Sixth National Conference on Artificial Intelligence. 264-269.
Zukowski, Charles A. 1986. The Bounding Approach to VLSI Circuit Simulation. Kluwer Academic Publishers, Boston.
A Qualitative Method to Construct Phase Portraits*

Wood W. Lee
Schlumberger Dowell
P.O. Box 2710
Tulsa, OK 74101
lee@dsn.sinet.slb.com

Benjamin J. Kuipers
Department of Computer Sciences
University of Texas
Austin, TX 78712
kuipers@cs.utexas.edu

Abstract

We have developed and implemented in the QPORTRAIT program a qualitative simulation based method to construct phase portraits for a significant class of systems of two coupled first order autonomous differential equations, even in the presence of incomplete, qualitative knowledge.

Differential equation models are important for reasoning about physical systems. The field of nonlinear dynamics has introduced the powerful phase portrait representation for the global analysis of nonlinear differential equations.

QPORTRAIT uses qualitative simulation to generate the set of all possible qualitative behaviors of a system. Constraints on two-dimensional phase portraits from nonlinear dynamics make it possible to identify and classify trajectories and their asymptotic limits, and constrain possible combinations. By exhaustively forming all combinations of features, and filtering out inconsistent combinations, QPORTRAIT is guaranteed to generate all possible qualitative phase portraits. We have applied QPORTRAIT to obtain tractable results for a number of nontrivial dynamical systems.

Guaranteed coverage of all possible behaviors of incompletely known systems complements the more detailed, but approximation-based results of recently-developed methods for intelligently-guided numeric simulation [Nishida et al; Sacks; Yip; Zhao]. Combining the strengths of both approaches would better facilitate automated understanding of dynamical systems.

* This work has taken place in the Qualitative Reasoning Group at the Artificial Intelligence Laboratory, The University of Texas at Austin. Research of the Qualitative Reasoning Group is supported in part by NSF grants IRI-8905494, IRI-8904454, and IRI-9017047, by NASA contract NCC 2-760, and by the Jet Propulsion Laboratory.

Introduction

This report1 describes a qualitative simulation based method, implemented in the QPORTRAIT program, to construct phase portraits for a significant class of systems of two first order autonomous differential equations. It is a step towards a useful tool for automated reasoning about dynamical systems (i.e. differential equations), and shows that a dynamical systems perspective can give a tractable overview of a qualitative simulation problem.

Differential equations are important for reasoning about physical systems. While nonlinear systems often require complex idiosyncratic treatments, phase portraits have evolved as a powerful tool for global analysis of them. A state of a system is represented by a point in the phase space; change of the system state over time is represented by a trajectory; and a phase portrait is the collection of all possible trajectories of the system.

Phase portraits are typically constructed for exactly specified system instances by intelligently choosing samples of trajectories for numeric simulation and interpreting the results. This has led to recent development of numeric methods based reasoning in the phase space [Nishida et al; Sacks; Yip; Zhao]. These approaches are able to give good approximate results.

Based on qualitative simulation [Kuipers 86], and using knowledge of dynamical systems, QPORTRAIT is able to predict all possible phase portraits of incompletely known systems (in the form of qualitative differential equations, QDEs). Starting with a total envisionment [Forbus 84] of a system, QPORTRAIT progressively identifies, classifies, and combines features of the phase portrait, abstracting away uninteresting distinctions, and filtering out inconsistent combinations of features. Exhaustive search and elimination of only provable inconsistencies enable guaranteed coverage of behaviors.
This, and the ability to handle incomplete information about systems, complements numeric methods based approaches.

1 This report summarizes the work of [Lee 93].

614 Lee From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

QPORTRAIT is currently applicable to systems of two first order autonomous differential equations with nondegenerate fixed points. Various recently developed techniques have been incorporated to deal with qualitative simulation's potential for intractability. As a result, QPORTRAIT is able to produce tractable results for systems with fixed points at landmark values for the phase variables. We have applied QPORTRAIT to obtain tractable results for QDE versions of several well-known nonlinear systems, including a Lienard equation, a van der Pol equation, an undamped pendulum, and a predator-prey system.

In the rest of this report, we will first describe the underlying concepts of our work. Then we will describe the steps of our method, followed by an illustration of the steps using a Lienard equation example. Next we will present an argument that our method provides guarantee of coverage, discuss dependencies and limitations, and describe related work. We then end this report with our conclusion.

Underlying Concepts

Phase Portraits

In the phase portrait representation, a state of a system is represented by a point in the system's phase space, defined by a set of phase variables of the system. (A set of phase variables of a system is a minimal set of variables that fully describes the state of the system.) Change of the system state over time is represented by a trajectory in the phase space. A phase portrait is the collection of all possible trajectories of the system. The key characteristics of a phase portrait are the asymptotic limits of trajectories (i.e. where trajectories may emerge or terminate), and certain bounding trajectories that divide the phase space into stability regions.
For autonomous two-dimensional systems,

    x' = f(x, y)
    y' = g(x, y),

asymptotic limits of trajectories can only be one of: fixed points (where the system is stationary), closed orbits (where the system oscillates steadily forever), unions of fixed points and the trajectories connecting them, and points at infinity. Fixed points are either sinks (where trajectories only terminate), sources (where trajectories only emerge), saddles (where trajectories may either emerge or terminate), or centers (where trajectories neither emerge nor terminate). Bounding trajectories other than closed orbits are associated with saddles, and are called separatrices.

In restricting our attention to systems with nondegenerate fixed points (which are noncontiguous), local characteristics of fixed points are essentially linear. This means, in particular, that unions of fixed points and the trajectories connecting them can only be unions of saddles and separatrices connecting them. Furthermore, the essentially linear characteristics of saddle points means that exactly one separatrix enters a saddle in either of two opposite directions, and exactly one separatrix exits a saddle in either of two opposite directions.

Reasoning in Qualitative Phase Space2

To reason about phase portraits in qualitative phase space, we integrate the total envisionment and behavior generation approaches in qualitative simulation. A total envisionment [Forbus 84], using a coarse state space representation3, produces a transition graph of the n-dimensional state space for the QDE of the system in question. This includes all possible qualitative states a system can take on, and possible transitions between them, capturing all possible trajectories, and their asymptotic limits. Behavior generation [Kuipers 86] refines trajectory paths for two purposes: to check for each trajectory that not all behavior refinements of it are provably inconsistent, and to depict detailed trends of cyclic paths. These ideas are further discussed in the next section.
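For a concrete system instance, the taxonomy above (sink, source, saddle, center) follows from the eigenvalues of the Jacobian at the fixed point, which is also how the paper defines nondegeneracy (nonzero real parts). The sketch below is a numerical illustration for a linearization, not QPORTRAIT's qualitative procedure:

```python
import cmath

def classify_fixed_point(a, b, c, d, eps=1e-12):
    """Classify the fixed point of x' = a*x + b*y, y' = c*x + d*y
    from the eigenvalues of the Jacobian [[a, b], [c, d]].
    Assumes the fixed point is isolated (nonzero determinant)."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    if l1.real * l2.real < 0:
        return "saddle"
    if abs(l1.real) < eps and abs(l2.real) < eps:
        return "center"   # zero real parts: degenerate in the paper's sense
    return "sink" if l1.real < 0 else "source"

print(classify_fixed_point(-1, 0, 0, -2))  # → sink
print(classify_fixed_point(1, 0, 0, -1))   # → saddle
print(classify_fixed_point(0, 1, -1, 0))   # → center
```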
These ideas are further dis- cussed in the next section. A QDE description of a system may apply to iu- stances of a system that give rise to phase portraits with different local characteristics. For example, a nonlinear oscillator may be overdamped, giving rise to non-spiraling (nodal) trajectories into a sink; partially underdamped, with trajectories spiraling an arbitrary finite number of times around a sink; or totally under- damped, spiraling infinitely many times as it converges to the sink. These trajectories are mutually intersect- ing, and belong to different phase portraits, but the distinctions are local to the cyclic paths around a par- ticular sink. In order to arrive at a tractable global view of the set of qualitative phase portraits, we abstract such a local configuration into a spiral-nodal bundle of trajectories around a given sink or source [Lee93], representing the bundle with one of the constituents. Other examples of abstracting away detailed distiuctions are discussed subsequently. The major steps of QPORTRAIT are: envision, through total envisiorment, to capture all possible trajectories and their asymptotic limits, identify the asymptotic limits (possible origins aud destinations) from the envisionxueut graph, gather trajectories by exhaustively tracing paths be- tween possible origins and destinations, 2Notable earlier work in this area has been done by [Chiu 881, [Lee & Kuipers 881 and [Struss 881. 3The value a variable can take on is from a predeter- mined set of landmark v&es for the variable, or the set of intervals between these landmarks. Reasoning about Physical Systems 615 4. compose mutually non-intersecting trajectories into phase portraits. With a few exceptions identified explicitly below, all steps in this analysis have been automated. These techniques are described in more detail in [Lee93]. Capturing all Trajectories A QDE is first constructed for the system in question. 
While this process is manually performed, there are often straightforward transformations between func- tional relationships and QDE constraints. Next, to- tal envisionment captures all possible trajectories and their asymptotic limits. Fixed points are then identi- fied and checked for uondegeneracy4. This involves symbolic algebraic manipulation, and is performed manually (though a simple version can be relatively easily implemented). Potentially degenerate fixed points suggest possible bifurcation, and the system needs to be decomposed along these points. Before proceeding to identify asyniptotic limits, the envisionruent graph is projected onto the phase plane, and states not giving rise to distinctions in the phase plane are removed. These techniques are described in [Fouche 921 and [Lee93]. Identifying Asymptotic Limits The complete set of possible asymptotic limits (origins and destinations) of trajectories for autonomous two- diniensional systems with nondegenerate fixed points can be identified from the total envisionment graph. 1. Fixed points are quiescent states in the envisionment graph. Sinks have only predecessors; sources have only successors; saddles have both; and centers have neither. 2. Closed orbits are closed paths in the graph. (Closed paths may also represeut inward or outward spirals. These possibilities are distinguished in the next step, gathering trajectories.) 3. Separatrices are paths connecting to saddle points. The union of saddle points and separatrices connect- ing them (homoclinic and heteroclinic orbits) can also be asymptotic limits of trajectories. 4. Points at infinity that are asymptotic limits have either 00 or -oo as their qmag, and either have no predecessors, or have no successors. Gathering Trajectories Trajectories are gathered by exhaustively tracing pos- sible paths between origins aud destiuations, abstract- ing away unimportant distinctions. 
Loops representing 4This is done by checking to see that the eigenvalues of the Jacobian matrix of the system at the fixed points have nonaero real parts. 616 Lee Figure 1: Phase portraits of the Lienard equation: a) from [Brauer & Nohel 691 and [Sacks 901, and b) from QPORTRAIT. chatter [Kuipers & Chiu $71, and topologically equiv- alent paths (i.e. sets of mutually homotopic trajecto- ries) , are abstracted away plest representative path. and replaced bY their siiii- When one of the resulting trajectories contains a cyclic path in the envisiomnent graph, its qualita- tive description is refined through behavior genera- tion in order to determine possible trends of the cy- cle (spiral inward, spiral outward, and/or periodic). Envisionment-guided simulation [Clancy & Kuipers 921, the energy filter [FouchC & Kuipers 921, and cycle treud extraction [Lee93], are used for this task. Once cycle trends have been established, incomplete cyclic trajectory fragments can be conibined in all consistent ways with connecting fragments to form complete tra- jectories. Next, trajectories around sinks and sources are aua- lyzed, and spiral-nodal bundles are identified and ab- stracted. Each trajectory is checked to see that not all behavior refinements are provably inconsistent. Composing Portraits Trajectories gathered are first classified as either sep- aratrices, which connect to saddle points (and are bounding trajectories that divide the phase space into stability regions), and flows, which do not. At each saddle, QPORTRAIT composes all possible separatrix sets, each consisting of non-intersecting separatrices with exactly one entering the saddle in each of two opposite directions, and exactly one exiting in each of two opposite directions. The method for enforcing non-intersection of qualitative trajectories is described in [Lee & Kuipers 881. 
All possible non-intersecting combinations of separatrix sets between saddle points are then formed, and all possible non-intersecting flows are composed into each combination to form all possible qualitative phase portraits.

A Lienard Equation Example

A particular instance of the Lienard equation takes the form ([Brauer & Nohel 69] pp. 217):

    x'' + x' + x^2 + x = 0,

or equivalently:

    x' = y
    y' = -(x^2 + x) - y.

It has an interesting phase portrait, discussed in detail in [Brauer & Nohel 69] pp. 217-220, and used in [Sacks 90] as a main example. Its phase portrait (from [Brauer & Nohel 69] pp. 220) is as shown in Figure 1a. The portrait produced in [Sacks 90] has the same essential qualitative features. A QDE generalization of this equation has the x^2 + x term replaced by a U+ function5. QPORTRAIT is able to produce for this QDE the phase portrait in Figure 1b. This portrait has the same essential features as the one in Figure 1a, though ours is applicable to the QDE. We describe briefly below results of intermediate steps for arriving at this portrait.

Applying total envisionment, projecting the envisionment graph onto the phase plane, and removing states not giving rise to interesting distinctions give the envisionment graph in Figure 2a. The potential asymptotic limits are the fixed point at S-26 which is a saddle, the fixed point at S-27 which is a sink, the closed paths around S-27, the paths connecting S-26 to itself (which are separatrices connecting a saddle to itself), and the points at infinity, S-47 and S-57. They are automatically identified from the graph. Both fixed points are nondegenerate.

Trajectory gathering then proceeds progressively. Initially, paths emerging from points at infinity and fixed points are traced. This results in the paths shown in Figure 2b. Note that topologically equivalent paths are abstracted together. The cycle associated with trajectories 7 and 13 is then refined to extract its possible trends.
It is found to be inward spiraling, and is consistent with trajectories 7 and 13. Further processing of trajectories 7 and 13 produces trajectories that spiral into the sink in various manners.

Subsequently, when analyzing trajectories for spiral-nodal bundles, spiraling trajectories associated with 7, together with 5 and 6, are bundled. Also bundled are spiraling trajectories associated with 13, together with 11 and 12.

5 A U+ function is a QSIM [Kuipers 86] modeling primitive. Intuitively speaking, it is a 'U' shaped function consisting of a monotonically decreasing left segment and a monotonically increasing right segment, with (a, b) the bottom point.

Figure 2: Intermediate results from applying QPORTRAIT to a QDE generalization of a Lienard equation. a) Envisionment graph of the Lienard equation in the phase plane. b) Trajectories from initial gathering. Trajectories 7 and 13 are cyclic and incomplete. Trajectory 10 is a homoclinic orbit.

Trajectory 10 is a separatrix connecting a saddle to itself (a homoclinic orbit). It is a potential asymptotic limit, and is further processed for trajectories emerging from or terminating on it. Subsequent checking for consistent behavior refinements of trajectories, however, finds trajectory 10 to be inconsistent (violating energy constraints). Trajectory 10 and its associated trajectories are therefore eliminated. Trajectory 9 is also found to violate energy constraints and is eliminated. Trajectories resulting from gathering are 1 through 4, 8, and the two bundles. Of these, 3, 4, 8 and the bundle associated with 13 are separatrices. Composing separatrix sets, then phase portraits, gives the result in Figure 1b.
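For the particular instance x'' + x' + x^2 + x = 0, the energy argument that refutes the homoclinic orbit can be checked directly: with E = y^2/2 + x^3/3 + x^2/2, one has dE/dt = -y^2, so energy is strictly dissipated along any trajectory with y != 0 and a trajectory leaving the saddle can never return to it. A small Euler sketch (illustrative initial condition and step sizes):

```python
# Euler check that energy is dissipated along a trajectory of
# x'' + x' + x^2 + x = 0, ruling out a homoclinic return to the saddle.
def energy(x, y):
    return y * y / 2 + x ** 3 / 3 + x * x / 2

def trajectory(x, y, dt=1e-3, steps=20000):
    es = [energy(x, y)]
    for _ in range(steps):
        x, y = x + dt * y, y + dt * (-(x * x + x) - y)
        es.append(energy(x, y))
    return x, y, es

x, y, es = trajectory(-1.0, 0.2)   # leave the saddle at (-1, 0) with a push
print(es[-1] < es[0])              # → True: net energy loss
print(abs(x) < 0.05 and abs(y) < 0.05)  # → True: settled near the sink (0, 0)
```

This only confirms the specific instance numerically; QPORTRAIT's energy filter establishes the refutation for the whole QDE class.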
Discussions

While some phase portraits produced by QPORTRAIT may be spurious, and some may contain spurious trajectories, the set of portraits that remain consistent after spurious trajectories are removed is guaranteed to capture all real portraits of systems consistent with the given QDE. We have applied QPORTRAIT to obtain tractable results for a set of nontrivial examples to offer reasonable coverage of possible asymptotic limits of systems in our domain. Included are a Lienard equation, a van der Pol equation, an undamped pendulum, and a predator-prey system.

Guarantee of Coverage

Guaranteed coverage follows from the guarantees of the individual steps. First, qualitative simulation is guaranteed to predict all qualitatively distinct solutions. Second, possible asymptotic limits of trajectories are exhaustively identified for systems in our domain. Third, possible flows between asymptotic limits are exhaustively traced, eliminating only provably inconsistent flows. Fourth, in abstracting away uninteresting qualitative distinctions, asymptotic limits and flows are preserved. Fifth, all possible phase portrait compositions are exhaustively explored, eliminating only provably inconsistent compositions. Thus, given a QDE, QPORTRAIT is guaranteed to produce all qualitatively distinct phase portraits of it.

Dependencies and Limitations6

While QPORTRAIT is dependent on its supporting techniques, the dependency is in terms of tractability7. In other words, improvement in performance of the supporting techniques gives more tractable results, and

6 Aspects concerning construction of QDEs, determination of nondegeneracy of fixed points, and system bifurcation have been discussed when describing the steps of QPORTRAIT, and will not be repeated here.

7 The problem with tractability can be due to an intractable number of spurious predictions, or an intractable number of overly detailed distinctions, or both. Refer to [Lee 93] for further discussion.
the converse otherwise. Guarantee of coverage, however, is preserved regardless of the performance of the supporting techniques, though the guarantee becomes increasingly less useful as results become increasingly less tractable.

Although QPORTRAIT produces tractable results for the examples we have attempted, it would not be difficult to construct examples where intractability would result. No general characterization relating system properties to the potential for intractability has been developed. Nevertheless, knowledge of system fixed points helps produce tractable results, for example when fixed points are at landmark values for the phase variables.

QPORTRAIT's applicability is limited to autonomous two-dimensional systems with nondegenerate fixed points. Extending QPORTRAIT to apply to systems with degenerate fixed points would require incorporating knowledge of the asymptotic limits of such systems. While nonautonomous systems can be transformed into equivalent autonomous systems, systems of higher dimension will result. Extending QPORTRAIT to higher-dimensional systems will be difficult, largely because the qualitative non-intersection constraint [Lee & Kuipers 88] may not apply generally. Furthermore, trajectory flows and their asymptotic limits have more complicated structures in higher-dimensional systems, and are hard to characterize exhaustively.

Related Work

Various approaches based on numeric methods for reasoning in the phase space have recently emerged, including the work of Nishida et al., Sacks, Yip, and Zhao. These approaches work with exactly specified system instances to produce approximate solutions, and are able to draw qualitative conclusions from the underlying numerical results. Although each approach iterates in an attempt to capture all essential qualitative features, none guarantees coverage.

An early attempt to use qualitative simulation to construct phase portraits is the work of [Chiu 88].
Chiu was able to use the few then-available qualitative simulation techniques to perform complete analyses of various systems. Using his work as our foundation, we are able to take advantage of more recently developed techniques to perform more sophisticated reasoning, and to incorporate sufficient knowledge of dynamical systems to handle a significant class of systems.

Conclusion

We have developed a qualitative-simulation-based method to construct phase portraits of autonomous two-dimensional differential equations with nondegenerate fixed points. It has been implemented in the QPORTRAIT program. It has the attractive property that it is guaranteed to capture the essential qualitative features of all real phase portraits of systems consistent with an incomplete state of knowledge (a QDE). This complements the ability of approaches based on numeric methods to produce good approximate results for particular system instances.

618 Lee

While the potential for intractable results remains, we have demonstrated that QPORTRAIT is able to produce tractable results for nontrivial systems. In particular, results will be tractable when fixed points of the system are at landmark values for the phase variables.

Extending our approach to higher-dimensional systems will be hard, and would be a very significant contribution. It will need to proceed in smaller steps (covering a smaller class of systems at a time) due to the more complicated phase space structures of higher-dimensional systems. Integration with numeric methods, to combine the power of both approaches, appears to be a particularly attractive line of future work.

Despite a concern (notably in [Sacks & Doyle 92a] and [Sacks & Doyle 92b]) that qualitative simulation methods may not be useful for scientific and engineering reasoning, our work represents a significant step towards automated reasoning about differential equations, which are important to scientists and engineers.
Furthermore, our work is a demonstration that a dynamical systems (phase space) perspective can give a tractable overview of a qualitative simulation problem.

References

[Brauer & Nohel 69] Brauer, F. and Nohel, J. A., The Qualitative Theory of Ordinary Differential Equations, W. A. Benjamin, New York, 1969.

[Chiu 88] Chiu, C., Higher Order Derivative Constraints and a QSIM-Based Total Simulation Scheme, Technical Report AITR88-65, Department of Computer Sciences, University of Texas, Austin, TX, 1988.

[Clancy & Kuipers 92] Clancy, D. J. and Kuipers, B. J., Aggregating Behaviors and Tractable Simulation, in: AAAI Design from Physical Principles Fall Symposium Working Notes, pp 38-43, Cambridge, MA, 1992.

[Forbus 84] Forbus, K. D., Qualitative Process Theory, in: Artificial Intelligence 24, pp 85-168, 1984.

[Fouché 92] Fouché, P., Towards a Unified Framework for Qualitative Simulation, PhD Dissertation, Université de Technologie de Compiègne, France, 1992.

[Fouché & Kuipers 92] Fouché, P. and Kuipers, B. J., Reasoning about Energy in Qualitative Simulation, in: IEEE Transactions on Systems, Man and Cybernetics 22, pp 47-63, 1992.

[Guckenheimer & Holmes 83] Guckenheimer, J. and Holmes, P., Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Springer-Verlag, New York, 1983.

[Hirsch & Smale 74] Hirsch, M. W. and Smale, S., Differential Equations, Dynamical Systems, and Linear Algebra, Academic Press, New York, 1974.

[Kuipers 86] Kuipers, B. J., Qualitative Simulation, in: Artificial Intelligence 29, pp 289-338, 1986.

[Kuipers & Chiu 87] Kuipers, B. J. and Chiu, C., Taming Intractable Branching in Qualitative Simulation, in: Proceedings IJCAI-87, pp 1079-1085, Milan, Italy, 1987.

[Lee 93] Lee, W. W., A Qualitative Based Method to Construct Phase Portraits, PhD Dissertation, Department of Computer Sciences, University of Texas, Austin, TX, 1993.

[Lee & Kuipers 88] Lee, W. W. and Kuipers, B. J., Non-Intersection of Trajectories in Qualitative Phase Space: A Global Constraint for Qualitative Simulation, in: Proceedings AAAI-88, pp 286-290, Saint Paul, MN, 1988.

[Nishida & Doshita 91] Nishida, T. and Doshita, S., A Geometric Approach to Total Envisioning, in: Proceedings IJCAI-91, pp 1150-1155, Sydney, Australia, 1991.

[Nishida et al 91] Nishida, T., Mizutani, K., Kubota, A. and Doshita, S., Automated Phase Portrait Analysis by Integrating Qualitative and Quantitative Analysis, in: Proceedings AAAI-91, pp 811-816, Los Angeles, CA, 1991.

[Sacks 90] Sacks, E. P., Automatic Qualitative Analysis of Dynamic Systems using Piecewise Linear Approximations, in: Artificial Intelligence 41, pp 313-364, 1990.

[Sacks 91] Sacks, E. P., Automatic Analysis of One-Parameter Planar Ordinary Differential Equations by Intelligent Numerical Simulation, in: Artificial Intelligence 48, pp 27-56, 1991.

[Sacks & Doyle 92a] Sacks, E. P. and Doyle, J., Prolegomena to Any Future Qualitative Physics, in: Computational Intelligence 8, pp 187-209, 1992.

[Sacks & Doyle 92b] Sacks, E. P. and Doyle, J., Epilegomenon, in: Computational Intelligence 8, pp 326-335, 1992.

[Struss 88] Struss, P., Global Filters for Qualitative Behaviors, in: Proceedings AAAI-88, pp 275-279, Saint Paul, MN, 1988.

[Yip 88] Yip, K., Generating Global Behaviors using Deep Knowledge of Local Dynamics, in: Proceedings AAAI-88, pp 280-285, Saint Paul, MN, 1988.

[Yip 91] Yip, K., KAM: A System for Intelligently Guiding Numerical Experimentation by Computer, MIT Press, Cambridge, MA, 1991.

[Zhao 91] Zhao, F., Extracting and Representing Qualitative Behaviors of Complex Systems in Phase Spaces, in: Proceedings IJCAI-91, pp 1144-1149, Sydney, Australia, 1991.

Reasoning about Physical Systems 619
Understanding Linkages*

Howard E. Shrobe
Massachusetts Institute of Technology
NE43-839, Cambridge, MA 02139
hes@zermatt.lcs.mit.edu

Abstract

Mechanical linkages are used to transmit and transform motion. In this paper we investigate what it means for a program to "understand" a linkage. Our system extracts its understanding by analyzing the results of a numerical simulation of the mechanism, finding interesting qualitative features, looking for symbolic relationships between these features, and conjecturing a causal relationship between them. Our system is capable of understanding a variety of mechanisms, producing explanations very much like those in standard texts.

1 Motivation

Mechanical linkages are used to transmit and transform motion. They are a subset of the class of "fixed topology mechanisms": those consisting of rigid bodies in constant contact, with motion being transmitted through joints, gears and cams. In this paper we investigate how a system can "understand" a linkage, i.e. how it can

- Decompose the mechanism into understandable sub-mechanisms.
- Explain how the behavior of the whole arises from that of the parts.
- Assign a purpose to each of the components.
- Enable redesign by highlighting which interactions lead to the desired behavior.

Although the techniques in this paper apply to the broader class of fixed topology mechanisms, the running example in this paper will be a linkage with a single degree of freedom.

* This paper describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the author's artificial intelligence research is provided by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-91-J-4038.

The lengths of the links comply with the conditions: BC = 2AB, DC = 5.2AC, EC = 3.6AB, EF = 3.6AB, GF = 11.4AB, AD = 6AB, GD = 8.4AB and AG = 11AB.
Link 4 is connected by turning pairs E and F to link 2 of four-bar linkage ABCD and to link 5, which oscillates about fixed axis G. When point B of crank 1 travels along the part of the circle indicated by a heavy continuous line, point E of connecting rod 2 describes a path of which portion a-a approximates a circular arc of radius FE with its center at point F. During this period link 5 almost ceases to oscillate, i.e. it practically has a dwell.

Figure 1: A Dwell Mechanism and Its Explanation

Figure 1 shows a six-bar linkage^1 functioning as a dwell mechanism^2, with its explanation reproduced from [1]^3. (We have highlighted parts of this explanation.) This paper presents a system which can "understand" this linkage, producing an explanation like that of the figure.

Several observations about the explanation of figure 1 are worth emphasizing:

- The explanation is compositional. The behavior of the whole is derived by first decomposing the mechanism into modules and by then composing the behaviors of the modules into an aggregate behavior: the device consists of "four-bar linkage ABCD" driving the pair of links 4 and 5.^4 However, the decomposition stops before reaching the primitive elements (joints and links).

^1 For those not familiar with linkages, we note that the set of links 1, 2, and 3, together with the fixed frame, is a "four-bar linkage" (with joints A, B, C and D) and that the pair of links 4 and 5 (with joints F and G) is a "dyad". Link 2 is the "coupler" of the four-bar linkage; since point E is on the coupler, the curve it traces is called a "coupler curve". Four-bar linkages are extremely flexible driving mechanisms; they can create a large number of coupler curves exhibiting a broad variety of shapes. The shape of the curve is a function of the (relative) sizes of the links and the position on the coupler link used to trace the curve.
^2 A dwell mechanism is one in which some part moves (in this case oscillates) most of the time, but for some period of time stands still (i.e. dwells).

^3 In this picture, the links are drawn as bars, except that link 2 has a long finger projecting from it to point E, making it look like an inverted T. Circles are used to indicate the joints between the links. The "ground" symbols indicate that link AB is rigidly connected to the fixed frame and that joint G connects link 5 to the fixed frame.

^4 Such a pair of links is called a dyad.

620 Shrobe

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

- The explanation does not attempt to provide a mechanistic explanation of how the shape of the coupler curve of the four-bar linkage ABCD is related to the sizes of its links.

- A crucial component of the explanation is a characterization of the qualitative shape of the curves traced by "interesting points" in the mechanism: "point E (of link 2) ... traces a path of which portion a-a approximates a circular arc of radius FE with its center at point F".

- Although the explanation does not emphasize local causal propagations of the type made popular in [2; 4; 11; 12], it does have a causal flavor at a relatively high level of abstraction: "the shape of the coupler curve causes link 5 to have a dwell".

The system described in this paper is capable of producing such an explanation. Our approach is as follows:

1. We numerically simulate the mechanism at a single time step.

2. The simulator is driven by geometric constraints. While satisfying these constraints, the simulator records its inferences as a "mechanism graph" showing how motion propagates from link to link through the joints of the mechanism.

3. The mechanism graph is "parsed" into a more structured form which decomposes the system into driving and driven modules. To the extent possible, the parsed graph consists of standard building blocks (e.g.
four-bar linkages, dyads). The system knows which parameters of the standard building blocks are significant, and these are identified as important parameters. The coupling points between the driving and driven modules are also identified as important parameters. A complete simulation of the linkage is then run, stepping the mechanism through its full range of positions (in our example this amounts to spinning link 1 through a full 360 degrees and, for each step, determining the positions and orientations of all the remaining components). During this simulation, the values of all important parameters (including the trajectories of the points connecting driving and driven modules) are recorded.

4. The shapes of the captured curves are analyzed and qualitative features extracted.

5. Qualitative relationships between these features are derived and accounted for by geometric reasoning.

Section 2 describes the simulator and how it supports the rest of this process. Section 3 then examines the process of mechanism extraction, and section 4 describes curve characterization. Section 5 shows how these facilities work together to construct an explanation of the mechanism. Section 6 discusses to what degree the interpretation produced is an adequate "understanding" of the mechanism. Finally, in section 7 we compare our work with other work on understanding mechanisms.

2 The Simulator

Our simulator is based on Kramer's TLA [10]. However, since our work (at least for now) only involves planar mechanisms, we have simplified TLA to a 2-D simulator. We have also extended the geometric solution techniques to handle gears and cams as well as pure linkages.

Figure 2: The Joints Modeled in the System (revolute, pin in slot, prismatic, meshing gears, cam and follower)

2.1 Basic Object Types

The simulator is at its core a geometric constraint engine. This engine reasons about the following physical objects:

Links: These are rigid bodies connected by joints. All links are assumed to be aligned in parallel planes.
Each link has its own local coordinate system. Each link also has a transformation matrix mapping its coordinate system into the global coordinate system. (We will often refer to the global coordinate system as the "fixed frame".)

Joints: A joint is a fixed connection between two links which couples their motion. We handle the following joint types (see figure 2):

1. Revolute: The two links are connected at a single point; they rotate relative to each other about this point. A hinge is a familiar example. All the joints in our example are revolute joints; these are sometimes called "turning pairs".

2. Pin in Slot: A round "finger" from the first link slides in a guide path in the second link. The first link can translate along the direction of the slot; it can rotate relative to the second link as well. The guide track of a folding door is an example.

3. Prismatic: The first link slides along the second link, but is not free to rotate relative to it. A piston in its cylinder is a familiar example.

4. Gears: The two links are spur gears coupled by the meshing of their teeth. This includes planetary as well as fixed gears.

5. Cams: One link is an irregularly shaped rotational device; the other is constrained to maintain contact with the perimeter of the rotating cam.

Links and joints are modeled by reducing their behavior to the following computational constructs:

Markers: Each marker is associated with a specific link (although each link may have several markers). A marker has two components specified in the local coordinate system of its link: a point and an orientation. A marker can be thought of as a line extending from the point in the direction specified by the orientation. The simulator may restrict a marker to occupy a specific position or to have a specific orientation in the global coordinate system. If this has occurred, we say that the marker has invariant position or orientation.
Constraints: Constraints are the mechanism used to build a computational model of joints. Each joint is modeled as a bundle of constraints. A constraint is imposed between two links by relating two markers, one from each link. Our simulator has the following constraint types:

1. Coincident: The two markers are forced to be at the same location in the global coordinate system. A revolute joint is modeled as a single coincident constraint.

2. Inline: The location of the first marker is on the line described by the second marker. A pin-in-slot joint is modeled by a single inline constraint.

3. Cooriented: The two markers' orientations are forced to be the same in the global coordinate system. A prismatic joint is modeled as a combination of a cooriented and an inline constraint.

4. Rotational Multiplication: Used to model gears. The angular deflection of the first marker from its initial position is a constant (the gear ratio) times the deflection of the second marker from its initial position.

5. Perimeter Contact: Used to model cams. The marker on the follower is constrained to be in contact with the perimeter of the cam link.

Anchors: An anchor is a distinguished type of marker attached to the global coordinate system rather than to a link. Constraints between anchors and markers on links are used to orient or set the position of a link in the global coordinate system. Input variables are supplied to the system as the position or orientation of an anchor; typically, the anchor controls the position or orientation of a link via a constraint to one of the link's markers.

2.2 The Constraint Engine

As in Kramer's TLA, the constraint engine solves the geometric constraints using local geometric techniques. These techniques take the form of "constraint" and "locus intersection" methods (described below).
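The paper gives no code for these constructs. As a minimal illustrative sketch (field and class names are our own, not the system's), a link carries a pose and its remaining degrees of freedom, and a marker is mapped through that pose into the global frame:

```python
import math
from dataclasses import dataclass

@dataclass
class Link:
    """A rigid body: a pose mapping its local frame into the global
    frame, plus the degrees of freedom it still has."""
    name: str
    x: float = 0.0       # global position of the local origin
    y: float = 0.0
    theta: float = 0.0   # global orientation, radians
    dof_trans: int = 2   # remaining translational DOFs
    dof_rot: int = 1     # remaining rotational DOFs

@dataclass
class Marker:
    """A point and an orientation fixed in a link's local frame."""
    link: Link
    px: float
    py: float
    orient: float = 0.0

    def global_position(self):
        c, s = math.cos(self.link.theta), math.sin(self.link.theta)
        return (self.link.x + c * self.px - s * self.py,
                self.link.y + s * self.px + c * self.py)

    def global_orientation(self):
        return self.link.theta + self.orient

link2 = Link("link2", x=1.0, y=0.0, theta=math.pi / 2)
m = Marker(link2, px=2.0, py=0.0)
print(m.global_position())  # approximately (1.0, 2.0): local x now points along +y
```

A marker's "invariant" flags and the constraint bundles would sit on top of this representation; the real simulator's 2-D specialization of TLA is richer than this sketch.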
As the constraint engine runs, it monitors the degrees of freedom remaining to each link; it also records for each marker whether its global position and orientation are invariant. As the links' degrees of freedom are reduced and the markers' orientations and positions become invariant, the geometric methods are triggered. Each method moves or rotates a link to satisfy a constraint, further reducing the degrees of freedom available to the link; each method may also cause the global position or orientation of some marker to become invariant. This will trigger other methods. The process terminates when all degrees of freedom are removed.

The constraint methods are triggered when the location or orientation of a marker becomes invariant. A method's triggering pattern contains the invariant marker, a constraint coupling the invariant marker to some marker on a link, the type of the constraint, and the degrees of translational and rotational freedom remaining to the link. When a constraint method is triggered, it translates or rotates (or both) the link to satisfy the constraint. In doing so it reduces the degrees of freedom available to the link; it also causes the global orientation or position of the marker on the link to become invariant. When a link is reduced to 0 degrees of rotational freedom, the orientation of every marker on it becomes invariant in the global coordinate system; when a link is reduced to 0 degrees of both rotational and translational freedom, the position of every marker on the link becomes invariant in the global coordinate system.

If   There is a coincident constraint between M-1 and M-2
     M-1 has invariant global position
     M-2 is on link L-2
     L-2 has 2 degrees of translational freedom
Then Measure the vector from M-2 to M-1
     Translate L-2 by this vector
     Reduce the translational degrees of freedom of L-2 to 0
     Constrain M-2 to have invariant global position

Figure 3: A Constraint Method for the Coincident Constraint
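The rule above is pseudocode. The same step can be sketched as runnable Python, using a plain dict for the link's pose and DOF counts (the representation is ours, not the paper's):

```python
import math

def global_pos(link, local_xy):
    """Map a marker's local coordinates through its link's pose."""
    c, s = math.cos(link["theta"]), math.sin(link["theta"])
    lx, ly = local_xy
    return (link["x"] + c * lx - s * ly, link["y"] + s * lx + c * ly)

def solve_coincident(target_xy, link, marker_local):
    """Sketch of Figure 3's constraint method: the other marker of the
    coincident constraint has invariant global position target_xy, and
    `link` still has both translational degrees of freedom.  Translate
    the link by the gap vector and record that translation is fixed."""
    assert link["dof_trans"] == 2, "method only fires with 2 translational DOFs"
    mx, my = global_pos(link, marker_local)
    link["x"] += target_xy[0] - mx   # measure the vector M-2 -> M-1 ...
    link["y"] += target_xy[1] - my   # ... and translate the link by it
    link["dof_trans"] = 0            # translational freedom is used up
    return link

link2 = {"x": 0.0, "y": 0.0, "theta": 0.0, "dof_trans": 2}
solve_coincident((3.0, 4.0), link2, marker_local=(1.0, 0.0))
print(global_pos(link2, (1.0, 0.0)))  # the marker now sits at (3.0, 4.0)
```

After the method fires, the marker's position would be marked invariant, which is what allows further methods (and the locus intersection methods below) to trigger.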
Figure 3 shows a constraint method for the coincident constraint used to model a revolute joint.

Locus intersection methods are used after the constraint methods. When a link's degrees of freedom have been sufficiently reduced, the markers on the link are constrained to move in simple curves. For example, if a link has 0 degrees of translational freedom and 1 degree of rotational freedom, then every marker on the link (except the one about which the link rotates) is constrained to move in a circle. If two markers coupled by a constraint are both restricted to move in simple curves, then there are only a small number of locations that the markers can consistently occupy. For example, if two markers coupled by a coincident constraint are both restricted to move in circles, then the markers must be located at one of the two intersection points of the circles. In simulating the linkage of figure 1, driving link 1 is rotated into its desired position, fixing the position of B; this means that link 2 is allowed only to rotate about B. Similarly, the position of D is fixed, so link 3 may only rotate about D. C must, therefore, be at an intersection point of the circular paths allowed to the ends of links 2 and 3.^{5,6}

2.3 Animating a Linkage

The motion of a linkage can be simulated by repeatedly incrementing the position or orientation of the driving link and allowing the constraint engine to determine the correct

^5 Notice that locus intersection methods lead to ambiguous results, since two circles may intersect at more than one point; the simulator must choose between the geometrically allowable results using physical principles such as continuity of motion.

^6 When there are no further constraint methods or locus intersection methods to be employed but the constraints have not been solved, an iterative numerical solution technique is employed.
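The circle-circle locus intersection, together with the continuity-of-motion choice mentioned in footnote 5, can be sketched as follows (illustrative Python; the link lengths are our own, not those of Figure 1):

```python
import math

def circle_circle(c1, r1, c2, r2):
    """Both candidate intersection points of two circles (the ambiguity
    the simulator resolves by continuity); returns [] if they miss."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from c1 to the chord
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # half the chord length
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ox, oy = -(y2 - y1) * h / d, (x2 - x1) * h / d
    return [(mx + ox, my + oy), (mx - ox, my - oy)]

def pick_by_continuity(candidates, previous):
    """Choose the geometrically allowable point closest to the joint's
    last position, mimicking 'continuity of motion'."""
    return min(candidates, key=lambda p: math.hypot(p[0] - previous[0],
                                                    p[1] - previous[1]))

# Joint C must lie on both circles: around B = (0,0) with radius BC = 3,
# and around D = (4,0) with radius DC = 3 (illustrative numbers).
pts = circle_circle((0.0, 0.0), 3.0, (4.0, 0.0), 3.0)
print(pick_by_continuity(pts, previous=(2.0, 2.0)))  # picks the upper point
```

At each simulation step the previous position of C breaks the two-way ambiguity, which is exactly why the ambiguity never surfaces in the animation loop described next.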
To save space and maintain continuity of presentation we omit the details; the examples in this paper never require iterative techniques.

locations and orientations for the other links. A simple animation can be produced by showing successive snapshots.

The simulator can attach "probes" to any marker in the mechanism; these record the position and/or orientation of the marker at each time step of the simulation. Thus, a probe captures the complete trajectory of a marker (e.g. the trajectory of point E in figure 1) or the history of values of some property of a marker (e.g. the global orientation of marker G, which is the same as the angular deflection of link 5). This information is used later in analyzing the mechanism; see section 4.

2.4 Building the Mechanism Graph

The simulator can record its deductions using a truth maintenance facility. The simulator maintains in each link a special data structure called a link-state-entry. This contains the number of degrees of rotational and translational freedom available to the link at that point in the process of satisfying the geometric constraints.

When a constraint method updates the state of a link, it creates a new link-state-entry for the link. It also creates a justification. The antecedents of the justification are the current link-state-entries of the links coupled by the constraint; the consequent of the justification is the new link-state-entry for the affected link. The justification also records the constraint which caused the update. The link-state-entries may be thought of as the nodes of a truth maintenance graph, and the justifications as directed arcs from the old link-state-entries to the new one. Locus intersection methods also create new link-state-entries and special justifications connecting them. The resulting graph records the steps of the process of satisfying the geometric constraints by moving (or rotating) the links while reducing their degrees of freedom.
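A skeletal version of this recording scheme can be sketched as follows (the data structure names are ours; the real system uses a full truth maintenance facility rather than this toy):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LinkState:
    """A link-state-entry: the DOFs remaining to one link at one step."""
    link: str
    dof_trans: int
    dof_rot: int

@dataclass
class Justification:
    """Arc from antecedent link-states to a new consequent state,
    labeled with the constraint whose method caused the update."""
    constraint: str
    antecedents: tuple
    consequent: LinkState

class MechanismGraph:
    def __init__(self):
        self.current = {}           # link name -> its latest LinkState
        self.justifications = []    # the recorded solution steps

    def record(self, constraint, coupled, affected, dof_trans, dof_rot):
        new = LinkState(affected, dof_trans, dof_rot)
        ante = tuple(self.current[l] for l in coupled if l in self.current)
        self.justifications.append(Justification(constraint, ante, new))
        self.current[affected] = new
        return new

g = MechanismGraph()
g.record("anchor at A", coupled=[], affected="link1", dof_trans=0, dof_rot=0)
g.record("coincident at B", coupled=["link1"], affected="link2",
         dof_trans=0, dof_rot=1)
print(len(g.justifications), g.current["link2"])
```

The resulting justification chain is exactly the structure the next section parses: each arc says which constraint reduced which link's freedom, and from which prior states.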
This structure is similar to the "mechanism graph" of [?].

3 Mechanism Extraction

The first step in understanding the linkage mechanism shown in figure 1 is mechanism extraction, in which the assembly is decomposed into sub-assemblies and the relationship between driving and driven components is established. The input to this process is the mechanism graph produced by the simulator.

Mechanisms are identified as patterns of constraint solution within the mechanism graph; the patterns are identified by parsing rules like those shown in figure 4. The parsing rules build up a hierarchy of sub-modules. For the linkage of figure 1, the first rule characterizes link 1 as a crank; the third rule characterizes links 2 & 3 as a dyad. The second rule then notices that crank 1 drives the dyad formed by links 2 & 3. The last rule characterizes links 1, 2 & 3, together with the fixed frame, as a four-bar linkage. Next, links 4 & 5 are characterized as another dyad, which is driven by marker E on the coupler of the four-bar linkage.

The rules shown in figure 4 cover most uses of four-bar linkages. Pantographs, scotch yokes, planetary gear sets, slider-cranks, etc. are identified by similar sets of rules. The structure produced by the parsing rules is used to identify points of interest in the mechanism.
If   Marker M-1 is on Link-1
     A-1 is an anchor and C-1 is a coincident constraint between M-1 and A-1
     The position of M-1 is determined by satisfying C-1
     A-2 is an anchor providing an input parameter
     C-2 is a cooriented constraint coupling A-2 and M-1
     The orientation of M-1 is determined by satisfying C-2
Then Link-1 is acting as a crank

If   Marker M-1 is on Link-1
     There is a constraint C between M-1 and M-2
     M-2 is on Link-2
     The position of M-2 is determined by satisfying C
Then Link-2 is driven by Link-1

If   M-0 and M-1 are on Link-1
     M-2 and M-3 are on Link-2
     The positions of M-1 and M-2 are determined by a circle-circle locus method
     A-2 is an anchor coupled by a coincident constraint C-2 to M-3
     The position of M-3 is determined by satisfying C-2
     M-4 is a marker coupled to M-0 by a coincident constraint C-1
     The position of M-0 is determined by satisfying C-1
Then Links 1 and 2 form a Dyad Dyad-1
     Link-1 is the coupler of Dyad-1
     Link-2 is the rocker of Dyad-1

If   Dyad-1 is a Dyad
     C-1 is the coupler of Dyad-1
     R-1 is the rocker of Dyad-1
     Crank-1 is acting as a crank
     C-1 is driven by Crank-1
Then C-1, R-1 and Crank-1 form a four-bar linkage Four-bar-1
     Crank-1 is the crank of Four-bar-1
     C-1 of Dyad-1 is the coupler of Four-bar-1
     R-1 of Dyad-1 is the rocker of Four-bar-1

Figure 4: Rules for Parsing a Mechanism Graph

Part of what the system knows about each type of module is which points in the module are likely to play "interesting" roles in the larger mechanism. In particular, the system knows that the trajectories of coupler points of four-bar linkages are usually interesting, particularly if the coupler point drives another identifiable mechanism. Also, the system knows that the deflection angle of the rocker arm of a driven dyad is interesting.

At this point, mechanism extraction has parsed the linkage into 2 sub-assemblies (a four-bar linkage and a dyad) and established a driver-driven relationship between them (the dyad is driven by a coupler point on the four-bar).
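The flavor of this parsing pass can be sketched in a few lines of Python. The fact vocabulary below is our own, drastically reduced from the paper's rule language, but the control pattern (rules matching recorded solution steps and building up a module hierarchy) is the same:

```python
def parse_mechanism(facts):
    """Toy version of the Figure 4 parsing pass.  `facts` are tuples the
    simulator would have recorded while satisfying constraints."""
    modules = []
    for f in facts:                      # crank rule
        if f[0] == "anchored-input":
            modules.append(("crank", f[1]))
    for f in facts:                      # dyad rule (circle-circle locus)
        if f[0] == "circle-circle-locus":
            modules.append(("dyad", f[1], f[2]))   # coupler, rocker
    cranks = {m[1] for m in modules if m[0] == "crank"}
    for m in list(modules):              # four-bar rule: crank drives coupler
        if m[0] == "dyad":
            for f in facts:
                if f[0] == "drives" and f[1] in cranks and f[2] == m[1]:
                    modules.append(("four-bar", f[1], m[1], m[2]))
    return modules

facts = [("anchored-input", "link1"),           # link1 posed from anchors
         ("circle-circle-locus", "link2", "link3"),
         ("drives", "link1", "link2")]
for mod in parse_mechanism(facts):
    print(mod)
```

On these three facts the sketch recovers a crank, a dyad, and the four-bar built from them, mirroring the decomposition the paper reports for links 1-3 of Figure 1.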
However, the overall behavior of the mechanism depends on a specific feature of the shape of the curve traced by point E (it is a circular arc).

4 Curve Characterization

The next step of the analysis is to capture the relevant curves and to characterize their shapes. This is done by running a complete simulation of the linkage (i.e. by stepping the driving link through its complete range of motion); during this simulation, probes are attached to those points identified as interesting by the mechanism extraction: the coupler curve traced by point E and the angle of the rocker arm 5. The following analyses are then performed:

- For each graph, the extrema of values are located (by finding the zero crossings of the first derivatives).

- For each trajectory traced, the system calculates the "Theta-S" representation, which maps distance along the trajectory to the orientation at that point on the trajectory (in this representation a circular arc on the original curve appears as a sloped straight line, and a straight line in the original curve appears as a horizontal line).

- For each graph (other than traces of the trajectory of a point, but including the Theta-S curve for such a trajectory), a segmentation into linear approximations is performed using the "Split-Merge" technique.

- For each trajectory traced, the segmentation of the Theta-S representation is mapped back into a segmentation of the original curve. This segmentation approximates the original curve with linear and circular segments.

- For each trajectory traced, the system calculates the radius and centers of curvature at each point.

- For each trajectory traced, the system calculates the points of self intersection.

- For each graph, segments of constant value are located.

- Fourier transforms of graphs are calculated if there is reason to suspect that periodic motion is present.

Figure 5: Curvature of the Coupler Curve and Angle of the Rocker Arm

Figure 5 shows the coupler curve of the dwell mechanism of Figure 1. The segmentation of the curve is indicated by "hatch marks"; the approximation of the curve by straight lines and circular arc segments is shown by dashed lines. Dots along the curve with numbers attached indicate the value, in radians, of the driving parameter (the angle of the crank, link 1). Also shown is the Theta-S representation of this curve with its segmentation. The horizontal axis is the distance along the curve normalized to 2π; the vertical axis is the orientation (in radians) at that position on the curve. Finally, the figure shows the angle (in radians) of the rocker arm of the dyad (link 5) plotted against the driving parameter, subjected to the same analyses.

Note that the coupler curve is well approximated by a circular arc (between about 1.1 and 4.0 radians of the driving parameter). Also note that the rocker arm's orientation is very nearly constant between about 1.0 and 4.1 radians of the driving parameter (the vertical scale of the graph is much larger than the horizontal scale, which obscures this fact).
It must then attempt to explain these relationships through geometric reasoning. In particular, the system notes that:

- The radius of curvature of the circular segment traced by point E, the coupler point of the four-bar linkage, is nearly equal to the length of link 4, the coupler arm of the dyad.
- The distance from the fixed end of link 5, the rocker of the dyad, to the center of curvature of the circular arc traced by point E is nearly equal to the length of link 5, the rocker of the dyad.
- There is a substantial overlap between the period during which the coupler of the four-bar traces the circular arc and the period during which the rocker arm's angle holds steady.

Having found these overlaps, the system conjectures that the dyad has a dwell period which is caused by the coupler arm moving through a circular arc whose radius of curvature is the same as the length of the driven arm of the dyad and whose center of curvature is at the location occupied by the dyad's joint when the circular arc is entered. Notice that this conjecture does not itself refer to any specific metric information from the simulation. If we can support the conjecture with reasoning which also does not depend on metric information specific to this linkage, then we will have deduced a universal principle applicable to a broader class of devices.

The final step is to use geometric reasoning to support the conjecture. The geometric knowledge needed to support the conjecture is very basic:

- Two circles intersect in at most two points.
- The center of a circle (the center of curvature of a circular arc) is the unique point equidistant (by the radius) from more than two points on the circle.

The reasoning supporting the conjecture is quite simple (and we omit it for brevity). It completes the interpretation of the mechanism and uses no metric information from the simulation but only qualitative shape features of the curves and symbolic relationships between joint positions.
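The two near-equalities the system notices can be checked numerically once the arc's center and radius of curvature are recovered. A hedged sketch (the helper names, tolerance, and three-point circle fit are illustrative, not the paper's method):

```python
import math

def circle_through(p1, p2, p3):
    """Center and radius of the circle through three points, via the
    standard perpendicular-bisector formulas.  Recovers the center and
    radius of curvature of an arc segment from sampled points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return (ux, uy), math.hypot(x1 - ux, y1 - uy)

def supports_dwell(arc_pts, link_len, pivot, rel_tol=0.05):
    """Check the two conjectured near-equalities: the arc's radius of
    curvature matches the driven link's length, and the arc's center of
    curvature lies one link length from the rocker's fixed pivot.
    (Hypothetical helper; tolerances are illustrative.)"""
    center, r = circle_through(arc_pts[0],
                               arc_pts[len(arc_pts) // 2],
                               arc_pts[-1])
    radius_ok = math.isclose(r, link_len, rel_tol=rel_tol)
    pivot_ok = math.isclose(math.hypot(center[0] - pivot[0],
                                       center[1] - pivot[1]),
                            link_len, rel_tol=rel_tol)
    return radius_ok and pivot_ok
```

When both checks pass over the same driving-parameter interval in which the rocker angle holds steady, the dwell conjecture is supported for this linkage.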
Any other mechanism satisfying these symbolic relationships will have the same behavior. General information has been extracted from the simulation of a specific device.

624 Shrobe

An understanding of a mechanism should:

- Decompose the mechanism into understandable sub-mechanisms.
- Explain how the behavior of the whole arises from that of the parts.
- Assign a purpose to each of the components.
- Enable redesign by highlighting what interactions lead to the desired behavior.

Our explanation of the dwell mechanism meets all these criteria. It decomposes the linkage into two well-known sub-linkages and explains how the shape of the coupler curve causes the dyad to dwell. The two sub-linkages have well understood purposes.

We also claim that this explanation of the mechanism enables redesign. Although our system is not a redesign system, we claim that a redesign system could use the kind of information we generate. In particular, it is clear that the four-bar linkage could be replaced by any other mechanism which generates a curve with a similar circular arc segment.

We have run our system on several mechanisms from [1] and several other source books of mechanisms. We handle multiple-dwell mechanisms, frequency multipliers, quick returns, and a variety of stand-alone uses of four-bar linkages. The modules understood by the system include planetary gears, scotch-yokes, pantographs, dyads, cams, four-bar linkages, slider-cranks, etc.

Approaches

There have been other projects on understanding kinematic mechanisms (e.g., [5; 3; 8; 6]). These have been concerned mainly with determining when state transitions occur, typically when contact between bodies is established and broken. Although this is an important and difficult issue in the general case, it does not occur in the domain of linkages (or, more generally, fixed-topology mechanisms).
Our central concern is deriving qualitative features of the shapes of curves generated by driving mechanisms, and this is quite different from the concerns of these systems. With the exception of [6; 8], most of these systems are based on qualitative simulation. One system [9] attempts to apply qualitative simulation to linkages. Kim's system conducts a form of envisioning of the behavior of a four-bar linkage. However, the system as described does not predict the shape of coupler curves, nor does it deal with more complex systems which use four-bar linkages as driving mechanisms.

The shape of a coupler curve is governed by highly non-linear equations (it is a sixth-degree curve). [7] points out the difficulty of relating link sizes to coupler curve shapes and catalogues several thousand coupler curves as a service to designers. Because the equations are highly non-linear, it is unlikely that qualitative simulation can derive the shape properties of coupler curves.

8 Summary

We have shown a system that can "understand" linkages (and other fixed-topology devices), producing an explanation very similar to that given in textbooks on mechanical design. Our system begins with numerical simulation, extracting from this a mechanism graph. The graph is then parsed into familiar modules bearing a driver-driven relationship to one another. This identifies interesting points in the mechanism whose trajectories are extracted and qualitatively characterized. Symbolic relationships between curve features are then noticed and used to generate conjectures about the functioning of the mechanism. Finally, geometric reasoning is used to support the conjecture, establishing the qualitative conditions which must obtain for the observed behavior to result. This process has been shown to extract general design principles from specific mechanisms.

References

[1] I.I. Artobolevsky. Mechanisms in Modern Engineering Design.
MIR Publishers, Moscow, 1975.

[2] Johan de Kleer. Causal and teleological reasoning in circuit recognition. Technical Report TR-529, Massachusetts Institute of Technology, AI Lab., Cambridge, Mass., September 1979.

[3] Boi Faltings. A theory of qualitative kinematics in mechanisms. Technical Report UILU-ENG-86-1729, University of Illinois at Urbana-Champaign, Urbana, Illinois, May 1986.

[4] Kenneth D. Forbus. Qualitative process theory. Technical Report TR-789, Massachusetts Institute of Technology, AI Lab., Cambridge, Mass., July 1981.

[5] Kenneth D. Forbus, Paul Nielsen, and Boi Faltings. Qualitative kinematics: A framework. Technical Report UILU-ENG-87-1739, University of Illinois at Urbana-Champaign, Urbana, Illinois, June 1987.

[6] Andrew Gelsey. The use of intelligently controlled simulation to predict a machine's long-term behavior. In Proceedings of the National Conference on Artificial Intelligence, pages 880-887. AAAI, 1991.

[7] J.A. Hrones and G.L. Nelson. Analysis of the Four-bar Linkage. MIT Press and John Wiley & Sons, New York, 1951.

[8] L. Joskowicz and E.P. Sacks. Computational kinematics. Technical Report CS-TR-300-90, Princeton University, Princeton, N.J., April 1990.

[9] Hyun-Kyung Kim. Qualitative kinematics of linkages. Technical Report UILU-ENG-90-1742, University of Illinois at Urbana-Champaign, Urbana, Illinois, May 1990.

[10] Glenn A. Kramer. Solving geometric constraint systems. In Proceedings of the National Conference on Artificial Intelligence, pages 708-714. AAAI, 1990.

[11] Benjamin J. Kuipers. Qualitative simulation. Artificial Intelligence, 29:289-338, 1986.

[12] Brian Williams. Qualitative analysis of MOS circuits. Artificial Intelligence, 24:281-346, 1984.
CFRL: A Language for Specifying the Causal Functionality of Engineered Devices

Marcos Vescovi, Yumi Iwasaki, Richard Fikes
Knowledge Systems Laboratory, Stanford University
701 Welch Road, Bldg C, Palo Alto, CA 94304
vescovi,iwasaki,fikes@ksl.stanford.edu

B. Chandrasekaran
Laboratory for AI Research, The Ohio State University
217 B, Bolz Hall, 2036 Neil Avenue, Columbus, OH 43210-1277
chandra@cis.ohio-state.edu

* The research by the first three authors is supported in part by the Advanced Research Projects Agency, ARPA Order 8607, monitored by NASA Ames Research Center under grant NAG 2-581, and by NASA Ames Research Center under grant NCC 2-537. Chandrasekaran's research is supported by the Advanced Research Projects Agency by means of AFOSR contract F-49620-89-C-0110 and AFOSR grant 89-0250.

Abstract*

Introduction

Understanding the design of an engineered device requires both knowledge of the general physical principles that determine the behavior of the device and knowledge of what the device is intended to do (i.e., its functional specification). However, the majority of work in model-based reasoning about device behavior has focused on modeling a device in terms of general physical principles or intended functionality, but not both. For example, most of the work in qualitative physics has been concerned with predicting the behavior of a device given its physical structure and knowledge of general physical principles. In that work, great importance has been placed on preventing a pre-conceived notion of an intended function of the device from influencing the system's reasoning methods and representation of physical principles, in order to guarantee a high level of "objective truth" in the predicted behavior.
In contrast, in their work based on the FR (Functional Representation) language (Sembugamoorthy & Chandrasekaran 1986) (Keuneke 1986), Chandrasekaran and his colleagues have focused mostly on modeling a device in terms of what the device is intended to do and how those intentions are to be accomplished through causal interactions among components of the device. Both types of knowledge, functional and behavioral, seem to be indispensable in fully understanding a device design. On the one hand, knowledge of intended function alone does not enable one to reason about what a device might do when it is placed in an unexpected condition or to infer the behavior of an unfamiliar device from its structure. On the other hand, knowledge of device structure and general physical principles may allow one to predict how the device will behave under a given condition, but without knowledge of the intended functions, it is impossible to determine if the predicted behavior is a desirable one, or what aspect of the behavior is significant. In order to use both functional and behavioral knowledge in understanding a device design, it is crucial that the functional knowledge is represented in such a way that it has a clear interpretation in terms of actual behavior. Suppose, for example, that the function of a charge current controller is to prevent damage to a battery by cutting off the charge current when the battery is fully charged. To be able to determine whether this function is actually accomplished by an observed behavior of the device, the representation of the function must specify conditions that can be evaluated against the behavior. Such conditions might include occurrence of a temporal sequence of expected events and causal relations among the events and the components. Without a clear semantics given to a representation of functions in terms of actual behavior, it would be impossible to evaluate a design based on its predicted behavior and intended functions. 
626 Vescovi From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

While it is important for a functional specification to have a clear interpretation in terms of actual behavior, it is also desirable for the language for specifying functions to be independent of any particular system used for simulation. Though there are a number of alternative methods for predicting behavior, such as numerical simulation with discrete time steps or qualitative simulation, a functional specification at some abstract level should be intuitively understandable without specifying a particular simulation mechanism. If a functional specification language were dependent on a specific simulation language or mechanism, a separate functional specification language would be needed for each different simulation language, which is clearly undesirable. What is needed is a functional specification language that has sufficient expressive power to support descriptions of the desired functions of a variety of devices. At the same time, the language should be clear enough so that, for each simulation mechanism used, it can be given an unambiguous interpretation in terms of a simulated behavior.

An essential element in the description of a function is causality. In order to say that a device has achieved a function, which may be expressed as a condition on the state of the world, one must show not only that the condition is satisfied but also that the device has participated in the causal process that has brought about the condition. For example, when an engineer designs a thermostat to keep room temperature constant, the design embodies her idea about how the device is to work. In fact, the essential part of her knowledge of its function is the expected causal chain of events in which it will take part in achieving the goal. Thus, a representational formalism for functions must provide a means of expressing knowledge about such causal processes.

We have developed a new representational formalism for representing device functions called CFRL (Causal Functional Representation Language) that allows functions to be expressed in terms of expected causal chains of events. We have also provided the language with a well-defined semantics in terms of the type of behavior representation widely used in model-based, qualitative simulation. Finally, we have used CFRL as the basis for a functional verification program which determines whether a behavior achieves an intended function.

The conditions of a model fragment specify a set of instances of object classes that must exist and a set of relations that must hold among those objects and their attributes for the phenomenon to occur. The consequences specify the functional relations the phenomenon will cause to hold among the objects and their attributes. Model fragments can represent phenomena as occurring continuously while the fragment's conditions hold, or as events that occur instantaneously when the conditions become true. The consequences of a model fragment that represents an event are facts to be asserted resulting from the event, whereas the consequences of a model fragment that represents a continuous process are sentences (e.g., ordinary differential equations) which are true while the phenomenon is occurring. When there exists at time t a set of objects represented by model fragments mi to mj that satisfy the conditions of a model fragment mg, we say that an instance of mg is active at that time. We will call mi through mj the participants of the mg instance.
Representation of physical knowledge in terms of model fragments is a generalization of the representation of physical processes and individuals in Qualitative Process Theory (Forbus 1984). There are several systems, including the Device Modeling Environment (DME) (Iwasaki & Low 1991), the Qualitative Process Engine (QPE) (Forbus 1989), and the Qualitative Process Compiler (QPC) (Crawford, Farquhar & Kuipers), that use similar representations for physical knowledge to predict the behavior of physical devices over time. Though the ways these systems actually perform prediction differ, the basic idea behind all of them is the following: For a given situation, the system identifies active model fragment instances by evaluating their conditions. The active instances give rise to equations representing the functional relations that must hold among variables as a consequence of the phenomena taking place. The equations are then used to determine the next state into which the device must move.

This paper is organized as follows: We first describe the representation of behavior over time in terms of which the semantics of CFRL will be defined and our assumptions about the modeling and simulation schemes that produce such a behavior description. We then present the CFRL language and define its semantics in terms of behavior. We close with a discussion and summary.

We assume that a behavior is a linear sequence of states. The output of a qualitative simulation system such as QPE, DME, and QPC is usually a tree or a graph of states. Each path through the graph represents a possible behavior over time. We will refer to such a path, i.e., a linear sequence of states, as a trajectory. A state represents a situation that the physical system being modeled is in at a particular time. "A particular time" here can be a time point or an interval. We will not assume any specific model of time in this paper.
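The basic prediction loop shared by these systems (evaluate model-fragment conditions, collect consequences of active instances, step to the next state) can be illustrated with a toy sketch. All class and function names here are illustrative, not from DME, QPE, or QPC, and states are simplified to flat dictionaries:

```python
class ModelFragment:
    """A phenomenon with a condition (state -> bool) and a consequence
    (state -> dict of variable updates).  Illustrative encoding."""
    def __init__(self, name, condition, consequence):
        self.name = name
        self.condition = condition
        self.consequence = consequence

def simulate(fragments, state, steps):
    """Repeat the basic cycle described in the text: find active
    fragment instances, apply their consequences, move to the next
    state.  Returns the resulting trajectory (list of states)."""
    trajectory = [dict(state)]
    for _ in range(steps):
        active = [f for f in fragments if f.condition(state)]
        updates = {}
        for f in active:
            updates.update(f.consequence(state))
        state = {**state, **updates}
        trajectory.append(dict(state))
    return trajectory

# Toy example loosely inspired by the EPS: the battery charges while the
# relay is closed and the charge level is below capacity.
charging = ModelFragment(
    "charging",
    lambda s: s["relay-closed"] and s["charge"] < 30,
    lambda s: {"charge": s["charge"] + 5})
traj = simulate([charging], {"relay-closed": True, "charge": 20}, 4)
```

Once the condition ceases to hold, the fragment instance deactivates and the state stops changing, which is exactly the kind of trajectory the later definitions match CPDs against.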
The only assumptions about time that we make are: (1) the times associated with different states do not overlap; (2) when a state sj immediately follows si in a behavior, there is no other "time" that falls between the times (periods) associated with si and sj; and (3) every state has a unique successor (predecessor) unless it is the final (initial) state, in which case it has none.

Before describing CFRL, we briefly describe the behavior representation in terms of which the semantics of CFRL will be defined. A physical situation is modeled as a collection of model fragments, each of which represents a physical object or a conceptually distinct physical phenomenon, such as a particular aspect of component behavior or a physical process. A model fragment representing a phenomenon specifies a set of conditions under which the phenomenon occurs and a set of consequences of the phenomenon.

In our modeling scheme, each state has a set of variable values and predicates that hold in the state. In addition, each state has a set of active model fragment instances representing the phenomena that are occurring in the state.

An Electrical Power System

This section presents the device that we will use throughout the rest of this paper as an example. The device is the electrical power system (EPS) aboard an Earth orbiting satellite (Lockheed 1984). A simplified schematic diagram of the EPS is shown in Figure 1. The main purpose of the EPS is to supply a constant source of electricity to the satellite's other subsystems. The solar array generates electricity when the satellite is in the sun, supplying power to the load and recharging the battery. The battery is a constant voltage source when it is charged between 6 and 30 ampere-hours. When the charge level is below 6 ampere-hours, the voltage output decreases as the battery discharges. When the charge level is above 30 ampere-hours, the voltage output increases as it is charged.

[Figure 1: An Electrical Power System. SA: Solar array; LD: Electrical load on board; BA: Rechargeable battery; CCC: Charge current controller; K1: Relay]

Since the battery can be damaged when it is charged beyond its capacity, the charge current controller opens the relay when the voltage exceeds a threshold to prevent the battery from being over-charged. The controller senses the voltage via a sensor connected to the positive terminal of the battery. When the voltage is greater than 33.8 volts, the controller turns on the relay K1. When the relay is energized, it opens and breaks the electrical connection to prevent further charging of the battery, thereby switching the current source for the load from the solar array to the battery. When the relay is open or when an eclipse period begins, the battery's charge level starts to decrease. When the battery becomes under-charged, the voltage decreases. When it reaches 31.0 volts, the CCC turns relay K1 off to close it.

We now describe the syntax and semantics of CFRL. Figure 2 shows an example of the representation of a function of the EPS.

DF: ?eps: Electrical-power-system
CF: Object-set: ?sun: Sun
                ?l: electrical-load
    Conditions: T
GF: (ALWAYS
     (AND (-> (AND (Shining-p ?sun)
                   (Closed-p (Relay-component ?eps)))
              CPD1)
          (-> (OR (NOT (Shining-p ?sun))
                  (Open-p (Relay-component ?eps)))
              CPD2)
          (-> (AND (> (Electromotive-force (Battery-component ?eps)) 33.8)
                   (Closed-p (Relay-component ?eps)))
              CPD3)
          (-> (AND (< (Electromotive-force (Battery-component ?eps)) 31.0)
                   (Open-p (Relay-component ?eps)))
              CPD4)))

Figure 2-a: Function F1 of EPS

We consider a function to be an agent's belief about how an object is to be used in some context to achieve some effect. Thus, our representation of a function specifies the object, the context, and the effect. However, it does not specify an agent, which is implicitly assumed to be whoever is using the representation.
Formally, a function is defined as follows:

Definition 1: Function
A function F is a triplet {DF, CF, GF}, where:
- DF denotes the device of which F is a function.
- CF denotes the context in which the device is to function.
- GF denotes the functional goal to be achieved.

The device specification, DF, specifies the class of the device and the symbol by which the device will be referred to in the rest of the definition of F. The example in Figure 2-a states that the function is of an Electrical-power-system which will be referred to as ?eps in the rest of the definition.

[Figure 2-b: CPDs of Function F1 of EPS]

The notion of a device function assumes some physical context in which the device is placed, and CF is a specification of such a context. CF consists of two parts, a set of objects and a set of conditions on those objects. For example, Figure 2-a states that there must exist an instance of Sun and an instance of electrical load. The conditions must hold throughout a behavior in order for the function to be verified in the behavior. Formally, the Object-set of a CF is a list of pairs {var, type}, where var is a symbol to be used in the description of F to refer to the object, and type is the type (class) of the object. Conditions is a logical expression involving the variables defined in the Object-set and DF.

The third part of the function definition, GF, specifies the behavior to be achieved by the device used in a specific manner. GF of a function is represented as a Boolean combination of Causal Process Descriptions (CPDs) and conditions involving the variables defined in DF and the Object-set of CF. Each CPD is an abstract description of expected behavior in terms of a causal sequence of events. In the following, we formally define a CPD.

Causal Process Descriptions (CPDs)

Figure 2-b shows examples of CPDs which are part of the functional specification of the EPS. A CPD is a directed graph, in which each node describes a state and each arc describes a temporal and (optionally) a causal relation between states. A node specifies a condition on a state. The condition is a logical sentence about the state of the world at some time using the variables defined in the DF and CF portions of the function. For example, the node n1 in Figure 2-b states the condition that the sun be shining. One or more nodes in each CPD are distinguished as the initial node(s). In the figures, the initial nodes are indicated with a thick oval. A condition specified by a node can contain AND and OR as logical connectives. When the meaning is clear, we will use the name of a node to refer to the condition represented by the node.

The arcs in a CPD are directed and specify temporal and causal relations among nodes. An arc has the following attributes:

- source: The node at the tail of the arc.
- destination: The node at the head of the arc.
- causal-flag: An indicator of whether the relationship between the states described by the source and destination nodes is causal. (The relationship is always temporal.)
- temporal-relation: =, <, or ≤, indicating the temporal relation between the states described by the source and destination nodes. = means that the states described by the two nodes are to be the same state; < means the state described by the source node must strictly precede the state described by the destination node; and ≤ means the state described by the source node must either be the same as or precede the state described by the destination node.
- causal-justification: If an arc is "causal", one can attach a justification for the causal relation. A justification takes the form of a Boolean combination of the following predicates: (by-function-of <model-fragment>), (with-participation-of <model-fragment>). The meaning of these predicates will be explained after we give a precise definition of a causal relation among nodes.

In order to refer to attributes of arcs, we will use the attribute name (e.g., source, destination, etc.) as a function of the arc, as in source(a). We will write ni ⇒c nj when there is a causal arc from ni to nj. As a condition specified by a node can be a Boolean combination of conditions, the following defines the meaning of causal relations among them, where e1, e2, and e3 are conditions:

a) (AND e1 e2) ⇒c e3 ≡ (AND (e1 ⇒c e3) (e2 ⇒c e3))
b) e1 ⇒c (AND e2 e3) ≡ (AND (e1 ⇒c e2) (e1 ⇒c e3))
c) (OR e1 e2) ⇒c e3 ≡ (OR (e1 ⇒c e3) (e2 ⇒c e3))
d) e1 ⇒c (OR e2 e3) ≡ (OR (e1 ⇒c e2) (e1 ⇒c e3))

Semantics of a CPD

A CPD can be considered to be an abstract specification of a behavior. Unlike a trajectory, it does not specify every state or everything known about each state. It only specifies some of the facts that should be true during the course of the behavior and partial temporal/causal orderings among those facts. The intuitive meaning of a CPD is that:

- For each node in the CPD, there must be a state in the trajectory in which the condition specified by the node is satisfied, and
- For each pair of nodes directly connected by an arc, the causal and temporal relationships specified by the arc must exist in the trajectory.

In order for us to evaluate these conditions against a behavior, we must define their meanings in terms of the languages used to describe a (simulated or actual) behavior.
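The four rewriting rules for causal relations between compound conditions can be captured mechanically. In this hedged sketch (the tuple encoding of conditions is an illustrative choice, not CFRL syntax), atoms are strings and compound conditions are ('AND'|'OR', left, right) tuples:

```python
def rewrite(src, dst):
    """Rewrite src =>c dst into a Boolean combination of causal relations
    between atomic conditions, following rules (a)-(d): AND/OR on either
    side of =>c distributes over the causal relation, preserving the
    connective.  (Illustrative encoding; decomposes the source first.)"""
    if isinstance(src, tuple):
        op, left, right = src
        return (op, rewrite(left, dst), rewrite(right, dst))   # rules (a), (c)
    if isinstance(dst, tuple):
        op, left, right = dst
        return (op, rewrite(src, left), rewrite(src, right))   # rules (b), (d)
    return ("=>c", src, dst)                                   # atomic pair
```

For example, rule (a) turns `(AND e1 e2) =>c e3` into an AND of the two atomic relations `e1 =>c e3` and `e2 =>c e3`, which is the form Definition 5 later evaluates against a trajectory.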
In this paper, we will do so in terms of the behavior representation formalism described earlier. However, note that CFRL itself is independent of the particular behavior representation language used, and that one would need to provide different definitions in order to evaluate functional specifications in CFRL against behaviors generated by a different scheme. We first present the definition of a causal dependency relation between sentences in a trajectory and the causality constraints that can be associated with a CPD arc. We then define the requirements for a trajectory to match a CPD and for a trajectory to match a function goal. Finally, we use those definitions to define the requirements for a trajectory to achieve a function.

A few words about notation: We will attach [s] to a sentence to denote that the sentence holds in state s. Therefore, p[s] means that p holds in state s. We will also associate a state with models and variables to denote sentences as follows:

m[s]: An instance of model fragment m is active in s.
v[s]: The value of variable v in s (i.e., an axiom of the form (= (value v s) c) for some constant c).

We will use the relations <, >, =, and ≤ to express temporal ordering among states in a trajectory. For example, for states s1 and s2 in a trajectory, "s1 < s2" means that s1 strictly precedes s2 in time. Note that the ordering is total for states in a trajectory because a trajectory is a linear sequence of states, while the ordering is partial for states in a CPD.

Intuitively, we say p2 is causally dependent on p1 in trajectory Tr, written p1 ⇒ p2, when it can be shown that p1 being true in Tr eventually leads to p2 being true in Tr.

Definition 2: Causal Dependency
The causal dependency relation, ⇒, is a binary relation between sentences in a trajectory with the following properties:

1. For all atomic sentences p, states s, model fragments m, and variables v:
a) If p[s0], p[s1], ..., p[s] (i.e., if p is part of the initial conditions and is never changed), then ∅ ⇒ p[s]. (And we say that p[s] is exogenous.)
b) If model fragment m represents an event and asserts p, and if there exists a state sj such that sj < s, ¬p[sj], m[sj], and p[sk] for all k > j (i.e., p became true at some point before s due to m), then m[sj] ⇒ p[s].
c) If model fragment m represents a continuous process and has p as a consequence, and if there exists a state sj such that sj < s, ¬p[sj], m[sj], and p[sk] for all k > j (i.e., p became true at some point before s due to m), then m[sj] ⇒ p[s].
d) If model fragment m has p as a condition, then p[s] ⇒ m[s].
e) If v occurs in p as a term and p is not v[s], then v[s] ⇒ p[s].
f) If v is an exogenous variable, ∅ ⇒ v[s].
g) For all variables v' such that v' -> v is in the causal ordering¹ in s: (i) v'[s] ⇒ v[s]; (ii) If p[s] is the equation through which v depends on v', then p[s] ⇒ v[s].
h) For all variables v' such that v and v' are in a feedback loop in the causal ordering in s: (i) v'[s] ⇒ v[s] and v[s] ⇒ v'[s]; (ii) For each equation p such that p is part of the feedback loop and v appears in p, p[s] ⇒ v[s].
i) If s1 is the state immediately following s, and dv is the time-derivative of v in s, then dv[s] ⇒ v[s1].
2. ⇒ is transitive.

When pi ⇒ pj, we will say that pj is causally dependent on pi or that pi causes pj. Given statements p[si] and p[sj] such that p[si] ⇒ p[sj], we call the causal sequence of statements starting from p[si] and leading to p[sj] the causal path from p[si] to p[sj].

Having defined the meaning of a causal relation among statements, we can now explain the meaning of the predicates used to justify causal arcs in a CPD.
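Property 2 makes ⇒ the transitive closure of the base dependencies established by rules (a)-(i), so deciding whether one statement causes another is a reachability search over direct dependencies. A minimal sketch, with statements tagged by illustrative string labels:

```python
from collections import defaultdict, deque

def causes(base_pairs, p, q):
    """Decide p => q under transitivity (Definition 2, property 2) by
    breadth-first search over the direct dependencies in base_pairs,
    e.g. ('m[s1]', 'p[s2]').  (Sketch; labels are illustrative.)"""
    succ = defaultdict(list)
    for a, b in base_pairs:
        succ[a].append(b)
    frontier, seen = deque([p]), {p}
    while frontier:
        cur = frontier.popleft()
        if cur == q:
            return True
        for nxt in succ[cur]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# p holds in s1 because fragment m asserted it (rule b); m was active
# because its condition c held (rule d): c[s0] => m[s0] => p[s1].
base = [("c[s0]", "m[s0]"), ("m[s0]", "p[s1]")]
```

The visited set along the successful search is exactly a causal path in the paper's sense, which is what the causality constraints of Definition 3 inspect.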
Definition 3: Causality constraints
Given an arc a from node ni to nj in a CPD and a model fragment m, causality constraints of the following form can be associated with a:
a) (by-function-of m) -- meaning that the causal path from ni to nj includes a consequence of an instance of m;
b) (with-participation-of m) -- meaning that the causal path from ni to nj includes a consequence of an instance of a model fragment in which an instance of m participates.

These predicates do not imply specific commitments as to how the components participate in the causal process. They give the designer the capability of using whatever component has the desired function, independent of its particular mechanism.

We can now present the definitions on which verification of a trajectory with respect to a CPD is based.

Definition 4: Matching of a state and a node
A state s in a trajectory and a node n in a CPD are said to match if the condition specified in n is true in s.

Having defined the meaning of a causal relation among statements in a trajectory, we can now define the meaning of the causal and temporal relations between linked nodes of a CPD.

Definition 5: Satisfying the constraints of an arc
If a is an arc from node ni to nj in a CPD, then the causal and temporal constraints of a are satisfied at states si and sj if both of the following conditions are satisfied:
a) si < (=, or ≤) sj when ni < (=, or ≤) nj, respectively.
b) If arc a is causal and if ni and/or nj are Boolean combinations of conditions, then the causal relation between ni and nj can be rewritten as a Boolean combination of causal relations of the form ei ⇒ ej, where ei and ej are atomic conditions. ei[si] ⇒ ej[sj] is satisfied if for every variable² vi used in ei and every variable vj used in ej, vi[si] ⇒ vj[sj] and the causal path from vi[si] to vj[sj] satisfies the causal justification on a.

Definition 6: Matching of a CPD and a trajectory
Let T be a trajectory consisting of a linear sequence of m states, s1 through sm. Let CPD1 be a CPD consisting of a set of nodes, N1, and a set of arcs, A1. CPD1 and T are said to match iff all the following conditions are satisfied:
a) The initial nodes of CPD1 match the initial state s1 in T.
b) For each remaining node n in N1, there exists a state in T that matches n such that for every arc a in A1 from nodes ni to nj, the temporal and causal constraints specified by a are satisfied by the states matched to ni and nj.

Representation of the Functional Goal (GF)

The functional goal of a function (denoted by GF) is represented as an expression consisting of CPDs, conditions, quantifiers, and Boolean connectives. Nested expressions using connectives are allowed, but a quantifier cannot appear in the scope of another quantifier. Each CPD must appear in the scope of one and only one quantifier. There are two quantifiers, ALWAYS and SOMETIMES. Connectives are AND, OR, IMPLIES, and NOT. Syntactically, the connectives are used in the same way as ordinary logical connectives. The following are example GF expressions:

(ALWAYS (AND cpd1 cpd2 (OR cpd3 cpd4)))
(OR (ALWAYS cpd1) (SOMETIMES (AND cpd2 cpd3)))
(ALWAYS (NOT cpd1))

¹ Causal ordering is a technique for determining causal dependency relations among variables in a set of equations (Iwasaki & Simon 1986).

² The variables used in CFRL can be different from the variables in terms of which the trajectory states are defined, since CFRL descriptions represent a device-level perspective, while states in the trajectory represent a component or physical process-level perspective. Correspondences between CPD variables and trajectory variables are made when the function is matched against a specific trajectory.
Quantifiers align the initial nodes of the CPDs in their scope as well as specify whether the described behavior must hold in every subsequence of the trajectory or only in some of them. The connectives and quantifiers are to be interpreted as specified in the following definition of matching a GF and a trajectory.
Definition 7: Matching of a GF and a trajectory
Let T be a trajectory consisting of a linear sequence of m states, s1 through sm; let Ti denote the subsequence of T from si through sm; and let <cpd-exp> denote a Boolean combination of CPDs and conditions. Then:
a) (ALWAYS <cpd-exp>) matches T iff <cpd-exp> matches Ti for each Ti (i = 1 to m).
b) (SOMETIMES <cpd-exp>) matches T iff <cpd-exp> matches Ti for some Ti (i = 1 to m).
c) (AND <cpd-exp0> <cpd-exp1> ...) matches T iff every conjunct matches T.
d) (OR <cpd-exp0> <cpd-exp1> ...) matches T iff at least one of the disjuncts matches T.
e) (NOT <cpd-exp>) matches T iff <cpd-exp> does not match T.
f) (IMPLIES <cpd-exp0> <cpd-exp1>) matches T iff <cpd-exp0> does not match T or <cpd-exp1> does match T.
g) Condition c matches T iff c is true in the initial state of T.
Finally, we complete the definition of the meaning of a function, as follows:
Definition 8: A trajectory achieving a function
A trajectory T achieves a function F when the condition specified in CF holds throughout T and GF matches T.
Discussion and Summary
In this paper, we have presented CFRL, a language for specifying an expected function of a device, and defined its semantics in terms of the type of behavior representation widely used in model-based qualitative simulation. The language allows one to explicitly state the physical context in which the function is to be achieved and to describe the function as an expected causal sequence of events.
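Clauses (a)-(g) of Definition 7 read directly as a small recursive evaluator. The Python sketch below is ours: GF expressions are nested tuples, CPD matching is abstracted to a predicate over trajectories, and conditions are predicates over states.

```python
def matches(gf, T):
    """Evaluate a GF expression against trajectory T (a list of states)."""
    op = gf[0]
    if op == "ALWAYS":       # (a): must match every suffix subsequence Ti
        return all(matches(gf[1], T[i:]) for i in range(len(T)))
    if op == "SOMETIMES":    # (b): must match some suffix subsequence Ti
        return any(matches(gf[1], T[i:]) for i in range(len(T)))
    if op == "AND":          # (c): every conjunct matches
        return all(matches(g, T) for g in gf[1:])
    if op == "OR":           # (d): at least one disjunct matches
        return any(matches(g, T) for g in gf[1:])
    if op == "NOT":          # (e)
        return not matches(gf[1], T)
    if op == "IMPLIES":      # (f): material implication
        return (not matches(gf[1], T)) or matches(gf[2], T)
    if op == "cond":         # (g): condition true in the initial state
        return gf[1](T[0])
    if op == "cpd":          # CPD matching delegated to a predicate on T
        return gf[1](T)
    raise ValueError(op)
```

Note how ALWAYS and SOMETIMES quantify over the suffixes Ti, so a nested condition is always evaluated against the initial state of the current subsequence, as clause (g) requires.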
Since the concept of causal interactions among components is essential to the understanding of a function, the language allows explicit representation of causal interactions and constraints on such interactions. CFRL is based on the work on Functional Representation (Sembugamoorthy & Chandrasekaran 1986), and it is a further extension of the work presented in (Iwasaki & Chandrasekaran 1992). We have extended the expressive power of the function specification languages described in those papers and have provided a formal foundation for the semantics of the resulting language.
Franke (Franke 1991) also proposed matching design intent with simulated behavior. Unlike other work on functional representation, he focuses on representing the purpose of a design modification rather than that of a device itself. He developed a representation scheme, called TED, in which he expresses the purpose for making a modification in a structure. TED's representation of a function can be a sequence (not necessarily linear) of partial descriptions, which is matched against states in a sequence of qualitative states generated by QSIM. To prove that a function is achieved by a modification, he compares the behavior of the original structure with that of the modified structure. Bradshaw and Young (Bradshaw & Young 1991) also represent the intended function in a manner similar to Functional Representation. They built a system called DORIS, which uses the knowledge generated by qualitative simulation for evaluating device behavior as well as for diagnosis and explanation.
The most important characteristic that distinguishes our work from that of Franke and of Bradshaw and Young is the central role causal knowledge plays in CFRL. We conjecture that causal relations are an essential part of functional knowledge, and that representation of functional knowledge must allow explicit description of the causal processes involved.
Furthermore, verification of a function must ascertain that the expected causal chain of events takes place, since the satisfaction of the functional goal alone does not necessarily indicate that the device is functioning as intended.
Because the semantics of CFRL is defined in terms of matching between a behavior and a functional specification, the language is immediately useful for the purpose of behavior verification. We have designed and implemented an algorithm that verifies a behavior produced by the DME system with respect to a function specified in CFRL as defined in this paper. Initial testing of the algorithm has included verifying the functional specifications of the EPS as given above. Care must be taken in designing such an algorithm to ensure that exponential search is not required to find a match between a trajectory and a CPD. We are currently in the process of analyzing the computational complexity of the problem and our algorithm.
We expect formal functional specifications to have many uses throughout the life cycle of a device (Iwasaki et al. 1993). For example, in the early stages of the design process, designers often do "top down" design by incrementally introducing assumptions about device structure and causality relationships. Such design evolution could be expressed as incremental refinements of a CFRL functional specification. DME could assist a designer in this functional refinement process by assuring that each successive specification is indeed a refinement of its predecessor, so that any device that satisfies the refinement also satisfies the predecessor.
References
Bradshaw J.A.; and Young R.M. 1991. Evaluating Design Using Knowledge of Purpose and Knowledge of Structure. IEEE Expert, April.
Crawford J.; Farquhar A.; and Kuipers B. 1990. QPC: A Compiler from Physical Models to Qualitative Differential Equations. In Proceedings of the Eighth National Conference on Artificial Intelligence.
Forbus K.D. 1984. Qualitative Process Theory.
Artificial Intelligence 24.
Forbus K.D. 1989. The Qualitative Process Engine. In Readings in Qualitative Reasoning about Physical Systems. Weld, D.S., and de Kleer, J., Eds. Morgan Kaufmann.
Franke D.W. 1991. Deriving and Using Descriptions of Purpose. IEEE Expert, April.
Iwasaki Y.; and Simon H.A. 1986. Causality in device behavior. Artificial Intelligence 29:3-32.
Iwasaki Y.; and Low C.M. 1991. Model Generation and Simulation of Device Behavior with Continuous and Discrete Change. Technical Report, KSL, Dept. of Computer Science, Stanford University.
Iwasaki Y.; and Chandrasekaran B. 1992. Design Verification through Function and Behavior-Oriented Representations: Bridging the gap between Function and Behavior. In Proceedings of the Second International Conference on Artificial Intelligence in Design, Pittsburgh.
Iwasaki Y.; Fikes R.; Vescovi M.; and Chandrasekaran B. 1993. How Things are Intended to Work: Capturing Functional Knowledge in Device Design. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence.
Keuneke A. 1989. Machine Understanding of Devices; Causal Explanation of Diagnostic Conclusions. Ph.D. thesis, Laboratory for AI Research, Dept. of Computer & Information Science, The Ohio State University.
Lockheed Missiles and Space Company. 1984. SMM Systems Procedure for Electrical Power Subsystem. doc. #D889545A, SE-23, Vol. 3.
Sembugamoorthy V.; and Chandrasekaran B. 1986. Functional Representation of Devices and Compilation of Diagnostic Problem-Solving Systems. In Kolodner J.L. and Riesbeck C.K. (editors), Experience, Memory and Reasoning, Lawrence Erlbaum Associates, Hillsdale, NJ.
Model Simplification by Asymptotic Order of Magnitude Reasoning
Kenneth Man-kam Yip*
Department of Computer Science
Yale University
P.O. Box 2158, Yale Station
New Haven, CT 06520
yip-ken@cs.yale.edu
Abstract
One of the hardest problems in reasoning about a physical system is finding an approximate model that is mathematically tractable and yet captures the essence of the problem. Approximate models in science are often constructed by informal reasoning based on consideration of limiting cases, knowledge of the relative importance of terms in the model, and understanding of gross features of the solution. We show how an implemented program can combine such knowledge with a heuristic simplification procedure and an inequality reasoner to simplify difficult fluid equations.
Introduction
Many important scientific and technological problems - from life in moving fluids, to drag on ship hulls, to heat transfer in reentering spacecraft, to motion of air masses, and to evolution of galaxies - arise in connection with fluid equations. In general, these equations form a system of coupled nonlinear partial differential equations, which presents enormous analytical and numerical difficulties.
We are interested in making computers help scientists and engineers analyze difficult fluid problems. By this we do not mean the development of new computer technology for more machine cycles and memory, nor clever numerical methods, nor better turbulence models, nor techniques for automatic grid generation or body definition. Advances in all these areas will no doubt enhance the applicability of direct numerical approaches to fluid problems. A thorough understanding of the physics involved, however, requires much more than numerical solutions. Present computers generate too much low-level output, and that makes the process of discovering interesting flow phenomena and tracking important structures tedious and error-prone.
Our goal is to build a new generation of smart, expert machines that know how to represent - not just present - the important features of the solutions, so that they can talk about them, reason about them, and use them to guide further experiments or build simplified mathematical models. Our programs are not big number-crunchers; nor are they symbolic calculators like Macsyma. Rather, we view them as models of what some scientists do when they are investigating physical phenomena. We want our computer programs to simulate how scientists analyze these phenomena; they should be able to formulate approximate models, to perform qualitative and heuristic analyses, to provide a high-level executive summary of these analyses, and to give meaningful information that helps a scientist in understanding the phenomena.
One of the most important skills in developing understanding of a physical phenomenon is the ability to construct approximate models that are mathematically tractable but yet retain the essentials of the phenomenon. The scientist must exercise judgment in choices of what idealizations or approximations to make. Making such judgments often requires an understanding of the gross features of the solution, knowledge of the relative importance of terms in the model, and consideration of limiting cases. The purpose of this paper is to demonstrate how this kind of knowledge can be embodied in a computer program to tackle the difficult problem of model approximation in fluid dynamics.
Related work in AI includes research in model selection and model generation. Addanki's graph of models guides the selection of an appropriate model from a set of handcrafted models [Addanki et al., 1991]. Weld's model sensitivity analysis provides an alternative but more general approach to model selection [Weld, 1992].
*Supported in part by NSF Grant CCR-9109567.
Falkenhainer and Forbus automate model generation by composing suitable model fragments [Falkenhainer and Forbus, 1991]. Another relevant line of work concerns order of magnitude reasoning. Raiman introduces order of magnitude scales to extend the power of qualitative algebra [Raiman, 1991]. Weld explores related ideas in a technique called exaggeration in the context of comparative analysis [Weld, 1990]. Mavrovouniotis and Stephanopoulos combine numerical and symbolic order of magnitude relations in analyzing chemical processes [Mavrovouniotis and Stephanopoulos, 1988].
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.
Our project differs from these works in two major respects. First, whereas all the previous works deal either with qualitative models or with models specified by algebraic or ordinary differential equations, we analyze systems of nonlinear partial differential equations (PDEs). Second, we base our programs on a theory of asymptotic order of magnitude of functions, which we believe is closer to what applied mathematicians or fluid dynamicists use.¹
The Task
We are interested in the task of model simplification, a part of a larger modeling-analysis-validation process whose purpose is to establish our confidence in the applicability of an approximate model in describing a certain physical phenomenon. Model simplification takes three inputs: (1) a detailed model, (2) a description of the parameters, dependent variables, and independent variables of the model, and (3) essential physical effects to be included. Its output is one or more simplified models with constraints on parameters to represent the applicability of the models.
Detailed fluid models are usually available from standard textbooks, and so are the physical meanings of parameters and variables.
The description of variables is problem-dependent; it often includes their boundary values and estimated maximum order of magnitude. Knowledge of which physical effects are essential can come from experimental observations concerning the phenomenon. For instance, a model that neglects viscosity will predict zero drag on a solid body in steady flow; the results diverge from physical reality.
In general, a simplified model is valid only under a range of parameter values. For instance, the approximation may require the Reynolds number to be large, and conditions like this are represented by symbolic constraints among the parameters.
As our model problem, we use Prandtl's boundary layer approximation for high Reynolds number flows, which is probably the single most important approximation made in the history of fluid mechanics. For ease of exposition, we consider the case of two-dimensional, steady, incompressible flow over a flat plate (Fig. 1). The same technique will work for three-dimensional, unsteady flow over arbitrary bodies. The detailed model is the 2D steady incompressible Navier-Stokes equations (Fig. 2). Equations (1) and (2) are the momentum equations, while (3) is the equation of continuity (or conservation of mass). The model is a system of three coupled PDEs containing three unknowns u, v, and p. The objective is to simplify the model in the limit Re → ∞.
¹The asymptotic theory is also commonly used in the analysis of algorithms.
[Fig. 1: flow over a flat plate, with an inviscid outer flow above the boundary layer. Fig. 2: the two-dimensional steady, incompressible Navier-Stokes equations; u and v are the horizontal and normal components of the velocity, p is the pressure, and Re is the Reynolds number.]
Prandtl's idea is that at high Reynolds numbers viscosity remains important near the body surface even if it could be disregarded everywhere else.
As long as the "no-slip" condition holds, i.e., that fluids do not slip with respect to solids, there will be a thin layer around the body where rapid changes of velocity produce notable effects, despite the small coefficient 1/Re. The layer in question is called the boundary layer. To get a feel for the type of reasoning involved in the derivation of the boundary layer approximation, we quote a passage, slightly edited for our purpose, from a standard fluid dynamics textbook [Yih, 1977]:

To start with we assume that δ*, the width of the boundary layer, is small compared with L, the length of the flat plate, if Re is large. That means δ = δ*/L << 1, and the range of the boundary layer variable y is δ. Since u and x are of order of unity, equation (3) states that v is of order δ. Now the convective terms in equation (1) are all of O(1). A glance at the viscous terms in equation (1) reveals that ∂²u/∂x² << ∂²u/∂y², so that the first can be neglected and the viscous terms can be replaced by (1/Re) ∂²u/∂y². Since in the boundary layer the viscous terms are of the same order of magnitude as the inertial terms,

(1/Re) ∂²u/∂y² = O(1);

this shows that

Re = O(1/δ²).   (4)

To see how p varies, we turn to equation (2). Again the term (1/Re) ∂²v/∂x² can be neglected since it is added to a much larger term. Then all the terms involving v are of O(δ). Hence the pressure variation with respect to y in the boundary layer is of O(δ²) and can be neglected. Thus we take the pressure outside the boundary layer to be the pressure inside. But outside the boundary layer, the pressure distribution p(x) is a function of x only. So we can replace the partial derivative of the pressure term by the total derivative. Thus the flow in the boundary layer is governed by:

u ∂u/∂x + v ∂u/∂y = -dp/dx + (1/Re) ∂²u/∂y²   (5)

to which must be added the equation of continuity (3).

Much can be learned from this explanation.
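The bookkeeping in the quoted passage can be mimicked with a few lines of exponent arithmetic. This sketch is ours, not the paper's program: an integer n stands for an order O(δⁿ), 1/Re is assigned O(δ²) as in (4), and the order of a derivative ∂f/∂s is taken to be O(f)/O(s).

```python
# gauge exponents: O(delta**n); u and x are O(1); y and v are O(delta);
# 1/Re is O(delta**2), following equation (4)
u, x, y, v, inv_Re = 0, 0, 1, 1, 2

def d(f, s):
    # exponent of a partial derivative: O(df/ds) = O(f) / O(s)
    return f - s

terms = {
    "u du/dx":        u + d(u, x),
    "v du/dy":        v + d(u, y),
    "(1/Re) d2u/dx2": inv_Re + d(d(u, x), x),
    "(1/Re) d2u/dy2": inv_Re + d(d(u, y), y),
}
print(terms)  # a smaller exponent means an asymptotically larger term
```

Both convective terms and the retained viscous term come out O(1), while (1/Re) ∂²u/∂x² comes out O(δ²), which is exactly why it is dropped in the passage.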
First, we notice that the simplified model consists of only two equations, (5) and (3), and two unknowns, u and v; the momentum equation (2) is discarded. The pressure p becomes a known boundary term to be given by the solution to the outer flow, the farfield approximation, where viscosity can be totally ignored. Second, the explanation refers to physical meanings of the terms in the equations; we have inertia terms, convective terms, viscous terms, and pressure terms. Third, the reasoning makes heavy use of order of magnitude estimates to justify the elimination of small terms. Fourth, given a few basic order of magnitude estimates (such as those of δ, u, and x), estimates for more complicated quantities involving partial derivatives are automatically inferred. In particular, it derives the important conclusion that the dependency of the pressure on y, i.e., the variation across the thin boundary layer, can be neglected at this level of approximation. Finally, by balancing the inertia terms and the viscous terms, it obtains a quantitative condition on the range of parameter values of Re, equation (4), for which the approximation is valid.
Characteristics of the Problem Domain
Some Terminology
Fluids obey Newton's laws of motion. The momentum equations (1) and (2) are just examples of Newton's 2nd Law (F = ma). In fluid mechanics, it is customary to have the acceleration or inertia terms written on the left hand side of the equation, while the remaining force terms are on the right. See Fig. 3.
Figure 3: Meaning of terms in the 2D steady incompressible Navier-Stokes equations.
Since the motion of a fluid particle can change with both time and space, the inertia consists of two parts: the local acceleration (i.e., rate of change of velocity with respect to time) and the convective acceleration (i.e., product of velocity and the velocity gradient). A steady flow is one in which the local acceleration is zero.
The applied forces on the fluid can be divided into two types: (1) surface forces, caused by molecular attractions, which include pressure and friction forces due to viscosity, and (2) body forces resulting from external force fields like gravity or a magnetic field. It is often convenient to define the pressure term to include gravity (i.e., p + ρgy, where ρ is the density of the fluid, g the gravitational constant, and y the vertical coordinate). When the divergence of the fluid velocity is zero (equation (3)), the flow is called incompressible, which just means that the mass of fluid inside a given volume is always conserved.
The momentum equations express a balance of opposing forces on the fluid: the inertia forces keep the fluid moving steadily against the effects of the pressure gradient and viscous forces. The Reynolds number is simply the ratio between the inertia and the viscous forces; it is an indication of the relative importance of viscosity - actually the unimportance, since high Reynolds numbers are associated with slightly viscous flow.
Ontology
Description of fluid motion involves a variety of quantities: (1) the fundamental quantities: time, space, and mass; (2) the usual dynamical quantities from particle mechanics, such as velocity, acceleration, force, pressure, and momentum; (3) quantities that are less familiar but can be easily derived from the more basic ones: velocity gradient and pressure gradient, convective acceleration, viscous shearing forces, and turbulent stress; (4) dimensionless parameters such as the Reynolds number; and (5) scale parameters, such as δ, which determine the length, time, or velocity scale of interest.
Asymptotic Order of Magnitude of Functions
Flows often vary widely in character depending on the relative magnitude of certain parameters or variables.
For instance, the flow near a jet may be highly irregular, but at a large distance the mean velocity profile may become quite regular; this is the so-called farfield approximation. Another example is the Reynolds number. Small Reynolds numbers are often associated with laminar (smooth) flow, whereas large Reynolds number flows are quite erratic. So it should not be surprising that most useful approximations in fluid mechanics (and in many other branches of physics) are dependent on a limit process, the approximation becoming increasingly accurate as a parameter tends to some critical value. In our model problem, for example, we would be interested in how the boundary layer velocities u and v behave as Re becomes large.
More generally, we will consider the asymptotic behavior of a function f(ε) as ε approaches some critical value ε0. Without loss of generality, we can assume ε0 = 0, since translation can be used to handle any limiting value. There are several ways to describe the asymptotic (ε → 0) behavior of a function with varying degrees of precision. For instance, we could describe the limiting value of f(ε) as ε → 0 qualitatively, i.e., whether it is bounded, vanishing, or infinite. Or, we could describe the limiting value quantitatively by giving a numerical value for the bound. But it is most useful to describe the shape of the function qualitatively as the limit is approached. The description uses the order symbols O ("big oh"), o ("little oh"), and ~ ("asymptotically equal") to express the relative magnitudes of two functions.
Definition 1: f(ε) = O(g(ε)) as ε → 0 if lim_{ε→0} f(ε)/g(ε) = K, where K is a finite number.
Definition 2: f(ε) = o(g(ε)) as ε → 0 if lim_{ε→0} f(ε)/g(ε) = 0.
Definition 3: f(ε) ~ g(ε) as ε → 0 if lim_{ε→0} f(ε)/g(ε) = 1.
Typically, we will use a convenient set of simple functions inside an order symbol; they are called the gauge functions because they are used to describe the shape of an arbitrary function in the neighborhood of a critical point. Common gauge functions include the powers and inverse powers of ε. For example, sin(ε) = O(ε) as ε → 0. For more complicated problems, logarithms and exponentials of powers of ε may also be used.
The asymptotic order of magnitude must be distinguished from the numerical order of magnitude. If f = 10⁶g, then f and g differ by 6 numerical orders of magnitude, but they are still of the same asymptotic order. However, in a physical problem the variables are normally scaled in such a way that the proportionality constant K will be close to 1.
Below we list some useful rules of operation on order symbols:
1. O(fg) = O(f)O(g)
2. O(f + g) = max(O(f), O(g))
3. O(f) + o(f) = O(f)
4. If f = O(g), then ∫f dt = O(∫ |g(t)| dt) as ε → 0.
Order relations cannot in general be differentiated. That is, if f = O(g), then it is not generally true that f' = O(g'). However, using the definition of the total differential of a function f(x, y),
df = (∂f/∂x) dx + (∂f/∂y) dy = df_x + df_y,
where df_x and df_y are the partial differentials, we can derive some useful rules involving partial derivatives:
1. O(∂f/∂x) O(dx) = O(df_x)
2. O(∂f/∂y) O(dy) = O(df_y)
3. O(df) = max(O(df_x), O(df_y))
Theory of Simplification
The basic idea in simplification is to identify small terms in an equation, drop these terms, solve the simplified equation, and check for consistency. But this does not always work. Consider the following simple polynomial:
3ε²x³ + x² - εx - 4 = 0
in the limit ε → 0. We might naively drop the cubic and the linear terms because their coefficients are small. But if we do that, we only get two roots, x = ±2, losing the third root.
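The lost root is easy to confirm numerically. The bisection sketch below is ours (the value ε = 10⁻³ and the bracketing intervals are arbitrary choices): truncating to x² - 4 = 0 keeps the two roots near ±2 but misses the root near -1/(3ε²).

```python
def bisect(f, a, b, steps=200):
    # plain bisection; assumes f(a) and f(b) have opposite signs
    fa = f(a)
    for _ in range(steps):
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return (a + b) / 2

eps = 1e-3
f = lambda x: 3 * eps**2 * x**3 + x**2 - eps * x - 4

# the cubic has two roots near +-2 and one of size ~ -1/(3*eps**2)
roots = [bisect(f, -1e6, -1e5), bisect(f, -3, 0), bisect(f, 0, 3)]
print(roots)
```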
Thus, the process of simplification leads to a loss of important information. What went wrong? The problem is that terms that appear small are not really small. The missing root depends inversely on ε in such a way that the cubic term is not negligible even though its coefficient becomes small. To fix this problem, we introduce three concepts: an undetermined gauge, a significant gauge, and a maximal set.
To begin, we will assume x = O(εⁿ), where n is still undetermined - hence the name undetermined gauge. The order of each term is then:
3ε²x³ + x² - εx - 4 = 0
O(ε^(3n+2))  O(ε^(2n))  O(ε^(n+1))  O(1)
To determine the relative importance of terms, we use the heuristic that we only retain the smallest number of terms that will balance the equation. Since we must allow the situation where two or more terms may have the same asymptotic order, we group terms into equivalence classes by the relation ~. A maximal set is any such class that is not smaller than any other class. As an example, the cubic polynomial above has four maximal sets, each containing one term. The heuristic can then be stated as follows:
Heuristic of minimal complication (or Method of Dominant Balance): If the equation has two or more maximal sets, balance two of them; these two maximal sets are called dominant. Assume the remaining sets are negligible. Self-consistent choices of dominant maximal sets correspond to significant simplified equations.
Applying this heuristic to the polynomial, we get six cases to consider. For instance, one possibility is that the first two terms are dominant, i.e., ε²x³ ~ x² >> εx, 4. Equating the two undetermined gauges, we get 3n + 2 = 2n, and this implies n = -2. The remaining terms are O(ε⁻¹) and O(1), which is consistent with the assumption that the first two terms are dominant. So this possibility is included. On the other hand, if we assume ε²x³ ~ εx >> x², 4, we get n = -1/2. But then x² = O(ε⁻¹) >> O(ε^(1/2)), violating the assumption that it should be much smaller than the first term. This possibility must be excluded. A similar analysis
This possibility must be excluded. A similar analysis Reasoning about Physical Systems 637 shows that only one more possibility, when the second and fourth terms are dominant, i.e., n = 0, is self- consistent. So the heuristic concludes that we should consider two simplified polynomials: and 3c2x3 + x2 = 0 * 2 - J- 3c2 The values of en for which we get self-consistent domi- nant maximal sets are called significant gauges. The balancing of the dominant maximal sets produces sim- plified equations that correspond to qualitatively sig- nificant asymptotic behaviors. Implementation: The Details Our method has two main parts: (1) a preprocessor, which given the input specification of a model, creates internal representations of quantities, equations, and a constraint network connecting the quantities, and (2) a model-simplifier, which finds all the self-consistent ap- proximate models by the heuristic of minimal compli- cation. The model-simplifier relies on three procedures - a constraint propagator, a graph searcher, and an inequality bounder - to determine the order of magni- tude of quantities and their relationships. We describe each of these five pieces in turn. The Preprocessor The problem specification is defined by the macro defmodel, which takes a name, a list of quantity de- scriptions, the momentum and continuity equations in infix form, relations defining external pressure and free stream velocities, and a list of estimated orders of mag- nitude. (defmodel prandtl-boundary-layer-with-pressure-gradient (with-independent-variables ((x :lower-bound 0 :upper-bound 1 :physicaI-features ‘(space streamwise)) (y :lower-bound 0 :physical-features ‘(space transverse))) . . . ;;similar descriptions for U, V, P, Re, etc.;; . . . 
    (with-essential-terms (viscous inertia)
      (with-equations
        ((streamwise-momentum-equation
           (U * (d U / d x) + V * (d U / d y) =
            - (d P / d x) + (d2 U / d2 x) / Re + (d2 U / d2 y) / Re))
         (transverse-momentum-equation
           (U * (d V / d x) + V * (d V / d y) =
            - (d P / d y) + (d2 V / d2 x) / Re + (d2 V / d2 y) / Re))
         (continuity
           ((d U / d x) + (d V / d y) = 0)))
        (with-relations
          (constant U 1)
          (constant x 1)
          (constant y 'delta)
          (constant PO 1))))))

Quantities
Quantities are represented by CLOS objects. They are divided into four types: (1) independent variables (space and time), (2) dependent variables (e.g., pressure, velocity), (3) controllable parameters (e.g., Reynolds number), and (4) scale parameters (e.g., the length scale δ). Each quantity has slots for its upper bound, lower bound, boundary values, physical features, and relations with other quantities. A dependent variable contains additional information about its dependency on the independent variables. For example, the dependent variable U depends on both x and y.
The input specifies nine quantities - x, y, U, V, U-inf, PO, P, Re, and delta. But a total of 60 quantities will be created. The reason is that for each dependent variable, quantities corresponding to its total differential, partial differentials, and derivatives are also automatically generated. For instance, the dependent variable U generates 5 additional quantities: dU, dU-x, dU-y, ∂U/∂x, and ∂U/∂y. Quantities are also generated for each term in the equations and relations. An example would be the dependent variable d2Udx2/RE corresponding to the viscous term (1/Re)(∂²U/∂x²).
Input quantities have associated physical features such as space, velocity, and pressure. These features are used to determine the physical meaning of derived quantities by simple rewrite rules. For instance, a velocity quantity differentiated by a space quantity gives a velocity-gradient quantity.
The physical meaning of a term in the equation is determined in a similar fashion. For example, a term that is the product of a velocity quantity and a velocity gradient represents the convective inertia term.
A Constraint Language
Equations involving quantities are represented as constraints, so that when all but one of the quantities are known, the value of the remaining one can be computed in terms of the others. Our constraint language has 6 primitives:
1. The equality constraint, (== q1 q2), asserts that O(q1) = O(q2). Example: the continuity equation (3) is represented by (== dudx dvdy).
2. The multiplier constraint, (multiplier q1 q2 q3), specifies that the quantities q1, q2, and q3 must be related by the equation O(q1) x O(q2) = O(q3). Example: (multiplier u dudx ududx).
3. The maximum constraint, (maximum q1 q2 q3), specifies that O(q3) = max(O(q1), O(q2)). Example: (maximum du-x du-y du).
4. The variation constraint, (variation f x df-x), captures the inference that when the partial differential of a function f(x, y) with respect to x is much less than the value of f at its outer boundary, then f is asymptotically equal to its boundary value. Symbolically, df-x = o(f0) implies O(f) = O(f0), where f0 is the value of f at its outer boundary in the x-direction.
5. The total-variation constraint, (total-variation f df), specifies: O(df) = O(upperbound(f) - lowerbound(f)).
6. The constant constraint, (constant q v), just says that O(q) = v.
The constraint language allows simple inferences about quantities to be made. For instance, using the continuity equation (3) and the known orders of magnitude for the quantities U, x, and y, the value for V is automatically deduced.
Qualitative Order Relations
An important type of inference is the determination of the ordering relationship between two quantities. For instance, in order to drop a term A, the system has to show that A is much smaller than another quantity B in the equation.
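The exponent arithmetic behind such deductions can be sketched in a few lines. This is our illustration, not the paper's propagator: orders are encoded as exponents of delta, a constraint fires once all but one of its quantities are known, and the continuity equation together with O(U) = O(x) = 1 and O(y) = delta yields O(dV) = delta.

```python
def propagate(constraints, known):
    """Fixed-point propagation over order-of-magnitude constraints.
    known maps quantity names to delta exponents (O(q) = delta**e)."""
    changed = True
    while changed:
        changed = False
        for kind, qs in constraints:
            vals = [known.get(q) for q in qs]
            missing = [i for i, val in enumerate(vals) if val is None]
            if len(missing) != 1:
                continue                  # nothing to deduce yet (or done)
            i = missing[0]
            if kind == "==":              # O(q1) = O(q2)
                known[qs[i]] = vals[1 - i]
            else:                         # multiplier: O(q1)*O(q2) = O(q3)
                a, b, c = vals            # exponents add under multiplication
                known[qs[i]] = a + b if i == 2 else c - (b if i == 0 else a)
            changed = True
    return known

constraints = [
    ("multiplier", ("dudx", "x", "du")),  # O(dU/dx) * O(dx) = O(dU)
    ("multiplier", ("dvdy", "y", "dv")),
    ("==", ("dudx", "dvdy")),             # continuity equation (3)
]
known = propagate(constraints, {"du": 0, "x": 0, "y": 1})
print(known)
```

Starting from O(dU) = O(x) = 1 and O(y) = delta, the propagator first fixes O(dU/dx) = 1, equates it to O(dV/dy) through continuity, and finally concludes O(dV) = delta.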
For models involving a few scale parameters, such as our model problem, the relationship can be determined by relatively simple algebraic manipulations. But for quantities involving three or more scale parameters, the algebra can be quite complicated. A simpler inference technique is to represent the order relationships explicitly in a directed graph whose nodes are quantities and whose edges are labeled order relations, and to use a breadth-first search to find paths between quantities. The idea is similar to Simmons' graph search in a quantity lattice [Simmons, 1986], but we generalize it to include symbolic factors in the order relations.
Let's look at an example (Fig. 4a). We have 4 quantities: A, B, C, and D. Assume δ is a small parameter. The following relations are also known: (1) O(A) = O(B), (2) O(B) = δO(D), and (3) O(A) = δO(C). To show that O(C) = O(D), we find the shortest path between them, collecting the symbolic factor of each edge of the path. The symbolic factors are divided into two groups: the <<-factors and the >>-factors, depending on whether the edge is labeled << or >>. In the example, the <<-factors consist of one factor δ, while the >>-factors consist of one factor 1/δ.
The inference procedure can also handle partial information. For instance, in the graph shown in Fig. 4b, it will correctly conclude that E >> H even if it is not told what the symbolic factor of the edge F >> G is.
Figure 4: Graph search to determine order relations.
Inequality Bounder
The constraint propagator and the graph searcher are fast, but they cannot determine more subtle ordering relationships. For instance, given δ² = O(1/Re) and δ << 1, they cannot deduce that (1/Re) x (1/δ) << 1. This problem in its general form is equivalent to the satisfiability of a set of inequality constraints.
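The labeled-graph search can be sketched as follows. This is our rendering, not the paper's code: an edge (u, v, e) asserts O(u) = δᵉ O(v), traversing an edge backwards negates the exponent, and a path total of 0 means the two quantities are of the same order.

```python
from collections import deque

def order_exponent(edges, src, dst):
    """Breadth-first search accumulating delta exponents along a path."""
    adj = {}
    for u, v, e in edges:
        adj.setdefault(u, []).append((v, e))
        adj.setdefault(v, []).append((u, -e))  # reverse edge inverts the factor
    seen, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return seen[u]
        for v, e in adj.get(u, []):
            if v not in seen:
                seen[v] = seen[u] + e
                queue.append(v)
    return None  # no known relation

# Fig. 4a: O(A) = O(B), O(B) = delta*O(D), O(A) = delta*O(C)
edges = [("A", "B", 0), ("B", "D", 1), ("A", "C", 1)]
print(order_exponent(edges, "C", "D"))  # 0, i.e. O(C) = O(D)
```

(In a graph with several paths the exponent need not be path-independent; this sketch simply reports the first path found, which suffices for the tree-like example of Fig. 4a.)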
To solve this problem, we use a version of the sup-inf bounding algorithm first proposed by [Bledsoe, 1975] and extended by [Brooks, 1981] and [Sacks, 1987] to deal with nonlinear inequalities. Our algorithm is simpler because there is no need to deal with nonmonotonic functions such as the trigonometric functions.

Simplification Algorithm

The purpose of the simplification algorithm is to search for all self-consistent simplified models corresponding to a detailed input model. A simplified model is self-consistent if the terms neglected are consistent with the dominant balance assumptions, and it contains the essential terms specified by the input. The algorithm determines the maximal sets for each momentum equation, balances all possible pairs of maximal sets, and eliminates the inconsistent ones. It terminates when each momentum equation has only one maximal set. The principal steps of simplification are:

1. If the model has no unsimplified momentum equation, then return the model.
2. Otherwise, pick the first unsimplified momentum equation and consider all possible pairwise dominant balances.
3. Propagate the effects of the dominant balance and record any assumptions made on parameters due to the balance.
4. If the resulting model is self-consistent, call simplification recursively on it. Otherwise, return nil.

The algorithm will terminate because during each call of simplification, the number of maximal sets is reduced by at least one. So each recursive call will return either a simplified model or nil if the partially simplified model is not self-consistent.

Performance Trace

The following script shows how the program produces the boundary layer approximation for our model problem. The problem generates 60 quantities and 65 constraints; it takes about 60 secs real time on a Sparc 330. The program builds model-1 according to the input description. Each momentum equation has three maximal sets.
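The recursion in steps 1-4 can be sketched abstractly as follows. This is a simplified illustration, not the paper's Lisp code: equations are just lists of term names, the consistency check is a stand-in callback (the real test uses the constraint machinery above and records parameter assumptions), and for brevity the sketch returns the first self-consistent simplification rather than collecting all of them.

```python
from itertools import combinations

def simplify(model, consistent):
    """Recursive dominant-balance search (steps 1-4, abstracted).
    model: list of equations, each a list of term names."""
    eq_idx = next((i for i, eq in enumerate(model) if len(eq) > 2), None)
    if eq_idx is None:                               # step 1: fully simplified
        return model if consistent(model) else None
    for pair in combinations(model[eq_idx], 2):      # step 2: pairwise balances
        new_model = list(model)
        new_model[eq_idx] = list(pair)               # step 3: adopt the balance
        if consistent(new_model):                    # (assumption-recording elided)
            result = simplify(new_model, consistent) # step 4: recurse
            if result is not None:
                return result
    return None   # no self-consistent balance exists

# Toy check: only balances that retain the essential 'inertia' term
# count as self-consistent (mimicking an essential-term check).
consistent = lambda m: all('inertia' in eq for eq in m)
print(simplify([['inertia', 'pressure', 'viscous']], consistent))
```

The toy run discards the pressure/viscous balance (it drops the essential term) and returns the first balance that keeps it, just as the trace below rejects the balance lacking the INERTIA term.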
The program simplifies the transverse momentum equation by balancing its maximal sets; there are three possible balances. The first choice - balancing viscous stress and pressure gradient - is not consistent.

> (search-simplifications *model*)
Making <MODEL-2: PRANDTL-BOUNDARY-LAYER>
from <MODEL-1: PRANDTL-BOUNDARY-LAYER>....
Balancing two terms:
D2VDY2/RE (VISCOUS STRESS TRANSVERSE)
DPDY (PRESSURE-GRADIENT)
in TRANSVERSE-MOMENTUM-EQUATION
with 1 parameter assumption:
(<< RE (^ DELTA -2))

Reasoning about Physical Systems 639

The model is not self-consistent because the simplified equations do not contain the essential INERTIA term. The second choice - balancing viscous stress and inertia - generates a consistent model model-3. Since model-3 is not completely simplified, the program goes on to simplify its streamwise equation, which now has two maximal sets. So there is only one balancing choice; the result is a consistent model model-4. The program also finds the correct condition on the Reynolds number.

Making <MODEL-4: PRANDTL-BOUNDARY-LAYER>
from <MODEL-3: PRANDTL-BOUNDARY-LAYER>...
Balancing two terms:
D2UDY2/RE (VISCOUS STRESS TRANSVERSE)
DPDX (PRESSURE-GRADIENT)
in STREAMWISE-MOMENTUM-EQUATION
with 1 parameter assumption:
(= RE (^ DELTA -2)).
<MODEL-4: PRANDTL-BOUNDARY-LAYER> is self-consistent

The final choice of balance for the transverse equation is inconsistent.
Let's check that model-4 has the correct boundary layer equations (equations (5) and (3)):

> (model-simplified-equations model-4)
((U * (D U / D X)) + (V * (D U / D Y)) =
 - (D P / D X) + ((D2 U / D2 Y) / RE))
((D U / D X) + (D V / D Y) = 0)

Evaluation

The program has been tested on several problems including ODEs and PDEs representing flows in turbulent wake and turbulent jet. The turbulent wake problem, for instance, has 89 quantities and 112 constraints; it takes the program about 90 secs real time to find two simplified models.

When does the simplification heuristic fail? There are equations for which balancing two maximal sets does not give any self-consistent approximations. For instance, some ODEs require a 3-term balance because all the pairwise balances are inconsistent. Our algorithm incorporates a systematic search starting from 2-term balances until a self-consistent model is found.

How good are the approximate models? There is no simple answer to this question. It is known that solutions to a self-consistent approximate model derived by dominant balances can be grossly inaccurate. A simple example is an ill-conditioned set of linear algebraic equations, in which a small change in the coefficients can lead to a large change in the solution vector. The situation for PDEs is much worse because, except in rare cases, it is not known whether the approximate model has a solution at all or whether the solution, if it exists, will be unique. The strongest claim one can make seems to be this: An approximate model that is not self-consistent is certainly a poor approximation. In practice, an approximate model is validated by subjecting its predictions to experimental and numerical checks. In fact, there still exists no theorem which speaks to the validity and accuracy of Prandtl's boundary layer approximation, but ninety years of experimental results leave little doubt of its validity and its value.

640 Yip

Conclusion

We have demonstrated how a heuristic simplification procedure can be combined with knowledge of the asymptotic order of functions, the relative importance of terms, and gross physical features of the solution to capture certain aspects of the informal reasoning that applied mathematicians and fluid dynamicists use in finding approximate models - informal because the approximation is done without firm error estimates.
The key to the simplification method is to examine limiting cases where the model becomes singular (i.e., when the naively simplified model has a different qualitative behavior from the original model). This idea of simplification by studying the most singular behaviors is very general: it comprises the core of many powerful approximation and analysis techniques that have proven to be extremely useful in reasoning about behaviors of complicated physical systems.

References

Addanki, S.; Cremonini, R.; and Penberthy, J.S. 1991. Graphs of models. Artificial Intelligence 51.
Bledsoe, W.W. 1975. A new method for proving certain Presburger formulas. In Proceedings IJCAI-75.
Brooks, R.A. 1981. Symbolic reasoning among 3-D models and 2-D images. Artificial Intelligence 17.
Falkenhainer, B. and Forbus, K.D. 1991. Compositional modeling: finding the right model for the job. Artificial Intelligence 51.
Mavrovouniotis, M.L. and Stephanopoulos, G. 1988. Formal order-of-magnitude reasoning in process engineering. Computers & Chemical Engineering 12.
Raiman, Olivier 1991. Order of magnitude reasoning. Artificial Intelligence 51(1).
Sacks, Elisha P. 1987. Hierarchical reasoning about inequalities. In Proceedings AAAI-87.
Simmons, Reid 1986. Commonsense arithmetic reasoning. In Proceedings AAAI-86.
Weld, D.S. 1990. Exaggeration. Artificial Intelligence 43.
Weld, D.S. 1992. Reasoning about model accuracy. Artificial Intelligence 56.
Yih, Chia-shun 1977. Fluid Mechanics. West River Press.
Abduction as Belief Revision

Craig Boutilier and Veronica Becher
Department of Computer Science
University of British Columbia
Vancouver, British Columbia
CANADA, V6T 1Z2
email: cebly,becher@cs.ubc.ca

Abstract

We propose a natural model of abduction based on the revision of the epistemic state of an agent. We require that explanations be sufficient to induce belief in an observation in a manner that adequately accounts for factual and hypothetical observations. Our model will generate explanations that nonmonotonically predict an observation, thus generalizing most current accounts, which require some deductive relationship between explanation and observation. It also provides a natural preference ordering on explanations, defined in terms of normality or plausibility. We reconstruct the Theorist system in our framework, and show how it can be extended to accommodate our predictive explanations and semantic preferences on explanations.

1 Introduction

A number of different approaches to abduction have been proposed in the AI literature that model the concept of abduction as some sort of deductive relation between an explanation and the explanandum, the "observation" it purports to explain (e.g., Hempel's (1966) deductive-nomological explanations). Theories of this type are, unfortunately, bound to the unrelenting nature of deductive inference. There are two directions in which such theories must be generalized. First, we should not require that an explanation deductively entail its observation (even relative to some background theory). There are very few explanations that do not admit exceptions. Second, while there may be many competing explanations for a particular observation, certain of these may be relatively implausible. Thus we require some notion of preference to choose among these potential explanations.
Both of these problems can be addressed using, for example, probabilistic information (Hempel 1966; de Kleer and Williams 1987; Poole 1991; Pearl 1988): we might simply require that an explanation render the observation sufficiently probable and that most likely explanations be preferred. Explanations might thus be nonmonotonic in the sense that α may explain β, but α ∧ γ may not (e.g., P(β|α) may be sufficiently high while P(β|α ∧ γ) may not). There have been proposals to address these issues in a more qualitative manner using "logic-based" frameworks also. Peirce (see Rescher (1978)) discusses the "plausibility" of explanations, as do Quine and Ullian (1970). Consistency-based diagnosis (Reiter 1987; de Kleer, Mackworth and Reiter 1990) uses abnormality assumptions to capture the context dependence of explanations, and preferred explanations are those that minimize abnormalities. Poole's (1989) assumption-based framework captures some of these ideas by explicitly introducing a set of default assumptions to account for the nonmonotonicity of explanations. We propose a semantic framework for abduction that captures the spirit of probabilistic proposals, but in a qualitative fashion, and in such a way that existing logic-based proposals can be represented as well. Our account will take as central subjunctive conditionals of the form A ⇒ B, which can be interpreted as asserting that, if an agent were to believe A it would also believe B. This is the cornerstone of our notion of explanation: if believing A is sufficient to induce belief in B, then A explains B. This determines a strong, predictive sense of explanation. Semantically, such conditionals are interpreted relative to an ordering of plausibility or normality over worlds.
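The probabilistic nonmonotonicity just described is easy to exhibit on a toy joint distribution. The rain/covered/wet numbers below are invented purely for illustration: rain makes wet grass probable, but rain together with a covered lawn does not.

```python
# Worlds are (rain, covered, wet) triples with assumed probabilities.
P = {
    (1, 0, 1): 0.27, (1, 0, 0): 0.03,   # rain, uncovered: usually wet
    (1, 1, 1): 0.01, (1, 1, 0): 0.09,   # rain, covered: usually dry
    (0, 0, 1): 0.06, (0, 0, 0): 0.44,
    (0, 1, 1): 0.01, (0, 1, 0): 0.09,
}

def prob(pred):
    return sum(p for w, p in P.items() if pred(w))

def cond(pred, given):
    # conditional probability P(pred | given)
    return prob(lambda w: pred(w) and given(w)) / prob(given)

wet = lambda w: w[2] == 1
rain = lambda w: w[0] == 1
rain_cov = lambda w: w[0] == 1 and w[1] == 1

print(cond(wet, rain))      # high: P(wet | rain)
print(cond(wet, rain_cov))  # low:  P(wet | rain, covered)
```

With these numbers P(wet | rain) = 0.7 while P(wet | rain ∧ covered) = 0.1, so strengthening the candidate explanation destroys its support for the observation.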
Our conditional logic, described in earlier work as a representation of belief revision and default reasoning (Boutilier 1991; 1992b; 1992c), has the desired nonmonotonicity and induces a natural preference ordering on sentences (hence explanations). In the next section we describe our conditional logics and the necessary logical preliminaries. In Section 3, we discuss the concept of explanation, its epistemic nature, and its definition in our framework. We also introduce the notion of preferred explanations, showing how the same conditional information used to represent the defeasibility of explanations induces a natural preference ordering. To demonstrate the expressive power of our model, in Section 4 we show how Poole's Theorist framework (and Brewka's (1989) extension) can be captured in our logics. This reconstruction explains semantically the non-predictive and paraconsistent nature of explanations in Theorist. It also illustrates the correct manner in which to augment Theorist with a notion of predictive explanation and how one should capture semantic preferences on explanations. These two abilities have until now been unexplored in this canonical abductive framework. We conclude by describing directions for future research, and how consistency-based diagnosis also fits in our system.

642 Boutilier
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

2 Conditionals and Belief Revision

The problem of revising a knowledge base or belief set when new information is learned has been well-studied in AI. One of the most influential theories of belief revision is the AGM theory (Alchourrón, Gärdenfors and Makinson 1985; Gärdenfors 1988). If we take an agent to have a (deductively closed) belief set K, adding new information A to K is problematic if K ⊢ ¬A. Intuitively, certain beliefs in K must be retracted before A can be accepted.
The AGM theory provides a set of constraints on acceptable belief revision functions *. Roughly, using K*_A to denote the belief set resulting when K is revised by A, the theory maintains that the least "entrenched" beliefs in K should be given up and then A added to this contracted belief set. Semantically, this process can be captured by considering a plausibility ordering over possible worlds. As described in (Boutilier 1992b; Boutilier 1992a), we can use a family of logics to capture the AGM theory of revision. The modal logic CO is based on a propositional language augmented with two modal operators □ and ⊡. L_CPL denotes the propositional sublanguage of this bimodal language L_B. The sentence □α is read as usual as "α is true at all equally or more plausible worlds." In contrast, ⊡α is read "α is true at all less plausible worlds." A CO-model is a triple M = (W, ≤, φ), where W is a set of worlds with valuation function φ and ≤ is a plausibility ordering over W. If w ≤ v then w is at least as plausible as v. We insist that ≤ be transitive and connected (that is, either w ≤ v or v ≤ w for all w, v). CO-structures consist of a totally ordered set of clusters of worlds, where a cluster is simply a maximal set of worlds C ⊆ W such that w ≤ v for each w, v ∈ C (that is, no extension of C enjoys this property). This is evident in Figure 1(b), where each large circle represents a cluster of equally plausible worlds. Satisfaction of a modal formula at w is given by:

1. M ⊨_w □α iff for each v such that v ≤ w, M ⊨_v α.
2. M ⊨_w ⊡α iff for each v such that not v ≤ w, M ⊨_v α.

We define several new connectives as follows: ◇α =df ¬□¬α; ⬦α =df ¬⊡¬α; ⊟α =df □α ∧ ⊡α; and ⟐α =df ¬⊟¬α. It is easy to verify that these connectives have the following truth conditions: ◇α (⬦α) is true at a world if α holds at some more plausible (less plausible) world; ⊟α (⟐α) holds iff α holds at all (some) worlds, whether more or less plausible.
The modal logic CT4O is a weaker version of CO, where we weaken the condition of connectedness to simple reflexivity. This logic is based on models whose structure is that of a partially ordered set of clusters (see Figure 1(a)). Both logics can be extended by requiring that the set of worlds in a model include every propositional valuation (so that every logically possible state of affairs is possible).

Figure 1: CT4O and CO models

The corresponding logics are denoted CO* and CT4O*. Axiomatizations for all logics may be found in (Boutilier 1992b; Boutilier 1992a). For a given model, we define the following notions. We let ||α|| denote the set of worlds satisfying formula α (and also use this notion for sets of formulae K). We use min(α) to denote the set of most plausible α-worlds.¹ The revision of a belief set K can be represented using CT4O- or CO-models that reflect the degree of plausibility accorded to worlds by an agent in such a belief state. To capture revision of K, we insist that any such K-revision model be such that ||K|| = min(⊤); that is, ||K|| forms the (unique) minimal cluster in the model. This reflects the intuition that all and only K-worlds are most plausible (Boutilier 1992b). The CT4O-model in Figure 1(a) is a K-revision model for K = Cn(¬A, B), while the CO-model in Figure 1(b) is suitable for K = Cn(¬A). To revise K by A, we construct the revised set K*_A by considering the set min(A) of most plausible A-worlds in M. In particular, we require that ||K*_A|| = min(A); thus B ∈ K*_A iff B is true at each of the most plausible A-worlds. We can define a conditional connective ⇒ such that A ⇒ B is true in just such a case:

(A ⇒ B) =df ⊟(A ⊃ ◇(A ∧ □(A ⊃ B)))

Both models in Figure 1 satisfy A ⇒ B, since B holds at each world in the shaded regions, min(A), of the models. Using the Ramsey test for acceptance of conditionals (Stalnaker 1968), we equate B ∈ K*_A with M ⊨ A ⇒ B.
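On a finite model with a total (CO-style) plausibility ordering, the Ramsey-test reading of A ⇒ B reduces to checking that B holds at every minimally ranked A-world. The sketch below is an illustration under that assumption; the rank function encodes an invented ordering in the spirit of Figure 1(b), with K = Cn(¬A) and the most plausible A-worlds satisfying B.

```python
from itertools import product

# Worlds are valuations over {A, B}; rank gives plausibility
# (0 = most plausible, so the rank-0 worlds are ||K||).
worlds = list(product([0, 1], repeat=2))   # (A, B) pairs

def rank(w):
    a, b = w
    if not a:
        return 0          # the agent believes -A
    return 1 if b else 2  # among A-worlds, A & B is more plausible

def conditional(ant, cons):
    """A => B: cons holds at all minimally ranked ant-worlds."""
    ant_worlds = [w for w in worlds if ant(w)]
    if not ant_worlds:
        return True       # vacuously satisfied
    m = min(rank(w) for w in ant_worlds)
    return all(cons(w) for w in ant_worlds if rank(w) == m)

A = lambda w: w[0] == 1
B = lambda w: w[1] == 1
print(conditional(A, B))                             # True: B is in K*_A
print(conditional(lambda w: A(w) and not B(w), B))   # False
```

The first check says revising by A yields belief in B; the second shows the expected nonmonotonicity, since strengthening the antecedent to A ∧ ¬B defeats the conditional.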
Indeed, for both models we have that K*_A = Cn(A, B). If the model in question is a CO*-model then this characterization of revision is equivalent to the AGM model (Boutilier 1992b). Simply using CT4O*, the model satisfies all AGM postulates (Gärdenfors 1988) but the eighth. Properties of this conditional logic are described in Boutilier (1990; 1991). We briefly describe the contraction of K by ¬A in this semantic framework. To retract belief in ¬A, we adopt the belief state determined by the set of worlds ||K|| ∪ min(A). The belief set K⁻_¬A does not contain ¬A, and this operation captures the AGM model of contraction. In Figure 1(a) K⁻_¬A = Cn(B), while in Figure 1(b) K⁻_¬A = Cn(A ⊃ B). A key distinction between CT4O- and CO-models is illustrated in Figure 1: in a CO-model, all worlds in min(A) must be equally plausible, while in CT4O this need not be the case. Indeed, the CT4O-model shown has two maximally plausible sets of A-worlds (the shaded regions), yet these are incomparable. We denote the set of such incomparable subsets of min(A) by Pl(A), so that min(A) = ∪Pl(A).² Taking each such subset to be a plausible revised state of affairs rather than their union, we can define a weaker notion of revision using the following connective. It reflects the intuition that at some element of Pl(A), C holds:

(A → C) =df ⊟¬A ∨ ⟐(A ∧ □(A ⊃ C))

The model in Figure 1(a) shows the distinction: it satisfies neither A ⇒ C nor A ⇒ ¬C, but both A → C and A → ¬C. There is a set of comparable most plausible A-worlds that satisfies C and one that satisfies ¬C. Notice that this connective is paraconsistent in the sense that both C and ¬C may be "derivable" from A, but C ∧ ¬C is not. However, → and ⇒ are equivalent in CO, since min(A) must lie within a single cluster.

¹We assume, for simplicity, that such a (limiting) set exists for each α ∈ L_CPL, though the following technical developments do not require this (Boutilier 1992b).

Representation and Reasoning 643
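Contraction admits the same kind of finite sketch: the contracted belief state is determined by ||K|| together with min(A), and a sentence survives contraction iff it holds at all of those worlds. The ranks below are an assumed example chosen so that K = Cn(¬A), as in the Figure 1(b) discussion.

```python
from itertools import product

# To retract -A, keep exactly the sentences true at ||K|| plus min(A).
worlds = list(product([0, 1], repeat=2))   # (A, B) valuations

def rank(w):
    a, b = w
    if not a:
        return 0          # belief set K = Cn(-A)
    return 1 if b else 2  # most plausible A-worlds satisfy B

def contract_not_A():
    k_worlds = [w for w in worlds if rank(w) == 0]          # ||K||
    a_worlds = [w for w in worlds if w[0] == 1]
    m = min(rank(w) for w in a_worlds)
    return k_worlds + [w for w in a_worlds if rank(w) == m]  # + min(A)

def believed(prop, ws):
    return all(prop(w) for w in ws)

ws = contract_not_A()
not_A   = lambda w: w[0] == 0
A_imp_B = lambda w: w[0] == 0 or w[1] == 1
print(believed(not_A, ws))     # False: -A has been retracted
print(believed(A_imp_B, ws))   # True:  the contracted set is Cn(A -> B)
```

The run confirms the claim made for Figure 1(b): after contracting by ¬A, the agent no longer believes ¬A but retains A ⊃ B.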
Finally, we define the plausibility of a proposition. A is at least as plausible as B just when, for every B-world w, there is some A-world that is at least as plausible as w. This is expressed in L_B as ⊟(B ⊃ ◇A). If A is (strictly) more plausible than B, then as we move away from ||K||, we will find an A-world before a B-world; thus, A is qualitatively "more likely" than B. In each model in Figure 1, A ∧ B is more plausible than A ∧ ¬B.

3 Epistemic Explanations

Often explanations are postulated relative to some background theory, which together with the explanation entails the observation. Our notion of explanation will be somewhat different than the usual ones. We define an explanation relative to the epistemic state of some agent (or program). An agent's beliefs and judgements of plausibility will be crucial in its evaluation of what counts as a valid explanation (see Gärdenfors (1988)). We assume a deductively closed belief set K along with some set of conditionals that represent the revision policies of the agent. These conditionals may represent statements of normality or simply subjunctives (below). There are two types of sentences that we may wish to explain: beliefs and non-beliefs. If β is a belief held by the agent, it requires a factual explanation, some other belief α that might have caused the agent to accept β. This type of explanation is clearly crucial in most reasoning applications. An intelligent program will provide conclusions of various types to a user; but a user should expect a program to be able to explain how it reached such a "belief," to justify its reasoning. The explanation should clearly be given in terms of other (perhaps more fundamental) beliefs held by the program. This applies to advice-systems, intelligent databases, tutorial systems, or a robot that must explain its actions. A second type of explanation is hypothetical.

²Pl(A) = {min(A) ∩ C : C is a cluster}.
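In a totally ordered model the comparison ⊟(B ⊃ ◇A) reduces to comparing minimal ranks: A is at least as plausible as B iff the most plausible A-world is ranked no worse than the most plausible B-world. A small sketch, with an invented four-world example matching the A ∧ B versus A ∧ ¬B claim:

```python
def min_rank(prop, worlds, rank):
    rs = [rank(w) for w in worlds if prop(w)]
    return min(rs) if rs else float('inf')   # unsatisfiable: maximally implausible

def at_least_as_plausible(a, b, worlds, rank):
    """Every b-world has an a-world at least as plausible (total order)."""
    return min_rank(a, worlds, rank) <= min_rank(b, worlds, rank)

# Worlds are (A, B) pairs; the -A worlds are most plausible, then A & B.
worlds = [(1, 1), (1, 0), (0, 1), (0, 0)]
rank = {(0, 1): 0, (0, 0): 0, (1, 1): 1, (1, 0): 2}.get

AB  = lambda w: w == (1, 1)   # A & B
AnB = lambda w: w == (1, 0)   # A & -B
print(at_least_as_plausible(AB, AnB, worlds, rank))   # True
print(at_least_as_plausible(AnB, AB, worlds, rank))   # False
```

Since the comparison is strict in one direction only, A ∧ B is qualitatively "more likely" than A ∧ ¬B in this model, as in Figure 1.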
Even if β is not believed, we may want a hypothetical explanation for it, some new belief the agent could adopt that would be sufficient to ensure belief in β. This counterfactual reading turns out to be quite important in AI, for instance, in diagnosis tasks (see below), planning, and so on (Ginsberg 1986). For example, if A explains B in this sense, it may be that ensuring A will bring about B. If α is to count as an explanation of β in this case, we must insist that α is also not believed. If it were, it would hardly make sense as a predictive explanation, for the agent has already adopted belief in α without committing to β. This leads us to the following condition on epistemic explanations: if α is an explanation for β then α and β must have the same epistemic status for the agent. In other words, α ∈ K iff β ∈ K and ¬α ∈ K iff ¬β ∈ K.³ Since our explanations are to be predictive, there has to be some sense in which α is sufficient to cause acceptance of β. On our interpretation of conditionals (using the Ramsey test), this is the case just when the agent believes the conditional α ⇒ β. So for α to count as an explanation of β (in this predictive sense, at least) this conditional relation must hold.⁴ In other words, if the explanation were believed, so too would be the observation. Unfortunately, this conditional is vacuously satisfied when β is believed, once we adopt the requirement that α be believed too. Any α ∈ K is such that α ⇒ β; but surely arbitrary beliefs cannot count as explanations. To determine an explanation for some β ∈ K, we want to (hypothetically) suspend belief in β and, relative to this new belief state, evaluate the conditional α ⇒ β. This hypothetical belief state should simply be the contraction of K by β. The contracted belief set K⁻_β is constructed as described in the last section. We can think of it as the set of beliefs held by the agent before it came to accept β.⁵ In general, the conditionals an agent accepts relative to the contracted set need not bear a strong relation to those in the original set. Fortunately, we are only interested in those conditionals α ⇒ β where α ∈ K. The AGM contraction operation ensures that ¬α ∉ K⁻_β. This means that we can determine the truth of α ⇒ β relative to K⁻_β by examining conditionals in the original belief set. We simply need to check whether ¬β ⇒ ¬α relative to K. This is our final criterion for explanation. If the observation had been absent, so too would the explanation.

³This is at odds with one prevailing view of explanation, which takes only non-beliefs to be valid explanations: to offer a current belief α as an explanation is uninformative; abduction should be an "inference process" allowing the derivation of new beliefs. We take a somewhat different view, assuming that observations are not (usually) accepted into a belief set until some explanation is found and accepted. In the context of its other beliefs, β is unexpected. An explanation relieves this dissonance when it is accepted (Gärdenfors 1988). After this process both explanation and observation are believed. Thus, the abductive process should be understood in terms of hypothetical explanations: when it is realized what could have caused belief in an (unexpected) observation, both observation and explanation are incorporated. Factual explanations are retrospective in the sense that they (should) describe "historically" what explanation was actually adopted for a certain belief. In (Becher and Boutilier 1993) we explore a weakening of this condition on epistemic status. Preferences on explanations (see below) then play a large role in ruling out any explanation whose epistemic status differs from that of the observation.

⁴See below for a discussion of non-predictive explanations.
We assume, for now, the existence of a model M that captures an agent's objective belief set K and its revision policies (e.g., M completely determines K*_A, K⁻_A and accepted conditionals A ⇒ B). When we mention a belief set K, we have in mind also the appropriate model M. All conditionals are evaluated with respect to K unless otherwise indicated. We can summarize the considerations above:

Definition A predictive explanation of β ∈ L_CPL relative to belief set K is any α ∈ L_CPL such that: (1) α ∈ K iff β ∈ K and ¬α ∈ K iff ¬β ∈ K; (2) α ⇒ β; and (3) ¬β ⇒ ¬α.

As a consequence of this definition, we have the following property of factual explanations:

Proposition 1 If α, β ∈ K then α explains β iff α ⇒ β is accepted in K⁻_β.

Thus factual explanations satisfy our desideratum regarding contraction by β. Furthermore, for both factual and hypothetical explanations, only one of conditions (2) or (3) needs to be tested, the other being superfluous:

Proposition 2 (i) If α, β ∈ K then α explains β iff ¬β ⇒ ¬α; (ii) If α, β ∉ K then α explains β iff α ⇒ β.

Figure 2 illustrates both factual and hypothetical explanations. In the first model, wet grass (W) is explained by rain (R), since R ⇒ W holds in that model. Similarly, sprinkler S explains W, as does S ∧ R. Thus, there may be competing explanations; we discuss preferences on these below. Intuitively, α explains β just when β is true at the most plausible situations in which α holds. Thus, explanations are defeasible: W is explained by R; but R together with C (the lawn is covered) does not explain wet grass, for R ∧ C ⇒ ¬W. Notice that R alone explains W, since the "exceptional" condition C is normally false when R (or otherwise), thus need not be stated. This defeasibility is a feature of explanations that has been given little attention in many logic-based approaches to abduction. The second model illustrates factual explanations for W. Since W is believed, explanations must also be believed.
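The three conditions of the definition can be checked mechanically on a finite ranked model. The sketch below encodes a wet-grass ordering loosely modeled on the hypothetical model of Figure 2; the specific ranks, and the hypothetical-case reading (neither α nor β believed), are assumptions made for illustration.

```python
from itertools import product

# alpha predictively explains beta iff (1) alpha and beta share epistemic
# status in K, (2) alpha => beta, and (3) -beta => -alpha.
worlds = list(product([0, 1], repeat=3))      # (R, S, W) valuations

RANK = {(0,0,0): 0, (1,0,1): 1, (0,1,1): 1, (1,1,1): 2}
rank = lambda w: RANK.get(w, 3)               # all other worlds: least plausible

def conditional(ant, cons):
    aw = [w for w in worlds if ant(w)]
    if not aw:
        return True
    m = min(rank(w) for w in aw)
    return all(cons(w) for w in aw if rank(w) == m)

def believed(prop):                           # true at every rank-0 world
    return all(prop(w) for w in worlds if rank(w) == 0)

def explains(alpha, beta):
    neg = lambda p: (lambda w: not p(w))
    status_ok = (believed(alpha) == believed(beta) and
                 believed(neg(alpha)) == believed(neg(beta)))
    return status_ok and conditional(alpha, beta) and \
           conditional(neg(beta), neg(alpha))

R = lambda w: w[0] == 1
W = lambda w: w[2] == 1
print(explains(R, W))                   # True: rain explains wet grass
print(explains(lambda w: not R(w), W))  # False: -R fails the status test
```

Here rain satisfies all three conditions, while ¬R is rejected immediately because it is already believed although W is not.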
R and S are candidates, but only R satisfies the condition on factual explanations: if we give up belief in W, adding R is sufficient to get it back. In other words, ¬W ⇒ ¬R. This does not hold for S because ¬W ⇒ ¬S is false. Notice that if we relax the condition on epistemic status, we might accept S as a hypothetical explanation for the factual belief W. This is explored in (Becher and Boutilier 1993).

Figure 2: Explanations for "Wet Grass"

Semantic Preferences: Predictive explanations are very general, for any α that induces belief in β satisfies our conditions. Of course, some such explanations should be ruled out on grounds of implausibility (e.g., a tanker truck exploding in front of my house explains wet grass). In probabilistic approaches to abduction, one might prefer most probable explanations. In consistency-based diagnosis, explanations with the fewest abnormalities are preferred on the grounds that (say) multiple component failures are unlikely. Preferences can be easily accommodated within our framework. We assume that the β to be explained is not (yet) believed and rank possible explanations for β.⁶ An adopted explanation is not one that simply makes an observation less surprising, but one that is itself as unsurprising as possible. We use the plausibility ranking described in the last section.

Definition If α and α′ both explain β then α is at least as preferred as α′ (written α ≤_p α′) iff M ⊨ ⊟(α′ ⊃ ◇α). The preferred explanations of β are those α such that α′ <_p α holds for no explanation α′.

Preferred explanations are those that are most plausible, those that require the "least" change in belief set K in order to be accepted. Examining the hypothetical model in Figure 2, we see that while R, S and R ∧ S each explain W, R and S are preferred to R ∧ S (I may not know whether my sprinkler was on or it rained, but it's unlikely that my sprinkler was on in the rain). If we want to represent the fact, say, that the failure of fewer components is more plausible than more failures, we simply rank worlds accordingly. Preferred explanations of β are those that predict β and presume as few faults as possible.⁷ We can characterize preferred explanations by appealing to their "believability" given β:

Proposition 3 α is a preferred explanation for β iff M ⊨ ¬(β → ¬α).

In the next section, we discuss the role of → further. This approach to preferred explanations is very general, and is completely determined by the conditionals (or defaults) held by an agent.⁸ We needn't restrict the ordering to, say, counting component failures. It can be used to represent any notion of typicality, normality or plausibility required. For instance, we might use this model of abduction in scene interpretation to "explain" the occurrence of various image objects by the presence of actual scene objects (Reiter and Mackworth 1989). Preferred explanations are those that match the data best. However, we can also introduce an extra level of preference to capture preferred interpretations, those scenes that are most likely in a given domain among those with the best fit. We should point out that we do not require a complete semantic model M to determine explanations. For a given incomplete theory, one can simply use the derivable conditionals to determine derivable explanations and preferences.

⁵We do not require that this must actually be the case.

⁶We adopt the view that an agent, when accepting β, also accepts its most plausible explanation(s). There is no need, then, to rank factual explanations according to plausibility - all explanations in K are equally plausible. In fact, the only explanations in K can be those that are preferred in K⁻_β.
This paper simply concentrates on the semantics of this process. All conditions on explanations can be tested as object-level queries on an incomplete KB. However, should one have in mind a complete ordering of plausibility (as in the next section), these can usually be represented as a compact object-level theory as well (Boutilier 1991). Other issues arise with this semantic notion of explanation. Consider the wet grass example, and the following conditionals: R ⇒ W, S ⇒ W and S ∧ R ⇒ W (note that the third does not follow from the others). We may be in a situation where rain is preferred to sprinkler as an explanation for wet grass (it is more likely). But we might be in a situation where R and S are equally plausible explanations.⁹ We might then have W ⇒ (S ≡ ¬R). That is, S and R are the only plausible "causes" for W (and are mutually exclusive). Notice that S ≡ ¬R is a preferred explanation for W, as is S ∨ R. We say α is a covering explanation for β iff α is a preferred explanation such that β ⇒ α. Such an α represents all preferred explanations for β.¹⁰

Pragmatics: We note that β is always an explanation for itself. Indeed, semantically β is as good as any other explanation, for if one is convinced of this trivial explanation, one is surely convinced of the proposition to be explained.

⁷In consistency-based systems, explanations usually do not predict an observation without adequate fault models (more on this in the concluding section).

⁸Direct statements of belief, relative plausibility, integrity constraints, etc. in L_B may also be in an agent's KB.

⁹We can ensure that R ∧ S is less likely, e.g., by asserting S ⇒ ¬R and R ⇒ ¬S.

¹⁰Space limitations preclude a full discussion (see (Becher and Boutilier 1993)), but we might think of a covering explanation as the disjunction of all likely causes of β in a causal network (Pearl 1988).
There are many circumstances in which such an explanation is reasonable (for instance, explaining the value of a root node in a causal network); otherwise we would require infinite regress or circular explanations. The undesirability of such trivial explanations, in certain circumstances, is not due to a lack of predictive power or plausibility, but rather to their uninformative nature. We think it might be useful to rule out trivial explanations as a matter of the pragmatics of explanation rather than semantics, much like Gricean maxims (but see also Levesque (1989)). But, we note, in many cases trivial (or overly specific) explanations may be desirable. We discuss this and other pragmatic issues (e.g., irrelevance) in the full paper (Becher and Boutilier 1993). We note that in typical approaches to diagnosis this problem does not arise. Diagnoses are usually selected from a pre-determined set of conjectures or component failures. This can be seen as simply another form of pragmatic filtering, and can be applied to our model of abduction (see below).

4 Reconstructing Theorist

Poole's (1989) Theorist system is an assumption-based model of explanation and prediction where observations are explained (or predicted) by adopting certain hypotheses that, together with known facts, entail these observations. We illustrate the naturalness and generality of our abductive framework by recasting Theorist in our model. This shows why Theorist explanations are paraconsistent and non-predictive, how they can be made predictive, and how a natural account of preferred explanation can be introduced to Theorist (and Brewka's (1989) extension of it). Our presentation of Theorist will be somewhat more general than that found in (Poole 1989), but unchanged in essential detail. We assume the existence of a set 𝒟 of defaults, a set of propositional formulae taken to be "expectations," or facts that normally hold (Boutilier 1992c).
We assume 𝒟 is consistent.¹¹ Given a fixed set of defaults, we are interested in what follows from a given (known) finite set of facts ℱ; we use F to denote its conjunction. A scenario for ℱ is any subset D of 𝒟 such that ℱ ∪ D is consistent. An extension of ℱ is any maximal scenario. An explanation of β given ℱ is any α such that {α} ∪ ℱ ∪ D ⊨ β for some scenario D of {α} ∪ ℱ.¹² Finally, β is predicted given ℱ iff ℱ ∪ D ⊨ β for each extension D of ℱ.

In the definition of prediction in Theorist, we find an implicit notion of plausibility: we expect some maximal subset of defaults, consistent with ℱ, to hold. Worlds that violate more defaults are thus less plausible than those that violate fewer. We define a CT4O*-model that reflects this.

Definition For a fixed set of defaults 𝒟, and a possible world (valuation) w, the violation set for w is defined as V(w) = {d ∈ 𝒟 : w ⊨ ¬d}. The Theorist model for 𝒟 is M_𝒟 = ⟨W, ≤, φ⟩, where W and φ are as usual, and ≤ is an ordering of plausibility such that v ≤ w iff V(v) ⊆ V(w).

Thus, M_𝒟 ranks worlds according to the sets of defaults they violate. We note that M_𝒟 is a CT4O*-model, and if 𝒟 is consistent, M_𝒟 has a unique minimal cluster consisting of those worlds that satisfy each default. It should be clear that worlds w, v are equally plausible iff V(w) = V(v), so that each cluster in M_𝒟 is the set of worlds that violate a particular subset D ⊆ 𝒟. The α-worlds minimal in M_𝒟 are just those that satisfy some maximal subset of defaults consistent with α.

Theorem 4 β is predicted given ℱ iff M_𝒟 ⊨ F ⇒ β.

Figure 3: A Theorist Model

¹⁰(cont.) We are currently investigating causal explanations in our conditional framework and how a theory might be used to derive causal influences (Lewis 1973; Goldszmidt and Pearl 1992).
¹¹Nothing crucial depends on this, however.
¹²Theorist explanations are usually drawn from a given set of conjectures, but this is not crucial.

646 Boutilier
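The violation-set construction is easy to operationalize. The sketch below is our own illustration (the atom and default names are assumptions, chosen to echo the car example discussed later): worlds are valuations, V(w) collects the defaults a world falsifies, and v ≤ w is tested by set inclusion of violation sets.

```python
from itertools import product

def worlds(atoms):
    # all valuations, each represented as the frozenset of its true atoms
    return [frozenset(a for a, v in zip(atoms, vals) if v)
            for vals in product([True, False], repeat=len(atoms))]

def violation_set(w, defaults):
    # V(w) = {d in D : w |= ~d}
    return frozenset(name for name, d in defaults.items() if not d(w))

def at_least_as_plausible(v, w, defaults):
    # v <= w  iff  V(v) is a subset of V(w)
    return violation_set(v, defaults) <= violation_set(w, defaults)

# Illustrative defaults over atoms t (key turned), b (battery dead), s (starts):
#   d1: t -> s,   d2: (t & b) -> ~s
defaults = {
    "d1": lambda w: ("t" not in w) or ("s" in w),
    "d2": lambda w: (not ("t" in w and "b" in w)) or ("s" not in w),
}

W = worlds(["t", "b", "s"])
# the unique minimal cluster: worlds violating no default at all
minimal = [w for w in W if not violation_set(w, defaults)]
```

Worlds with incomparable violation sets are simply left unordered here; in M_𝒟 the clusters are exactly the groups of worlds sharing a single violation set.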
Thus, predictions based on ℱ correspond to the belief set obtained when 𝒟 is revised to incorporate ℱ. This is the view of default prediction discussed in (Boutilier 1992c). We now turn our attention to explanations.

Theorist explanations are quite weak, for α explains β whenever there exists any set of defaults that, together with α, entails β. This means that α might explain both β and ¬β. Such explanations are in a sense paraconsistent, for α cannot usually be used to explain the conjunction β ∧ ¬β. Furthermore, such explanations are not predictive: if α explains contradictory sentences, how can it be thought to predict either?

Consider a set of defaults in Theorist which assert that my car will start (S) when I turn the key (T), unless my battery is dead (B). The Theorist model M_𝒟 is shown in Figure 3. Suppose our set of facts ℱ has a single element B. When asked to explain S, Theorist will offer T. When asked to explain ¬S, Theorist will again offer T. If I want my car to start I should turn the key, and if I do not want my car to start I should turn the key. There is certainly something unsatisfying about such a notion of explanation. Such explanations do, however, correspond precisely to weak explanations in CT4O using →.

Theorem 5 α is a Theorist explanation of β given ℱ iff M_𝒟 ⊨ α ∧ F → β.

This illustrates the conditional and defeasible semantic underpinnings of Theorist's weak (paraconsistent) explanations in the conditional framework. In our model, the notion of predictive explanation seems much more natural. In the Theorist model above, there is a possibility that T ∧ B gives S and a possibility that T ∧ B gives ¬S. Therefore, T (given B) explains neither possibility. One cannot use the explanation to ensure belief in the "observation" S. We can use our notion of predictive explanation to extend Theorist with this capability.
Clearly’ predictive explanations in the Theorist model MD give us: finition cy is a predictive explanation for /3 given 3 iff p is predicted (in the Theorist sense) given 3 U {a}. oaem 6 cy is a predictive explanation for ,8 given 3 iff b a A F 3 ,f3 (i.e., ifs 3 U D U {CY} b p for each extension D). Taking those cu-worlds that satisfy as many defaults as ble to be the most plausible or typical cu-worlds, it is cl revising by cv should result in acceptance of those situations, and thus cy should (predictively) explain p iff p holds in each such situation. Such explanations are often more useful than explanations for they suggest suficient conditions cy ill (defeasibly) lead to a desired belief p. Weak expla- nations of the type originally defined in Theorist’ in contrast’ merely suggest conditions that might leac! to ,0. Naturally, given the implicit notion of plausibility deter- 2), we cm characterize preferred explanations in These turn out to be exactly those explanations the violation of as few defaults as possible. nition Let cu, CY’ be predictive explanations for /3 given 3. o is at least as preferred as a! (written CY 53 a’) iff each extension of 3 U {a’} is contained in some extension of3 u {a}. Theorem 7 a 53 (3~’ ifsMv b ~((cx’ A F) > O(a A F)). So the notion of preference defined for our concept of epis- temic explanations induces a preference in Theorist for pre- dictive explanations that are consistent with the greatest sub- sets of defaults; that is, those explanations that are sible or most normal (see Konolige (1992) who similar notion). This embedding into CT40 provides a compelling se- mantic account of Theorist in terms of plausibility and belief revision. But it also shows directions in which Theorist can be naturally extended, in particular’ with predictive ex- planations and with preferences on semantic explanations, notions that have largely been ignored in assumption-based explanation. 
In (Becher and Boutilier 1993) we show how these ideas apply to Brewka's (1989) prioritized extension of Theorist by ordering worlds in such a way that the prioritization relation among defaults is accounted for. If we have a prioritized default theory 𝒟 = 𝒟₁ ∪ ⋯ ∪ 𝒟ₙ, we still cluster worlds according to the defaults they violate; but should w violate fewer high priority defaults than v, even if it violates more low priority defaults, w is considered more plausible than v.

Representation and Reasoning 647

This too results in a CT4O*-model; and prediction, (weak and predictive) explanation, and preference on explanations are all definable in the same fashion as with Theorist. We also show that priorities on defaults, as proposed by Brewka, simply prune away certain weak explanations and make others preferred (possibly adding predictive explanations). For instance, the counterintuitive explanation T above, for S given B, is pruned away if we require that the default T ⊃ S be given lower priority than the default T ∧ B ⊃ ¬S. A model for such a prioritized theory simply makes the world TBS less plausible than TB¬S. We note, however, that such priorities need not be provided explicitly if the Theorist model is abandoned and defaults are expressed directly as conditionals. This preference is derivable in CT4O from the conditionals T ⇒ S and T ∧ B ⇒ ¬S automatically.

5 Concluding Remarks

We have proposed a notion of epistemic explanation based on belief revision, and preferences over these explanations using the concept of plausibility. We have shown how Theorist can be captured in this framework. In (Becher and Boutilier 1993), we show how this model can be axiomatized. We can also capture consistency-based diagnosis in our framework, though it does not usually require that explanations be predictive in the sense we describe.
Instead, consistency-based diagnosis is characterized in terms of "might" counterfactuals, or excuses that make an observation plausible, rather than likely (Becher and Boutilier 1993). Of course, fault models describing how failures are manifested in system behavior make explanations more predictive, in our strong sense. However, the key feature of this approach is not its ability to represent existing models of diagnosis, but its ability to infer explanations, whether factual or hypothetical, from existing conditional (or default) knowledge. We are also investigating the role of causal explanations in abduction, and how one might distinguish causal from non-causal explanations using only conditional information.

Acknowledgements: Thanks to David Poole for helpful comments. This research was supported by NSERC Research Grant OGP0121843.

References

Alchourrón, C., Gärdenfors, P., and Makinson, D. 1985. On the logic of theory change: Partial meet contraction and revision functions. Journal of Symbolic Logic, 50:510–530.

Becher, V. and Boutilier, C. 1993. Epistemic explanations. Technical report, University of British Columbia, Vancouver. Forthcoming.

Boutilier, C. 1991. Inaccessible worlds and irrelevance: Preliminary report. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, pages 413–418, Sydney.

Boutilier, C. 1992a. Conditional logics for default reasoning and belief revision. Technical Report KRR-TR-92-1, University of Toronto, Toronto. Ph.D. thesis.

Boutilier, C. 1992b. A logic for revision and subjunctive queries. In Proceedings of the Tenth National Conference on Artificial Intelligence, pages 609–615, San Jose.

Boutilier, C. 1992c. Normative, subjunctive and autoepistemic defaults: Adopting the Ramsey test. In Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, pages 685–696, Cambridge.

Brewka, G. 1989.
Preferred subtheories: An extended logical framework for default reasoning. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 1043–1048, Detroit.

de Kleer, J., Mackworth, A. K., and Reiter, R. 1990. Characterizing diagnoses. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 324–330, Boston.

de Kleer, J. and Williams, B. C. 1987. Diagnosing multiple faults. Artificial Intelligence, 32:97–130.

Gärdenfors, P. 1988. Knowledge in Flux: Modeling the Dynamics of Epistemic States. MIT Press, Cambridge.

Ginsberg, M. L. 1986. Counterfactuals. Artificial Intelligence, 30(1):35–79.

Goldszmidt, M. and Pearl, J. 1992. Rank-based systems: A simple approach to belief revision, belief update, and reasoning about evidence and actions. In Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, pages 661–672, Cambridge.

Hempel, C. G. 1966. Philosophy of Natural Science. Prentice-Hall, Englewood Cliffs, NJ.

Konolige, K. 1992. Using default and causal reasoning in diagnosis. In Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, pages 509–520, Cambridge.

Levesque, H. J. 1989. A knowledge level account of abduction. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 1061–1067, Detroit.

Lewis, D. 1973. Causation. Journal of Philosophy, 70:556–567.

Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo.

Poole, D. 1989. Explanation and prediction: An architecture for default and abductive reasoning. Computational Intelligence, 5:97–110.

Poole, D. 1991. Representing diagnostic knowledge for probabilistic Horn abduction. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, pages 1129–1135, Sydney.

Quine, W. and Ullian, J. 1970. The Web of Belief.
Random House, New York.

Reiter, R. 1987. A theory of diagnosis from first principles. Artificial Intelligence, 32:57–95.

Reiter, R. and Mackworth, A. K. 1989. A logical framework for depiction and image interpretation. Artificial Intelligence, 41:125–155.

Rescher, N. 1978. Peirce's Philosophy of Science: Critical Studies in his Theory of Induction and Scientific Method. University of Notre Dame Press, Notre Dame.

Stalnaker, R. C. 1968. A theory of conditionals. In Harper, W., Stalnaker, R., and Pearce, G., editors, Ifs, pages 41–55. D. Reidel, Dordrecht, 1981.
Revision By Conditional Beliefs

Craig Boutilier
Univ. of British Columbia
Dept. of Computer Science
Vancouver, BC V6T 1Z2, CANADA
cebly@cs.ubc.ca

Abstract

Both the dynamics of belief change and the process of reasoning by default can be based on the conditional belief set of an agent, represented as a set of "if-then" rules. In this paper we address the open problem of formalizing the dynamics of revising this conditional belief set by new if-then rules, be they interpreted as new default rules or new revision policies. We start by providing a purely semantic characterization, based on the semantics of conditional rules, which induces logical constraints on any such revision process. We then introduce logical (syntax-independent) and syntax-dependent techniques, and provide a precise characterization of the set of conditionals that hold after the revision. In addition to formalizing the dynamics of revising a default knowledge base, this work also provides some of the necessary formal tools for establishing the truth of nested conditionals, and attacking the problem of learning new defaults.

Consider a child using a single default, "typically birds fly", to predict the behavior of birds. Upon learning of the class of penguins and their exceptional nature, she considers revising her current information about birds to include the information that penguins are birds yet "typically penguins do not fly". This process is different from that usually modeled in approaches to nonmonotonic reasoning and belief revision, where upon discovering that Tweety is a (nonflying) penguin she simply retracts her previous belief that Tweety does fly. Instead, the example above addresses the issue of revising the set of conditional beliefs, namely, the default rules that guide the revision of our factual beliefs. In this paper we are concerned with the dynamics of such conditional beliefs.
Moisés Goldszmidt
Rockwell International
444 High Street
Palo Alto, CA 94301, U.S.A.
moises@rpal.rockwell.com

Our objective is to characterize how the conditional information in a knowledge base evolves due to the incorporation of the new conditionals, which rules should be given up in case of inconsistency, and what principles guide this process.¹

One well-known theory addressing the dynamics of factual beliefs is that proposed by Alchourrón, Gärdenfors and Makinson (1985; 1988). The AGM theory takes epistemic states to be deductively closed sets of (believed) sentences and characterizes how a rational agent should change its set K of beliefs. This is achieved with postulates constraining revision functions *, where K*_A represents the belief set that results when K is revised by A. Unfortunately, the AGM theory does not provide a calculus with which one can realize the revision process or even specify the content of an epistemic state (Boutilier 1992b; Doyle 1991; Nebel 1991).

Recent work (Boutilier 1992b; Goldszmidt 1992) shows that AGM revision can be captured by assuming the agent has a knowledge base (KB) containing subjunctive conditionals of the form A → B (where A and B are objective formulae). These conditionals define the agent's belief set and guide the revision process via the Ramsey test (Stalnaker 1968): A → B is accepted iff revision by A results in a belief in B. Such conditionals may be given a probabilistic interpretation (Goldszmidt 1992): each A → B is associated with a conditional probability statement arbitrarily close to one. They may also be interpreted as statements in a suitable modal logic (Boutilier 1992a). The corresponding logics (and indeed semantics) are identical (Boutilier 1992a), and furthermore there is a strong relation between these conditionals and conditional default rules (Boutilier 1992c; Goldszmidt and Pearl 1992a).

¹We will not address the important question of why and when an agent decides to revise its conditional beliefs or defaults.
The AGM theory has two crucial limitations. First, the conditionals (or revision policies) associated with K, which determine the form of K*_A, provide no guidance for determining the conditionals accepted in K*_A itself. The theory only determines the new factual beliefs held after revision. Even if conditionals are contained in K, the AGM theory cannot suggest which conditionals should be retained or retracted in the construction of K*_A. Subsequent revisions of K*_A can thus be almost arbitrary. Second, the theory provides no mechanism for revising a belief set with new conditionals. Thus, the revision policies of an agent cannot, in general, be changed.² This paper provides a solution to this second problem, and extends our recent work on a solution to the first problem (Boutilier 1993; Goldszmidt and Pearl 1992b).

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

In this paper we focus on a particular model of conditional revision that extends the propositional natural revision introduced by Boutilier (1993). The natural revision model addresses the problem of determining new conditional beliefs after revision by factual beliefs, and extends the notion of minimal change (characteristic of the AGM theory) to the conditional component of a KB. Thus, when a factual revision is applied to KB, the revised KB′ contains as much of the conditional information from KB as possible.

²Surprisingly, these two issues have remained largely unexplored, due largely to the Gärdenfors (1988) triviality result, which points to difficulties with the interpretation of conditional belief sets. But these can be easily circumvented (Boutilier 1992c).
The exten- sion to conditional revision presented here preserves these properties and possesses the crucial property that the beliefs resulting from any sequence of (conditional or factual) up- dates can be determined using only properties of the original ranking, and tests involving simple (unnested) conditionals.3 A model for revising H3 with new conditional belief (e.g., a rule C + D) is crucial for a number of reasons. The prob- lem of truth conditions for nested conditionals is subsumed by this more general problem. The semantics of conditionals with arbitrary nesting requires an account of revision by YEW conditionalinformution. To test the truth of (A + B) -+ 6, we must first revise KZ3 by A + B and then test the status of C (Goldszmidt and Pearl 1992b). Also, it is clear that our beliefs do not merely change when we learn new factual information. We need a model that accounts for updating our belief set with new conditional probabilities and new subjunctive conditionals to guide the subsequent revision of beliefs. Given the strong equivalence between condition- als of the type described here and conditional default rules (Boutilier 1992c; Goldszmidt and Pearl 1992a), a model of conditional revision provides an account of updating a IKB with new default rules. Any specification of how an agent is to learn new defaults must describe how an agent is to incorporate a new rule into its corpus of existing knowledge. Hence, the process we study in this paper is crucial for pro- viding a semantic core for learning new default information. We first review the basic concepts underlying belief revi- sion. We then describe the basics of conditional belief revi- sion by presenting a set of operations on ranked-models, and an important representation theorem. Finally, we explore a syntax-independent and a syntax-dependent approach to the conditional revision of a KB. 
Propositional Natural Revision

In this section we briefly review a semantic account of belief revision (we refer the reader to (Gärdenfors 1988; Goldszmidt and Pearl 1992b; Boutilier 1992b) for details). We assume the existence of a deductively closed belief set K over a classical propositional language L_CPL. Revising this belief set with a new proposition A is problematic when K ⊨ ¬A, for simply adding the belief A will cause inconsistency. To accommodate A, certain beliefs must be given up before A is added. The AGM theory of revision provides a set of constraints on revision functions * that map belief sets K into revised belief sets K*_A. Any theory of revision also provides a theory of conditionals if we adopt the Ramsey test. This test states that one should accept the conditional "If A then B" just when B ∈ K*_A.

A key representation result for this theory shows that changes can be modeled by assuming an agent has an ordering of epistemic entrenchment over beliefs: revision always retains more entrenched propositions in preference to less entrenched ones. Grove (1988) shows that entrenchment can be modeled semantically by an ordering of worlds. This is pursued by Boutilier (1992b), who presents a modal logic and semantics for revision.

³A second method of revision is the model of J-conditionalization (Goldszmidt and Pearl 1992b): when KB is updated with a new fact A, the revised KB′ is determined by Bayesian conditionalization, giving rise to a qualitative abstraction of probability theory (Adams 1975; Goldszmidt 1992). This mechanism preserves the (qualitative) conditional probabilities in KB as much as possible and thus guarantees that the relative strength of the conditionals also remains constant. The extension of J-conditionalization to the conditional revision case is explored in the full version of the paper (Boutilier and Goldszmidt 1993).
A revision model M = ⟨W, ≤, φ⟩ consists of a set of worlds W (assigned valuations by φ) and a plausibility ordering ≤ over W. If v ≤ w then v is at least as plausible as w. We insist that ≤ be transitive and connected (so w ≤ v or v ≤ w for all v, w). We denote by ||A|| the set of worlds in M satisfying A (those w such that M ⊨_w A). We define the set of most plausible A-worlds to be those worlds in ||A|| minimal in ≤; so min(M, A) is just

min(M, A) = {w ∈ ||A|| : w ≤ v for all v ∈ ||A||}

We assume that all models are smooth in the sense that min(M, A) ≠ ∅ for all (satisfiable) A ∈ L_CPL.⁴ The objective belief set K of a model M is the set of α ∈ L_CPL such that min(M, ⊤) ⊆ ||α|| (those α true at each most plausible world). Such α are believed by the agent. These objective or factual beliefs capture the agent's judgements of true facts in the world. They should be contrasted with the conditional beliefs of an agent, described below.

To capture the revision of a belief set K, we define a K-revision model to be any revision model such that min(M, ⊤) = ||K||. That is, all and only those worlds satisfying the belief set are most plausible. When we revise K by A, we must end up with a new belief set that includes A. Given our ordering, we simply require that the new belief set correspond to the set of most plausible A-worlds. We can define the truth conditions for a conditional connective as

M ⊨ A → B iff min(M, A) ⊆ ||B||    (1)

Such conditional beliefs characterize the revision policies, hypothetical beliefs or defaults of an agent. Equating A → B with B ∈ K*_A, this definition of revision characterizes the same space of revision functions as the AGM theory (Boutilier 1992b).

The AGM theory and the semantics above show how one might determine a new objective belief set K*_A from a given K-revision model; but it provides no hint as to what new conditionals should be held. To do so requires that a new revision model, suitable for K*_A, be specified. Natural revision, proposed by Boutilier (1993), does just this. Given a K-revision model M, natural revision specifies a new model M*_A suitable for the revision of K*_A (i.e., a K*_A-revision model). Roughly, this model can be constructed by "shifting" the set min(M, A) to the bottom of the ordering, leaving all other worlds in the same relative relation. This extends the notion of minimal change to the relative plausibility of worlds. To believe A, certainly K*_A-worlds must become most plausible, but nothing else need change (Boutilier 1993). Hence, natural revision constructs a new ranking to reflect new objective beliefs. With such a ranking one can then determine the behavior of subsequent objective revisions. But no existing model of revision accounts for revision of a ranking to include new conditionals. In the next section we extend natural revision so that new conditional information can be incorporated explicitly in a model.

Conditional Belief Revision: Revising a Model

Given a revision model M, we want to define a new model M*_{A→B} that satisfies A → B but changes the plausibility ordering in M as little as possible. We do this in two stages: first, we define the contraction of M so that the "negation" A → ¬B is not satisfied; then we define the expansion of this new model to accommodate the conditional A → B. Let M = ⟨W, ≤, φ⟩.

Definition 1 The natural contraction operator − maps M into M−_{A→B}, for any simple conditional A → B, where M−_{A→B} = ⟨W, ≤′, φ⟩, and:
1. if v, w ∉ min(M, A ∧ ¬B) then v ≤′ w iff v ≤ w
2. if w ∈ min(M, A ∧ ¬B) then:
(a) w ≤′ v iff u ≤ v for some u ∈ min(M, A); and
(b) v ≤′ w iff v ≤ u for some u ∈ min(M, A)

Figure 1 illustrates this process in the principal case, showing how the model M−_{A→B} is constructed when M ⊨ A → B. Clearly, to "forget" A → B we must construct a model where certain minimal A-worlds do not satisfy B. If M satisfies A → B, we must ensure that certain A ∧ ¬B-worlds become at least as plausible as the minimal A-worlds, thus ensuring that A → B is no longer satisfied. Natural contraction does this by making the most plausible A ∧ ¬B-worlds just as plausible as the most plausible A-worlds. Simply put, the minimal A ∧ ¬B-worlds (the light-shaded region) are shifted to the cluster containing the minimal A-worlds (the dark-shaded region). We have the following properties:⁵

Proposition 1 Let M be a revision model. (1) M−_{A→B} ⊨ A ↛ B; (2) If M ⊨ A ↛ B then M−_{A→B} = M; and (3) If M ⊨ A → B then M−_{A→B} ⊨ A ↛ B ∧ A ↛ ¬B.

Theorem 2 Let M−_A denote the natural propositional contraction of M by (objective belief) A (as defined in (Boutilier 1993)). Then M−_A = M−_{⊤→A}.

⁴Hence there exist most plausible A-worlds. This is not required, but the assumption does not affect the equivalence below.
⁵We let α ↛ β stand for ¬(α → β).
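The ordering semantics above is easy to animate on a finite model. The sketch below is our own illustration (names and the two-atom example are assumptions; integer ranks, with 0 most plausible, stand in for the connected preorder ≤): it computes min(M, A), tests conditionals via Equation (1), and performs the "shift min(M, A) to the bottom" step of natural propositional revision.

```python
from itertools import product

ATOMS = ["a", "b"]
WORLDS = [frozenset(x for x, v in zip(ATOMS, vals) if v)
          for vals in product([True, False], repeat=len(ATOMS))]

def min_worlds(rank, prop):
    # min(M, prop): the lowest-ranked (most plausible) prop-worlds
    hits = [w for w in WORLDS if prop(w)]
    best = min(rank[w] for w in hits)
    return {w for w in hits if rank[w] == best}

def sat_conditional(rank, ant, cons):
    # Equation (1): M |= ant -> cons  iff  min(M, ant) is a subset of ||cons||
    return all(cons(w) for w in min_worlds(rank, ant))

def natural_revision(rank, prop):
    # shift min(M, prop) to the bottom of the ordering; every other world
    # keeps its relative position
    shifted = min_worlds(rank, prop)
    return {w: (0 if w in shifted else rank[w] + 1) for w in WORLDS}

a = lambda w: "a" in w

# a K-revision model whose belief set contains a and b
rank = {frozenset({"a", "b"}): 0, frozenset({"a"}): 1,
        frozenset({"b"}): 1, frozenset(): 2}
assert sat_conditional(rank, lambda w: True, a)   # a is believed
r2 = natural_revision(rank, lambda w: not a(w))   # revise by ~a
assert sat_conditional(r2, lambda w: True, lambda w: not a(w))
```

Smoothness is trivial here since the model is finite.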
If M satisfies A --) B, we must ensure that certain A A 1 B-worlds become at least as plausible as the minimal A-worlds, thus ensuring that A -+ B is no longer satisfied. Natural contraction does this by making the most plausible A A YB-worlds just as plausible as the most plausible A-worlds. Simply put, the minimal A A YB-worlds (the light-shaded region) are shifted to the cluster containing the minimal A-worlds (the dark-shaded region). We have the following properties:5 Proposition 1 Let M be a revision model. (1) MibB k A --+ B; (2) Zf M k A --+ B then MidB = M; and (3)IfM kA-tlBthenM;*, FAA BAA+B. Theorem 2 Let M; denote the natural propositional con- traction of M by (objective belief3 A (as defined in (Boutilier 1993)). 7’hen M; = Mf+A. ‘We let a + /3 stand for -)(a + /I). Thus, propositional contraction is a special case of condi- tional contraction. We define the expansion of M by A + B to be tie model Mi+, constructed by making the minimal changes to M required to accept A + B. While we do not require that M k A + 1B in the following definition, we will only use this definition of expansion for such models. finition2 The natural evansion operator + for any simple conditional A ---, B, where MAfB = (W I’, cp), and: 1. ifv$!min(M,AA-B)thenw<‘viffw+ 2. if v E min( M, A A 1B) then: (a) if w E min( M, A A 1B) then w <’ V; and (b)ifw$!min(M,AAlB)thenw2viffw<vand there is no u E min(M, A A B) such that u 5 w Figure 1 illustrates this process in the principle case, showing how the model M,f, is constructed when M k A + B. Clearly, to believe A + B we must construct a model where all minimal A-worlds satisfy B. If M fails to satisfy A + B, we must ensure that the minimal A A TB-worlds become less plausible than the minimal A-worlds, thus ensuring that A ---+ B is satisfied. Natural expansion does this by making the most plausible A A 1 B-worlds (the -shaded region) less plausible than the most plausible A-worlds (the light- shaded region). 
This leaves us with A → B, but preserves the relative plausibility ranking of all other worlds. In particular, while the set of minimal A ∧ ¬B-worlds becomes less plausible than those worlds with which it shared equal plausibility in M, its relationship to more or less plausible worlds is unchanged. Once again, the idea is that the conditional belief set induced by A should contain only B-worlds, and that all other conditionals should remain unchanged to the greatest extent possible.

Proposition 3 Let M be a revision model such that M ⊨ A ↛ ¬B. (1) M+_{A→B} ⊨ A → B; and (2) If M ⊨ A → B then M+_{A→B} = M.

We can now define revision by a conditional A → B. Briefly, to accept such a conditional we first "forget" A → ¬B and then "add" A → B.

Definition 3 The natural revision operator * maps M into M*_{A→B}, for any simple conditional A → B, where M*_{A→B} = (M−_{A→¬B})+_{A→B}.

This definition of revision reflects the Levi identity (Levi 1980). Figure 1 illustrates this process in the principal case, showing how the model M*_{A→B} is constructed when M ⊨ A → ¬B. Natural revision behaves as expected:

Proposition 4 Let M be a revision model. (1) M*_{A→B} ⊨ A → B; and (2) If M ⊨ A → B then M*_{A→B} = M.

Theorem 5 Let M*_A denote the natural propositional revision of M by (objective belief) A (as defined in (Boutilier 1993)). Then M*_A = M*_{⊤→A}.

Thus, we can view propositional revision as a special case of conditional revision. We will henceforth take M*_A as an abbreviation for M*_{⊤→A}.

Figure 1: Contraction, Expansion and Revision of a Model

These results show that natural conditional revision can reasonably be called a revision operator. To show that this revision operator is indeed "natural," we must determine its precise effect on belief in previously accepted conditionals. In particular, we would like a precise characterization of the simple conditionals α → β satisfied by the revised model M*_{A→B}. The following result (Thm.
6) shows that the truth of such conditionals in M*_{A→B} is completely determined by the set of simple conditionals satisfied by M. Thus, the truth of an arbitrarily nested conditional under natural revision can be determined by the truth of simple conditionals in our original model. We note that revision models are complete in that they satisfy every simple conditional or its negation. We do not require that a conditional KB be complete in this sense. We describe how this semantic model can be applied to an incomplete KB in the next section.

We now show which conditionals are satisfied by a model M*_{A→B}. In (Boutilier and Goldszmidt 1993) we also describe similar characterizations of the models M−_{A→B} and M+_{A→B}. We begin by noting that if M ⊨ A ↛ ¬B, then M*_{A→B} = M+_{A→B}. In particular, if M ⊨ A → B, then M*_{A→B} = M and no conditional beliefs are changed. We assume then that M ⊨ A → ¬B, the principal case of revision. We also introduce the notion of plausibility: a sentence P is at least as plausible as Q (relative to M) iff the minimal P-worlds are at least as plausible (in the ordering ≤) as the minimal Q-worlds. This will be the case exactly when M ⊨ (P ∨ Q) ↛ ¬P. We write P <ρ Q if formula P is more plausible than Q, and P =ρ Q if P and Q are equally plausible.⁶

To determine whether α → β holds in M*_{A→B}, we simply need to know how the relative position of worlds in ||α|| is affected by the revision. The relative plausibility of A and α in M is crucial in determining this. If α is more plausible than A, then shifting A ∧ B-worlds down cannot affect the most plausible α-worlds. If α is less plausible, the most plausible α-worlds might change, but only if there are α-worlds among the most plausible A ∧ B-worlds.

⁶Plausibility is also induced by the κ-ranking of formulae (Goldszmidt and Pearl 1992b): P ≤ρ Q iff κ(P) ≤ κ(Q).
Finally, there are several different types of changes that can occur if α and A are equally plausible.

Theorem 6 Let M ⊨ A → ¬B and let ≤_p be the plausibility ordering determined by M. Let α, β ∈ L_CPL.

1. If α <_p A then M*_{A→B} ⊨ α → β iff M ⊨ α → β.

2. If α >_p A then
(a) If M ⊨ A ∧ B → ¬α then M*_{A→B} ⊨ α → β iff M ⊨ α → β.
(b) If M ⊭ A ∧ B → ¬α then M*_{A→B} ⊨ α → β iff M ⊨ A ∧ B ∧ α → β.

3. If α =_p A then
(a) If M ⊨ A ∧ B → ¬α and M ⊨ α → A then M*_{A→B} ⊨ α → β iff M ⊨ A ∧ B ∧ α → β.
(b) If M ⊭ A ∧ B → ¬α and M ⊨ α → A then M*_{A→B} ⊨ α → β iff M ⊨ A ∧ B ∧ α → β and M ⊨ α ∧ ¬A → β.
(c) If M ⊨ A ∧ B → ¬α and M ⊨ A ∧ ¬B → ¬α then M*_{A→B} ⊨ α → β iff M ⊨ α → β.
(d) If M ⊨ A ∧ B → ¬α, M ⊭ A ∧ ¬B → ¬α and M ⊨ α → A then M*_{A→B} ⊨ α → β iff M ⊨ α → β.
(e) If M ⊨ A ∧ B → ¬α, M ⊭ A ∧ ¬B → ¬α and M ⊭ α → A then M*_{A→B} ⊨ α → β iff M ⊨ α ∧ ¬A → β.

While this characterization appears complex, it is rather intuitive, for it captures the interactions caused by the relative plausibility of A and other propositions α. As an example, suppose we believe that a power surge will normally cause a breaker to trip (S → B) and that this will prevent equipment damage (S → ¬D), but that if the breaker doesn't trip there will be damage (S ∧ ¬B → D). Our characterization shows that, should we learn that the breaker is faulty (S → ¬B), we should also change our mind about potential damage, and thus accept S → D. However, information such as ⊤ → ¬S will continue to be held (the likelihood of a power surge does not change). Hence, our factual beliefs (e.g., ¬S) do not change, merely our conditional beliefs about the breaker: what will happen should S obtain.

652 Boutilier

Theorem 6 also shows that the conditionals that hold in the revised model M*_{A→B} can be completely characterized in terms of the conditionals in M. This allows us to use the mechanisms and algorithms of Goldszmidt and Pearl (1992b) for computing the new model (Boutilier and Goldszmidt 1993). This also demonstrates that an arbitrarily nested conditional sentence (under natural revision) is logically equivalent to a sentence without nesting (involving disjunctions of conditionals). Thus, purely propositional reasoning mechanisms (Pearl 1990) can be used to determine the truth of nested conditionals in a conditional KB. Indeed, in many circumstances, a complete semantic model can be represented compactly and reasoned about tractably (Goldszmidt and Pearl 1992b). We explore this in the full paper.

When we revise by A → B we are indicating a willingness to accept B should we come to accept A. Thus, we might expect that revising by A → B should somehow reflect propositional revision by B were we to restrict our attention to A-worlds. This is indeed the case. Let M\α denote the model obtained by eliminating all α-worlds from M.

Theorem 7 (M*_{A→B})\¬A = (M\¬A)*_B

This shows that accepting A → B is equivalent to accepting B "given" A. Thus, natural revision by conditionals is in fact a conditional form of the propositional natural revision of Boutilier (1993). The only reason the characterization theorem for conditional revision is more complex is the fact that we can "coalesce" partial clusters of worlds, something that can't be done in the propositional case. We also note that (M*_{A→B})\A = M\A; that is, the relative plausibility of ¬A-worlds is unaffected by this revision.

Revising a Conditional Knowledge Base

If a conditional KB contains a complete set of simple conditionals (i.e., defines a unique revision model) we can use the definitions above to compute the revised KB. Often we may use techniques to complete a KB as well (Pearl 1990). In practice, however, KB will usually be an incomplete set of premises or constraints. We propose the following method of logical revision. Since KB is not complete, it is satisfied by each of a set ||KB|| of revision models, each of these a "possible" ranking for the agent. When a new conditional A → B is learned, revision proceeds in the following way. If there are elements of ||KB|| that satisfy A → B, these become the new possible rankings for the agent (this is equivalent to asking if KB ∪ {A → B} is consistent; see Def. 4 and Thm. 8). In this case we have KB*_{A→B} ⊇ KB ∪ {A → B}. If this is not the case, each possibility in ||KB|| must be rejected. To do this, we revise each ranking in ||KB|| and consider the result of this revision to be the set of new possibilities. KB*_{A→B} is then

{C → D : M*_{A→B} ⊨ C → D for all M ∈ ||KB||}.

The breaker example above exemplifies this approach. Clearly, we do not want to resort to generating all models of KB. Fortunately, our representation theorem allows us to use any logical calculus for simple conditionals alone to determine the set of all such consequences. A simple conditional α → β will be in the logical revision of KB iff the appropriate set of simple conditionals (from Theorem 6) is derivable from KB (e.g., one may use the calculus of (Boutilier 1992b; Goldszmidt and Pearl 1991)).

The main problem with an approach based on logical revision is that it is extremely cautious. A direct consequence of this cautious behavior is that the syntactic structure of KB is lost: it plays no role in the revision process! For instance, the revisions of either of {A → B, A → C} or {A → B ∧ C} by A → ¬B are identical. Yet, in some cases, conditional revision of the first set should yield a KB equivalent to {A → ¬B, A → C}, simply because A → ¬B conflicts only with A → B. Yet logical revision forces into consideration models in which A → C is given up as well. This is not unreasonable in general (indeed, it is exactly analogous to the generality of the AGM theory: given K = Cn{A, B}, it is not known whether B ∈ K*_¬A or not; logically, the possibility of a connection between A and B exists, and should be denied or stated (or assumed) explicitly), but the syntactic structure may also be used in revision (e.g., Nebel (1991) has advocated syntax-dependent revision).

The strategy we propose isolates the portion of KB inconsistent with the new rule A → B, which will be denoted by KB_I, and then applies logical revision to KB_I alone. Letting KB_J = KB − KB_I, the revised set KB⊛_{A→B} is the union of KB_J and the logically revised KB_I (with A → B). We first introduce the notion of consistency:

Definition 4 A set KB is consistent iff there exists at least one model M such that, for each A → B ∈ KB, min(M, A) ⊆ ||B|| and min(M, A) ≠ ∅.

A conditional A → B is tolerated by the set {C_i → D_i}, 1 ≤ i ≤ n, iff the propositional formula A ∧ B ∧ ⋀_{i=1}^{n} (C_i ⊃ D_i) is satisfiable. The notion of toleration constitutes the basis for isolating the inconsistent part of KB. A set containing a rule tolerated by that set will be called a confirmable set. The following theorem presents necessary and sufficient conditions for consistency (Goldszmidt and Pearl 1991):

Theorem 8 KB is consistent iff every nonempty subset KB′ ⊆ KB is confirmable.

Let KB_I, the MCI of KB, denote the union of all minimally unconfirmable subsets of KB. Thus, KB_I contains only the conditionals in KB that are responsible for the inconsistencies in KB (if KB is consistent, then KB_I is the empty set). In a syntax-directed revision of KB we are primarily interested in uncovering the conditionals in the original KB that are still valid after the revision process. The S operator below serves this purpose (note that S is built on top of a logical revision process). Given a set KB and a simple conditional A → B, let S(KB, A → B) denote the set of conditionals C → D such that: (1) C → D ∈ KB; and (2) KB*_{A→B} ⊨ C → D, where KB*_{A→B} denotes the logical revision of KB by A → B. We define the syntactic revision of KB by A → B as follows:

Definition 5 Let KB be consistent, and let A → B be a simple conditional. Let KB_I denote the MCI of KB⁺ = KB ∪ {A → B}, and let KB_J = KB − KB_I.
The syntactic revision of KB by A → B, written KB⊛_{A→B}, will be

KB⊛_{A→B} = S(KB_I, A → B) ∪ KB_J ∪ {A → B}.

Note that in the case where A → B is consistent with respect to KB, KB⊛_{A→B} will simply be the union of the original KB and the new conditional A → B. Also, the syntactic revision of {A → B, A → C} by A → ¬B will be the set {A → ¬B, A → C}, since KB_I = {A → B}. In the breaker example above, the revision of {S → B, S ∧ ¬B → D, S → ¬D} by S → ¬B will yield {S → ¬B, S ∧ ¬B → D}, which entails the conditional S → D (as in the case of logical revision). Given that the revision of KB_I is based on Theorem 6 and notions of propositional satisfiability (i.e., toleration), the resulting set of conditionals can be computed effectively. The major problem in terms of complexity is the uncovering of the MCI set KB_I, which seems to require an exponential number of satisfiability tests.

Concluding Remarks

We have provided a semantics for revising a conditional KB with new conditional beliefs in a manner that extends both the AGM theory and the propositional natural revision model. Our results include a characterization theorem, providing computationally effective means of deciding whether a given conditional holds in the revised model. We have also provided a syntactic characterization of the revision of a KB. We remark that, as in the case of proposals for objective belief revision (including the AGM theory), we make no claims or assumptions about the complex process by which an agent decides to incorporate a new conditional belief (or default rule) into its corpus of knowledge. We merely provide the formal means to do so.

Conditional belief revision defines a semantics for arbitrary nested conditionals as proposed in (Goldszmidt and Pearl 1992b), extending the semantics for right-nested conditionals studied in (Boutilier 1993).
By describing the process by which an agent can assimilate new information in the form of conditionals, conditional belief revision is proposed as a basis for the learning of new default rules. We note that the same techniques can be used to model revision by conditionals in a way that respects the probabilistic intuitions of J-conditioning. Analogues of each of the main results for natural revision are shown in the full paper (Boutilier and Goldszmidt 1993). We also explore other mechanisms for revising a KB and the relationship of our models to probabilistic conditionalization and imaging. We discuss further constraints on the revision process to reflect a causal interpretation of the conditional sentences.

Acknowledgements: Thanks to Judea Pearl for helpful comments. This research was supported by NSERC Grant OGP0121843 and by the Rockwell International Science Center.

References

Adams, E. W. 1975. The Logic of Conditionals. D. Reidel, Dordrecht.
Alchourrón, C., Gärdenfors, P., and Makinson, D. 1985. On the logic of theory change: Partial meet contraction and revision functions. Journal of Symbolic Logic, 50:510-530.
Boutilier, C. 1992a. Conditional logics for default reasoning and belief revision. Technical Report KRR-TR-92-1, University of Toronto, Toronto. Ph.D. thesis.
Boutilier, C. 1992b. A logic for revision and subjunctive queries. In Proc. of AAAI-92, pages 609-615, San Jose.
Boutilier, C. 1992c. Normative, subjunctive and autoepistemic defaults: Adopting the Ramsey test. In Proc. of KR-92, pages 685-696, Cambridge.
Boutilier, C. 1993. Revision sequences and nested conditionals. In Proc. of IJCAI-93, Chambéry. (To appear.)
Boutilier, C. and Goldszmidt, M. 1993. Revising by conditionals. Technical report, University of British Columbia, Vancouver. (Forthcoming.)
Doyle, J. 1991. Rational belief revision: Preliminary report. In Proc. of KR-91, pages 163-174, Cambridge.
Gärdenfors, P. 1988. Knowledge in Flux: Modeling the Dynamics of Epistemic States. MIT Press, Cambridge.
Goldszmidt, M. 1992. Qualitative probabilities: A normative framework for commonsense reasoning. Technical Report R-190, University of California, Los Angeles. Ph.D. thesis.
Goldszmidt, M. and Pearl, J. 1991. On the consistency of defeasible databases. Artificial Intelligence, 52:121-149.
Goldszmidt, M. and Pearl, J. 1992a. Rank-based systems: A simple approach to belief revision, belief update, and reasoning about evidence and actions. In Proc. of KR-92, pages 661-672, Cambridge.
Goldszmidt, M. and Pearl, J. 1992b. Reasoning with qualitative probabilities can be tractable. In Proceedings of the Eighth Conference on Uncertainty in AI, pages 112-120, Stanford.
Grove, A. 1988. Two modellings for theory change. Journal of Philosophical Logic, 17:157-170.
Levi, I. 1980. The Enterprise of Knowledge. MIT Press, Cambridge.
Nebel, B. 1991. Belief revision and default reasoning: Syntax-based approaches. In Proc. of KR-91, pages 417-428, Cambridge.
Pearl, J. 1990. System Z: A natural ordering of defaults with tractable applications to default reasoning. In Vardi, M., editor, Proceedings of Theoretical Aspects of Reasoning about Knowledge, pages 121-135. Morgan Kaufmann, San Mateo.
Stalnaker, R. C. 1968. A theory of conditionals. In Harper, W., Stalnaker, R., and Pearce, G., editors, Ifs, pages 41-55. D. Reidel, Dordrecht, 1981.
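As a brute-force illustration of the toleration test, the consistency check (Theorem 8), and MCI extraction from the section on revising a conditional KB: below, conditionals are encoded as (antecedent, consequent) pairs of predicates over truth assignments, and propositional satisfiability is checked by enumeration. The encoding and names are illustrative assumptions, and the subset enumeration is exponential in |KB|, matching the complexity remark above.

```python
from itertools import combinations, product

def assignments(atoms):
    """All truth assignments over the given atoms."""
    return [dict(zip(atoms, bits)) for bits in product([False, True], repeat=len(atoms))]

def tolerated(rule, rules, atoms):
    """A -> B is tolerated by {Ci -> Di} iff A & B & conj(Ci materially implies Di) is satisfiable."""
    a, b = rule
    return any(a(v) and b(v) and all((not c(v)) or d(v) for c, d in rules)
               for v in assignments(atoms))

def confirmable(rules, atoms):
    """A set is confirmable iff it contains a rule tolerated by the set."""
    return any(tolerated(r, rules, atoms) for r in rules)

def consistent(kb, atoms):
    """Theorem 8: KB is consistent iff every nonempty subset is confirmable."""
    idx = list(range(len(kb)))
    return all(confirmable([kb[i] for i in s], atoms)
               for n in range(1, len(kb) + 1) for s in combinations(idx, n))

def mci(kb, atoms):
    """Union of the minimally unconfirmable subsets of kb (the set KB_I)."""
    idx = list(range(len(kb)))
    bad = [set(s) for n in range(1, len(kb) + 1) for s in combinations(idx, n)
           if not confirmable([kb[i] for i in s], atoms)]
    minimal = [s for s in bad if not any(t < s for t in bad)]
    picked = set().union(*minimal) if minimal else set()
    return [kb[i] for i in sorted(picked)]
```

On the paper's {A → B, A → C} example, adding A → ¬B makes the KB inconsistent, and the MCI isolates the conflicting pair while leaving A → C untouched.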
Joseph Y. Halpern
IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120-6099
halpern@almaden.ibm.com, 408-927-1787

Abstract

We extend two notions of "only knowing", that of Halpern and Moses [1984], and that of Levesque [1990], to many agents. The main lesson of this paper is that these approaches do have reasonable extensions to the multi-agent case. Our results also shed light on the single-agent case. For example, it was always viewed as significant that the HM notion of only knowing was based on S5, while Levesque's was based on K45. In fact, our results show that the HM notion is better understood in the context of K45. Indeed, in the single-agent case, the HM notion remains unchanged if we use K45 (or KD45) instead of S5. However, in the multi-agent case, there are significant differences between K45 and S5. Moreover, all the results proved by Halpern and Moses for the single-agent case extend naturally to the multi-agent case for K45, but not for S5.

1 Introduction

There has been over twelve years of intensive work on various types of nonmonotonic reasoning. Just as with the work on knowledge in philosophy in the 1950's and 1960's, the focus has been on the case of a single agent reasoning about his/her environment. However, in most applications, this environment includes other agents. Surprisingly little of this work has focused on the multi-agent case. To the extent that we can simply represent the other agents' beliefs as propositions (so that "Alice believes that Tweety flies" is a proposition just like "Tweety flies"), there is no need to treat the other agents in a special way. However, this is no longer the case if we want to reason about the other agents' reasoning. In fact, we need to reason about other agents' reasoning when doing multi-agent planning; moreover, much of this reasoning will be nonmonotonic (see [Morgenstern 1990] for examples).
*The work of the author is sponsored in part by the Air Force Office of Scientific Research (AFSC), under Contract F49620-91-C-0080. The United States Government is authorized to reproduce and distribute reprints for governmental purposes.

In this paper, we show how to extend to the multi-agent case two related approaches to nonmonotonic reasoning, both based on the notion of "only knowing": that of Halpern and Moses [1984] (hereafter called the HM notion) and that of Levesque [1990]. The main lesson of the paper is that, despite some subtleties, both approaches do have reasonable extensions to the multi-agent case. Our results also shed light on the single-agent case. For example, it was always viewed as significant that the HM notion of only knowing was based on S5, while Levesque's was based on K45.¹ In fact, our results show that the HM notion is better understood in the context of K45. Indeed, in the single-agent case, the HM notion remains unchanged if we use K45 (or KD45) instead of S5. However, in the multi-agent case, there are significant differences between K45 and S5. Moreover, as we show here, all the results proved by Halpern and Moses for the single-agent case extend naturally to the multi-agent case for K45, but not for S5.

2 The HM notion of "all I know"

The intuition behind the HM notion is straightforward: In each world of a (Kripke) structure, an agent considers a number of other worlds possible. In the case of a single agent whose knowledge satisfies S5 (or K45 or KD45), we can identify a world with a truth assignment, and a structure with a set of truth assignments. Truth in these logics is with respect to situations (W, w), consisting of a structure W, representing the set of truth assignments (worlds) that the agent considers possible, and a truth assignment w, intuitively representing the "real world".² The more worlds an agent considers possible, the less he knows.
Thus, (W, w) is the situation where α is all that is known if (1) (W, w) ⊨ Lα (so that the agent knows α) and (2) if (W′, w′) ⊨ Lα, then W′ ⊆ W. If there is no situation (W, w) satisfying (1) and (2), then α is said to be dishonest; intuitively, it cannot then be the case that "all the agent knows" is α. A typical dishonest formula is Lp ∨ Lq. To see that this formula is dishonest, let W_p consist of all truth assignments satisfying p, let W_q consist of all truth assignments satisfying q, and let w satisfy p ∧ q. Then (W_p, w) ⊨ Lp ∨ Lq, and (W_q, w) ⊨ Lp ∨ Lq. Thus, if Lp ∨ Lq were honest, there would have to be a situation (W, w′) such that (W, w′) ⊨ Lp ∨ Lq and W ⊇ W_p ∪ W_q. It is easy to see that no such situation exists. Notice that in the case of one agent, the notions of honesty and "all I know" coincide for K45, KD45, and S5.

We want to extend this intuition to the multi-agent case and, in order to put these ideas into better perspective, to other modal logics. We consider six logics: three that do not have negative introspection, K_n, T_n, and S4_n, and three that do, K45_n, KD45_n, and S5_n.³ Below, when we speak of a modal logic S, we are referring to one of these six logics; we refer to K45_n, KD45_n, and S5_n as introspective logics, and K_n, T_n, and S4_n as non-introspective logics (despite the fact that positive introspection holds in S4_n). As we shall see, "all I know" behaves quite differently in the two cases. There are philosophical problems involved in dealing with a notion of "all I know" for the non-introspective logics.

¹Due to lack of space, we are forced to assume that the reader is familiar with standard notions of modal logic. Details can be found in [Hughes and Cresswell 1968; Halpern and Moses 1992].
²For KD45, we require that W be nonempty; for S5, we require in addition that w ∈ W.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.
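The single-agent honesty test illustrated with Lp ∨ Lq above can be checked mechanically over a finite set of atoms: enumerate the structures W in which the agent can know α, and ask whether they have a maximum under set inclusion. A minimal sketch under the K45-style reading, where (W, w) ⊨ Lα does not constrain w; for KD45 one would additionally require W nonempty, and for S5 also w ∈ W (footnote 2). Formulas are encoded directly as Python predicates on situations; all names are illustrative.

```python
from itertools import combinations, product

ATOMS = ('p', 'q')

def assignments():
    """All truth assignments over ATOMS."""
    return [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=len(ATOMS))]

def knows(phi):
    """L(phi) as a situation predicate: phi holds at every world in W."""
    return lambda W, w: all(phi(W, v) for v in W)

def honest(alpha):
    """alpha is honest iff the W's in which the agent can know alpha
    have a maximum element under set inclusion (condition (2))."""
    ws = assignments()
    candidates = [list(W) for n in range(len(ws) + 1)
                  for W in combinations(ws, n)
                  if knows(alpha)(list(W), None)]
    def contains(big, small):
        return all(v in big for v in small)
    return any(all(contains(wmax, W) for W in candidates) for wmax in candidates)
```

On this encoding, the primitive proposition p comes out honest (the maximum W is the set of all p-worlds), while Lp ∨ Lq comes out dishonest, exactly as in the argument above.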
What does it mean for an agent to say "all I know is α" if he cannot do negative introspection, and so does not know what he doesn't know? Fortunately, there is another interpretation of this approach that makes sense for arbitrary modal logics. Suppose that a says to b, "All i knows is α" (where i is different from a and b). If b knows in addition that i's reasoning satisfies the axioms of modal logic S, then it seems reasonable for b to say that i's knowledge is described by the "minimal" model satisfying the axioms of S consistent with L_i α, and for b to view a as dishonest if there is no such minimal model.

Of course, the problem lies in defining what it means for a model to be "minimal". Once we consider multi-agent logics, or even nonintrospective single-agent logics, we can no longer identify a possible world with a truth assignment. It is not just the truth assignment at a world that matters; we also need to consider what other worlds are accessible from that world. This makes it more difficult to define a reasonable notion of minimality. To deal with this problem, we define a canonical collection of objects that an agent can consider possible. These will act like the possible worlds in the single-agent case. The kind of objects we consider depends on whether we consider the introspective or the non-introspective logics, for reasons that will become clearer below. We start with the non-introspective case.

³The subscript n in all these logics is meant to emphasize the fact that we are considering the n-agent version of the logic. We omit it when considering the single-agent case. Details and axiomatizations can be found in [Halpern and Moses 1992].

Fix a set Φ of primitive propositions, and agents 1, ..., n. We define a (rooted) k-tree (over Φ) by induction on k: A 0-tree consists of a single node, labeled by a truth assignment to the primitive propositions in Φ.
A (k+1)-tree consists of a root node labeled by a truth assignment, and, for each agent i, a (possibly empty) set of directed edges labeled by i leading to the roots of distinct k-trees.⁴ We say a node w′ is an i-successor of a node w in a tree if there is an edge labeled i leading from w to w′. The depth of a node in a tree is the distance of the node from the root. We say that the (k+1)-tree T_{k+1} is an extension of the k-tree T_k if T_{k+1} is the result of adding some successors to the depth-k leaves of T_k. Finally, an ω-tree T_ω is a sequence (T_0, T_1, ...), where T_k is a k-tree and T_{k+1} is an extension of T_k, for k = 0, 1, 2, .... (We remark that ω-trees are closely related to the knowledge structures of [Fagin, Halpern, and Vardi 1991; Fagin and Vardi 1986], although we do not pursue this connection here.)

We now show that with each situation we can associate a unique ω-tree. We start by going in the other direction. We can associate with each k-tree T (k ≠ ω) a Kripke structure M(T) defined as follows: the nodes of T are the possible worlds in M(T), the K_i accessibility relation of M(T) consists of all pairs (w, w′) such that w′ is an i-successor of w in T, and the truth of a primitive proposition at a world w in M(T) is determined by the truth assignment labeling w.

We define the depth of a formula by induction on structure. Intuitively, the depth measures the depth of nesting of the L_i operators. Thus, we have depth(p) = 0 for a primitive proposition p; depth(¬φ) = depth(φ); depth(φ ∧ ψ) = max(depth(φ), depth(ψ)); depth(L_i φ) = 1 + depth(φ). If M and M′ are (arbitrary) structures, w is a world in M, and w′ a world in M′, we say that (M, w) and (M′, w′) are equivalent up to depth k, and write (M, w) ≡_k (M′, w′), if, for all formulas φ with depth(φ) ≤ k, we have (M, w) ⊨ φ iff (M′, w′) ⊨ φ. For convenience, if w_0 is the root of T, we take M(T) ⊨ φ to be an abbreviation for (M(T), w_0) ⊨ φ, and write (M, w) ≡_k M(T) rather than (M, w) ≡_k (M(T), w_0).

Proposition 2.1: Fix a situation (M, w). For all k, there is a unique k-tree T_{M,w,k} such that (M, w) ≡_k M(T_{M,w,k}). Moreover, T_{M,w,k+1} is an extension of T_{M,w,k}.

Let T_{M,w} be the ω-tree (T_{M,w,0}, T_{M,w,1}, T_{M,w,2}, ...). By Proposition 2.1, T_{M,w} can be viewed as providing a canonical way of representing the situation (M, w) in terms of trees. We use (ω-)trees as a tool for defining what agent i considers possible in (M, w). Thus, we define i's possibilities at (M, w), denoted Poss_i(M, w), to be {T_{M,w′} : (w, w′) ∈ K_i}.

⁴Since we are allowing a node to have no successors, any k-tree is also a (k+1)-tree.

656 Halpern
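The unwinding of a situation (M, w) into a k-tree can be sketched directly, in the spirit of Proposition 2.1: a structure is given here by a labeling of worlds and per-agent accessibility maps, and a k-tree is a pair (label, successors). The encoding is an illustrative assumption, not the paper's notation; note that successor subtrees are kept as a set, so edges lead to roots of distinct k-trees as the definition requires.

```python
def unwind(M, w, k):
    """Return the k-tree rooted at world w: (label, ((agent, subtrees), ...)).
    M is a dict with 'label' (world -> truth assignment as a tuple of
    (atom, bool) pairs) and 'K' (agent -> world -> set of worlds)."""
    label = M['label'][w]
    if k == 0:
        return (label, ())
    kids = []
    for i in sorted(M['K']):
        # a set collapses i-successors with identical (k-1)-trees
        subs = {unwind(M, v, k - 1) for v in M['K'][i].get(w, ())}
        kids.append((i, tuple(sorted(subs))))
    return (label, tuple(kids))
```

For example, in a two-world structure where both worlds are accessible from each other for agent 1, both roots carry the same successor subtrees at every depth, reflecting that the two situations agree on all formulas of the form L_1 φ.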
For convenience, if wo is the root of T, we take M(T) /= ~3 to be an abbreviation for (M(T), wo) b ‘p, and write (M, w) G-F~ M(T) rather than (M, w) ~-6 (M(T), WO). Proposition 2.1: Fix a situation (M, w). For all 6, there is a unique k-tree TM,W,k such that such that (M, w) zk M(TM,w,k). Moreover, TM,W,k+l is an ex- tension of TM,w,k. Let TM,~ be the w-tree (TM,~,o, TM,~,I, TM,,,,+. . .). By Proposition 2.1, TM,~ can be viewed as providing a canonical way of representing the situation (M, w) in terms of trees. We use (w-)trees as a tool for defining what agent i considers possible in (M, w). Thus, we define i’s possibilities at (M, w), denoted Possi(M, w), to be (TM,~I : (w, w’) E K;). 4Since we are allowing a node to have no successors, any k-tree is also a (k + l)-tree. 656 Halpern Intuitively, for a to be i-honest, there should be a sit- uation (M, w) for which i has the maximum number of possibilities. Formally, if S is a non-introspective logic, we say that cy is S-i-honest if there is an S-situation (M, w), called an S-i-ma&mum situation for cy, such that (M, w) /= L; cy, and for all S-situations (M’, w’), if (%w’) I= Licp, th en Possi(M’, w’) C Possi(M, w). If Q! is S-i-honest, we say that agent i knows /3 if all he knows is a, and write o!kip, if (M, w) /= Lip for some S-i-maximum situation for cr.5 How reasonable are our notions of honesty and ki? The following results give us some justification for these definitions. The first gives us a natural characterization of honesty. Theorem 2.2: If S is a non-introspective logic, then the formula Q! is S-i-honest iff Lia! is S-consistent, and for all formulas ‘p1, . . . , vk, if I=s Lia * (Licpl V . ..V Licpk), then bs Lice 3 Livj 1 for some j E (1,. . . , k). Thus, a typical dishonest formula in the case of T, or S4, is Lip V .&Q, where p and 4 are primitive propo- sitions. If CII is Lip V Liq, then &a! j (Lip V Liq) is valid in T, and S4,, although neither Lia! 3 Lip nor Lia! + Liq is valid. 
However, the validity of L_i α ⇒ (L_i p ∨ L_i q) depends on the fact that L_i α ⇒ α. This is not an axiom of K_n. In fact, it can be shown that L_i p ∨ L_i q is K_n-i-honest. Thus, what is almost the archetypical "dishonest" formula is honest in the context of K_n. As the following result shows, this is not an accident.

Theorem 2.3: All formulas are K_n-i-honest.

A set S of formulas is an S-i-stable set if there is some S-situation (M, w) such that S = {φ : (M, w) ⊨ L_i φ}. We say the situation (M, w) corresponds to the stable set S. This definition is a generalization of the one given by Moore [1985] (which in turn is based on Stalnaker's definition [1980]); Moore's notion of stable set corresponds to a K45-stable set in the single-agent case. (See [Halpern 1993] for some discussion as to why this notion of stable set is appropriate.) Since a stable set describes what can be known in a given situation, we would expect a formula to be honest iff it is in a minimum stable set. This is indeed true.

Theorem 2.4: If S is a non-introspective logic, then α is S-i-honest iff there is an S-i-stable set S^α containing α which is a subset of every S-i-stable set containing α. Moreover, if α is S-i-honest, then α k_i β iff β ∈ S^α.

This characterization of honesty is closely related to one given in [Halpern and Moses 1984]; we discuss this in more detail below.

⁵There may be more than one S-i-maximum situation for α; two S-i-maximum situations for α may differ in what j ≠ i considers possible. However, if (M, w) and (M′, w′) are two S-i-maximum situations for α, then (M, w) ⊨ L_i β iff (M′, w′) ⊨ L_i β.

Our next result gives another characterization of what agent i knows if "all agent i knows is α", for an honest formula α. Basically, it shows that all agent i knows are the logical consequences of his knowledge of α. Thus, "all agent i knows" is a monotonic notion for the non-introspective logics.

Theorem 2.5: If S is a non-introspective logic and α is S-i-honest, then α k_i β iff ⊨_S L_i α ⇒ L_i β.
This completes our discussion of the non-introspective logics. We must take a slightly different approach in dealing with the introspective logics. To see the difficulties if we attempt to apply our earlier approach without change to the introspective case, consider the single-agent case. Suppose Φ consists of two primitive propositions, say p and q, and suppose that all the agent knows is p. Surely p should be honest. Indeed, according to the framework of Halpern and Moses [1984], there is a maximum situation where p is true, where the structure consists of two truth assignments: one where both p and q are true, and the other where p is true and q is false. Call this structure M. There is, of course, another structure where the agent knows p. This is the structure where the only truth assignment makes both p and q true. Call this structure M′. Let w be the world where both p and q are true. We can easily construct T_{M,w} and T_{M′,w}; the trouble is that Poss_1(M, w) and Poss_1(M′, w) are incomparable. What makes them incomparable is introspective knowledge: In (M, w), the agent does not know q; so, because of introspection, he knows that he does not know q. On the other hand, in (M′, w), the agent does not know this. These facts are reflected in the trees. We need to factor out the introspection somehow. In the single-agent case considered, this was done by considering only truth assignments, not trees. We need an analogue for the multi-agent case.

We define an i-objective k-tree to be a k-tree whose root has no i-successors. We define an i-objective ω-tree to be an ω-tree all of whose components are i-objective. Given a k-tree T, let T̄ be the result of removing all the i-successors of the root of T (and all the nodes below them). Given an ω-tree T = (T_0, T_1, ...), let T̄ = (T̄_0, T̄_1, ...). The way we factor out introspection is by considering i-objective trees. Intuitively, this is because the i-objective tree corresponding to a situation (M, w) eliminates all the worlds that i considers possible in that situation. Notice that in the case of one agent, the i-objective trees are precisely the possible worlds.

We define IntPoss_i(M, w) = {T̄ : T ∈ Poss_i(M, w)}. (IntPoss stands for introspective possibilities.) The following result assures us that we have not lost anything in the introspective logics by considering IntPoss_i instead of Poss_i.

Lemma 2.6: If M is an S-structure and S is an introspective logic, then Poss_i(M, w) is uniquely determined
Intuitively, this is because the i-objective tree corresonding to a situation (M, w) eliminates all the worlds that i considers pos- sible in that situation. Notice that in the case of one agent, the i-objective trees are precisely the possible worlds. We define IntPossi (M, w) = (Ti : T E Possi (M, w)}. (IntPoss stands for introspective possibilities.) The fol- lowing result assures us that we have not lost anything in the introspective logics by considering IntPossi in- stead of Possi. Lemma 2.6: If M is an S-structure, and S is an intro- spective logic, then Possi(M, w) is uniquely determined Representation and Reasoning 657 by IntPossi(M, w). In the case of the introspective logics, we now repeat all our earlier definitions using IntPoss instead of Puss. Thus, for example, we say that that cr is S-i-honest if there is an S-situation (M, w) such that (M, w) k Licy, and for all S-situations (M’, w’), if (M’, w’) k Licp, then IntPoss; (M’, w’) C IntPossi(M, w). We make the analogous change in the definition of ki. Since i- objective trees are truth assignments in the single-agent case, it is easy to see that these definitions generalize those for the single-agent case given in [Halpern and Moses 19841. We now want to characterize honesty and “all agent i knows” for the introspective logics. There are some significant differences from the non-introspective case. For example, as expected, the primitive proposition p is S-l-honest even if S is introspective. However, due to negative introspection, 1Llq 3 LllLlq is S-valid, so we have I=s Lip + (Llq V LllLlq). Moreover, we have neither ks Lip 3 Llq nor bs Lip 3 LIlLlq. Thus, the analogue to Theorem 2.2 does not hold. We say a formula is i-objective if it is a Boolean com- bination of primitive propositions and formulas of the form Lj cp, j # i, where y3 is arbitrary. Thus, q A L,L,p is l-objective, but Lip and q A Lip are not. 
Notice that if there is only one agent, say agent 1, then the 1-objective formulas are just the propositional formulas. As the following result shows, the analogue of Theorem 2.2 holds for KD45_n and K45_n, provided we stick to i-objective formulas.

Theorem 2.7: For S ∈ {KD45_n, K45_n}, the formula α is S-i-honest iff for all i-objective formulas φ_1, ..., φ_k, if ⊨_S L_i α ⇒ (L_i φ_1 ∨ ... ∨ L_i φ_k) then ⊨_S L_i α ⇒ L_i φ_j, for some j ∈ {1, ..., k}.

This result does not hold for S5_n; for example, ⊨_{S5_n} L_1 p ⇒ (L_1 q ∨ L_1 L_2 ¬L_2 L_1 q) (this follows from the fact that ⊨_{S5_n} ¬L_1 q ⇒ L_1 L_2 ¬L_2 L_1 q). However, it is easy to see that ⊭_{S5_n} L_1 p ⇒ L_1 q and ⊭_{S5_n} L_1 p ⇒ L_1 L_2 ¬L_2 L_1 q. Since p is S5_n-1-honest, Theorem 2.7 fails for S5_n.

Theorem 2.7 is a direct extension of a result in [Halpern and Moses 1984] for the single-agent case. Two other characterizations of honesty and "all I know" are given by Halpern and Moses that can be viewed as analogues of Theorems 2.4 and 2.5. As we now show, they also extend to K45_n and KD45_n, but not to S5_n.

One of these characterizations is in terms of stable sets. The direct analogue of Theorem 2.4 does not hold for the introspective logics. In fact, as was already shown in [Halpern and Moses 1984] for the single-agent case, any two consistent stable sets are incomparable with respect to set inclusion. Again, the problem is due to introspection. For suppose we have two consistent S-i-stable sets S and S′ such that S ⊂ S′, and φ ∈ S′ − S. By definition, there must be situations (M, w) and (M′, w′), corresponding to S and S′ respectively, for which we have (M, w) ⊭ L_i φ and (M′, w′) ⊨ L_i φ. By introspection, we have (M, w) ⊨ L_i ¬L_i φ and (M′, w′) ⊨ L_i L_i φ. This means that ¬L_i φ ∈ S and L_i φ ∈ S′. Since S ⊆ S′, we must also have ¬L_i φ ∈ S′, which contradicts the assumption that S′ is consistent.
Define the i-kernel of an S-i-stable set S, denoted ker_i(S), to consist of all the i-objective formulas in S.

Theorem 2.8: For S ∈ {KD45_n, K45_n}, a formula α is S-i-honest iff there is an S-i-stable set S_α containing α such that for all S-i-stable sets S containing α, we have ker_i(S_α) ⊆ ker_i(S). Moreover, if α is S-i-honest, then α ⊩_i β iff β ∈ S_α.

As we show in the full paper, Theorem 2.8 does not hold for S5_n. This is not an artifact of our definition of honesty for S5_n, since in fact we can show that for no formula α is there an S5_n-i-stable set containing α whose i-kernel is a minimum.

Finally, let us consider the analogue to Theorem 2.5. In contrast to the non-introspective case, inference from "all agent i knows" is nonmonotonic for the introspective logics. For example, we have p ⊩_1 ¬L₁q, even though ⊭_S L₁p ⇒ L₁¬L₁q. This seems reasonable: if all agent 1 knows is p, then agent 1 does not know q and (by introspection) knows that he does not know this.

As shown in [Halpern and Moses 1984], there is an elegant algorithmic characterization of "all agent i knows" in the single-agent case. We extend it to the multi-agent case here. We recursively define a set D_i^S(α) that intuitively consists of all the formulas agent i knows, given that agent i knows only α (and reasons using modal logic S): φ ∈ D_i^S(α) iff ⊨_S (L_i α ∧ φ^{α,i}) ⇒ L_i φ, where φ^{α,i} is the conjunction of L_i ψ for all subformulas L_i ψ of φ for which ψ ∈ D_i^S(α), and ¬L_i ψ for all subformulas L_i ψ for which ψ ∉ D_i^S(α) (where φ is considered a subformula of itself). Thus, the algorithm says that the agent knows φ if it follows from knowing α, together with the formulas that were decided by recursive applications of the algorithm. Then we have:

Theorem 2.9: For S ∈ {KD45_n, K45_n}, the formula α is S-i-honest iff D_i^S(α) is (propositionally) consistent. If α is S-i-honest, then α ⊩_i β iff β ∈ D_i^S(α).

While the analogue to Theorem 2.9 does not hold for S5_n, the algorithm is correct for honest formulas.
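To make the recursive definition of D_i^S(α) concrete, here is a small single-agent sketch for K45 over a fixed two-atom vocabulary. This is our own illustration, not the paper's multi-agent procedure: the names (`in_D`, `valid`, `holds`) are ours, and validity is decided by brute force, using the fact that a single-agent K45 situation can be represented as a pair (W, w) with W a set of truth assignments (the belief cluster) and w the actual assignment.

```python
from itertools import combinations, product

ATOMS = ("p", "q")

def assignments():
    """All truth assignments over ATOMS, as dicts."""
    return [dict(zip(ATOMS, vals))
            for vals in product([False, True], repeat=len(ATOMS))]

def holds(W, w, f):
    """Truth at a single-agent K45 situation: W = belief cluster, w = actual world.
    Formulas are tuples: ('atom', p) | ('not', f) | ('and', f, g) | ('L', f)."""
    op = f[0]
    if op == "atom":
        return w[f[1]]
    if op == "not":
        return not holds(W, w, f[1])
    if op == "and":
        return holds(W, w, f[1]) and holds(W, w, f[2])
    if op == "L":
        return all(holds(W, v, f[1]) for v in W)
    raise ValueError(op)

def valid(f):
    """K45 validity over ATOMS: f must hold at every situation (W, w)."""
    worlds = assignments()
    for r in range(len(worlds) + 1):
        for idxs in combinations(range(len(worlds)), r):
            W = [worlds[i] for i in idxs]
            if not all(holds(W, w, f) for w in worlds):
                return False
    return True

def modal_subformulas(f):
    """All subformulas of the form ('L', psi), with f counted as a subformula of itself."""
    subs = [f] if f[0] == "L" else []
    for child in f[1:]:
        if isinstance(child, tuple):
            subs.extend(modal_subformulas(child))
    return subs

def in_D(alpha, phi):
    """phi is in D(alpha) iff |= (L alpha ^ phi^alpha) => L phi, where phi^alpha
    settles each modal subformula L psi of phi by a recursive call on psi."""
    hyp = ("L", alpha)
    for sub in modal_subformulas(phi):
        part = sub if in_D(alpha, sub[1]) else ("not", sub)
        hyp = ("and", hyp, part)
    return valid(("not", ("and", hyp, ("not", ("L", phi)))))
```

For instance, with α = p the sketch reproduces the nonmonotonic behavior discussed above: p is decided positively, q negatively, and ¬Lq is then derivable via negative introspection.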
Theorem 2.10: If α is S5_n-i-honest, then α ⊩_i^{S5_n} β iff β ∈ D_i^{S5_n}(α).

We now characterize the complexity of computing honesty and "all i knows".

Theorem 2.11: For S ∈ {T_n, S4_n : n ≥ 1} ∪ {KD45_n, K45_n, S5_n : n ≥ 2}, the problem of computing whether α is S-i-honest is PSPACE-complete.

658 Halpern

Of course, the problem of computing whether α is K_n-i-honest is trivial: the answer is always "Yes".

Theorem 2.12: For S ∈ {K_n, T_n, S4_n : n ≥ 1} ∪ {KD45_n, K45_n, S5_n : n ≥ 2}, if α is S-i-honest, then the problem of deciding if α ⊩_i β is PSPACE-complete.

We close this section by briefly comparing our approach to others in the literature. Fagin, Halpern, and Vardi [1991] define a notion of i-no-information extension that can also be viewed as characterizing a notion of "all agent i knows" in the context of S5_n. However, it is defined only for a limited set of formulas. It can be shown that these formulas are always S5_n-i-honest in our sense, and, if α is one of these formulas, we have α ⊩_i^{S5_n} β iff β is true in the i-no-information extension of α. The fact that these two independently motivated definitions coincide (at least, in the cases where the i-no-information extension is defined) provides further evidence for the reasonableness of our definitions.

Vardi [1985] defines a notion of "all agent i knows" for S4_n, using the knowledge-structures approach of [Fagin, Halpern, and Vardi 1991], and proves Theorem 2.5 for S4_n in the context of his definition. It is not hard to show that our definition of honesty coincides with his for S4_n. However, the knowledge-structures approach does not seem to extend easily to the introspective logics. Moreover, using our approach leads to much better complexity results. For example, all that Vardi was able to show was that honesty was (nonelementary-time) decidable.

Parikh [1991] defines a notion of "all that is known" for S5_n much in the spirit of the definitions given here.
Among other things, he also starts with k-trees (he calls them normal models), although he does not use i-objective trees. However, rather than focusing on all that some fixed agent i knows, as we have done, Parikh treats all agents on an equal footing. This leads to some technical differences between the approaches. He was also able to obtain only nonelementary-time algorithms for deciding whether a formula is honest in his sense.

3 Levesque's notion of "only knowing"

Despite the similarity in philosophy and terminology, Levesque's notion of "only knowing" differs in some significant ways from the HM notion (see [Halpern 1993] for a discussion of this issue). Nevertheless, some of the ideas of the previous section can be applied to extending it to many agents.

Levesque considers a K45 notion of belief, and introduces a modal operator O, where Oα is read "only believes α". The O operator is best understood in terms of another operator introduced by Levesque, denoted N. While Lα says "α is true at all the worlds that the agent considers possible", Nα is viewed as saying "α is true at all the worlds that the agent does not consider possible". Then Oα is defined as an abbreviation for Lα ∧ N¬α. Thus, Oα holds if α is true at all the worlds that the agent considers possible, and only these worlds. We can read Lα as saying "the agent knows at least α", while N¬α says "the agent knows at most α" (for if he knew more, then he would not consider possible all the worlds where α is true).

In the case of a single agent, since worlds are associated with truth assignments, it is easy to make precise what it means that the agent does not consider a world possible: it is impossible if it is not one of the truth assignments the agent considers possible. Thus, Levesque defines: (W, w) ⊨ Nα if (W, w') ⊨ α for all w' ∉ W. Two important features of this definition are worth mentioning here.
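Concretely, in the single-agent propositional case this definition can be prototyped by brute force: worlds are truth assignments over a fixed vocabulary, L quantifies over W, and N quantifies over the complement of W. The function names below are ours, and this is only a sketch of the clause just given, not an implementation of the full logic.

```python
from itertools import product

ATOMS = ("p", "q")

def all_worlds():
    """Every truth assignment over ATOMS, encoded as a frozenset of true atoms."""
    return {frozenset(a for a, v in zip(ATOMS, vals) if v)
            for vals in product([False, True], repeat=len(ATOMS))}

def sat(w, formula):
    """Evaluate a propositional formula (a Python predicate on an environment)."""
    return formula({a: a in w for a in ATOMS})

def L(W, formula):
    """L alpha: alpha holds at every world the agent considers possible."""
    return all(sat(w, formula) for w in W)

def N(W, formula):
    """N alpha: alpha holds at every world the agent does NOT consider possible."""
    return all(sat(w, formula) for w in all_worlds() - W)

def only_knows(W, formula):
    """O alpha = L alpha ^ N(not alpha): alpha holds at exactly the worlds in W."""
    return L(W, formula) and N(W, lambda env: not formula(env))
```

For example, if W is exactly the set of assignments satisfying p, then `only_knows(W, p)` holds; if W is the smaller set of assignments satisfying p ∧ q, the agent still believes p, but no longer only-knows p, because some p-worlds are deemed impossible.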
First, the set of all worlds is absolute, and does not depend on the situation: it is the set of all truth assignments. Thus, the set of impossible worlds, given that W is the set of worlds that the agent considers possible, is just the complement of W (relative to the set of all truth assignments). Second, when evaluating the truth of α at an "impossible world" w', we do not change W, the set of worlds that the agent considers possible. (We remark that it is this second point that results in the main differences between this notion of "all I know" and the HM notion; see [Halpern 1993].)

Of course, the problem in extending Levesque's notion to many agents lies in coming up with an analogue to "the worlds that the agent does not consider possible". This is where our earlier ideas come into play. Before we go into details on the multi-agent case, we mention one important property of this notion of "only knowing". Moore [1985] defines a stable expansion of α to be a (K45-)stable set S such that S is the closure under propositional reasoning of {α} ∪ {Lβ : β ∈ S} ∪ {¬Lβ : β ∉ S}. Notice that for any stable set S, there is a unique set W_S of truth assignments such that φ ∈ S iff (W_S, w) ⊨ Lφ for all w ∈ W_S. Levesque shows that S is a stable expansion of α iff (W_S, w) ⊨ Oα for all w ∈ W_S.

We now turn to extending Levesque's definitions to the multi-agent case. We first extend the language of knowledge by adding modal operators N_i and O_i for each agent i = 1, ..., n. Following Lakemeyer, we call the full language ONL_n. We say that a formula in ONL_n is basic if it does not involve the modal operators O_i or N_i. Finally, we take the language ONL⁻_n to be the sublanguage of ONL_n where no O_j or N_j occurs in the scope of an O_i, N_i, or L_i, for i ≠ j. In analogy to Levesque, we define O_i α as the conjunction L_i α ∧ N_i ¬α. The problem is to define N_i α. As in the single-agent case, we want N_i α to mean that α
is true at all the worlds that i does not consider possible. So what are the worlds that i does not consider possible? Perhaps the most straightforward way of making sense of this, used by Lakemeyer [1993], is to define N_i in terms of the complement of the K_i relation. We briefly outline this approach here. Given a structure M = (W, K₁, ..., K_n, π), let K_i(w) = {w' : (w, w') ∈ K_i}. K_i(w) is the set of worlds that agent i considers possible at w. We write w ≈_i w' if K_i(w) = K_i(w'). Thus, if w ≈_i w', then agent i's possibilities are the same at w and w'. Finally, Lakemeyer defines: (M, w) ⊨_Lak N_i α if (M, w') ⊨ α for all w' such that (w, w') ∉ K_i and w ≈_i w'.

By restricting attention to worlds w' such that w ≈_i w', Lakemeyer is preserving the second property of Levesque's definition, namely, that when evaluating the truth of a formula at an impossible world, we keep the set of agent i's possibilities unchanged. However, this definition does not capture the first property of Levesque's definition, that the set of impossible worlds is absolute. Here it is relative to the structure. To get around this problem, Lakemeyer focuses on a certain canonical model, which intuitively has "all" the possibilities.⁶ It is only in this model that the N_i (and thus the O_i) operators seem to have the desired behavior. (We discuss to what extent they really do have the desired behavior in this canonical model below.)

We want to define N_i and O_i in a reasonable way in all models. We proceed as follows:

(M, w) ⊨ N_i α if (M', w') ⊨ α for all (M', w') such that T^i_{(M',w')} ∉ IntPoss_i(M, w) and IntPoss_i(M, w) = IntPoss_i(M', w').

The analogues to Lakemeyer's definitions should be obvious: we replace (w, w') ∉ K_i by T^i_{(M',w')} ∉ IntPoss_i(M, w), and w ≈_i w' by IntPoss_i(M, w) = IntPoss_i(M', w').

What evidence do we have that this definition is reasonable?
One piece of evidence is that we can extend to the multi-agent case Levesque's result regarding the relationship between only knowing and stable expansions. To do this, we first need to define the notion of stable expansion in the context of many agents. We say that S is a K45_n-i-stable expansion of α if S is a K45_n-i-stable set and S is the closure under K45_n of {α} ∪ {L_i β : β ∈ S} ∪ {¬L_i β : β ∉ S}.⁷

Next, we need to associate a situation with each K45_n-i-stable set, as we were able to do in the single-agent case. Given a set S of basic formulas, we say that the K45_n-situation (M, w) i-models S if, for all basic formulas φ, we have (M, w) ⊨ L_i φ iff φ ∈ S. In analogy to the single-agent case, the situation that we would like to associate with a stable set S is one that i-models S. There is, however, a complication. In the single-agent case, a stable set determines the set of possible truth assignments. That is, given a stable set S, there is a unique set W_S such that (for any w) we have (W_S, w) ⊨ Lφ iff φ ∈ S. The analogue does not hold in the multi-agent case. That is, given a stable set S, there is not a unique set W of i-objective ω-trees such that if (M, w) i-models S, then IntPoss_i(M, w) = W. As we show in the full paper, two structures can agree on all basic formulas, and still differ with regard to formulas of the form N_i α or O_i α under ⊨.⁸ A similar phenomenon was encountered by Levesque [1990] when considering only knowing in the first-order case. We solve our problem essentially the same way he solved his. We say that (M, w) is a maximum i-model of the stable set S if (M, w) is an i-model of S and for every i-model (M', w') of S, we have IntPoss_i(M', w') ⊆ IntPoss_i(M, w).

⁶This canonical model is built using standard modal logic techniques (cf. [Halpern and Moses 1992; Hughes and Cresswell 1968]); the worlds in this canonical model consist of all maximally K45_n-consistent sets of formulas.
⁷In Moore's definition of stable expansion, we could have used closure under K45 instead of closure under deductive reasoning. The two definitions are equivalent in the single-agent case, but modal reasoning is necessary in the multi-agent case so that agent i can capture j's introspective reasoning.
⁸This can be viewed as indicating that basic formulas are not expressive enough to describe ω-trees. If we had allowed infinite disjunctions and conjunctions into the language, then a stable set would determine the set of trees.

Lemma 3.1: Every K45_n-i-stable set has a maximum i-model.

Theorem 3.2: Suppose S is a K45_n-i-stable set and (M, w) is a maximum i-model of S. Then S is an i-stable expansion of α iff (M, w) ⊨ O_i α.

We remark that an analogous result is proved by Lakemeyer [1993], except that he restricts attention to situations in the canonical model.

More evidence as to the reasonableness of our definitions is given by considering the properties of the operators N_i and O_i. As usual, we say that φ is valid, and write ⊨ φ, if (M, w) ⊨ φ for all situations (M, w). We write ⊨_Lak φ if φ is valid under Lakemeyer's semantics in the canonical model; we remark that ⊨_Lak is the notion of validity considered by Lakemeyer, since he is only interested in the canonical model.

Theorem 3.3: For all formulas φ, if ⊨ φ then ⊨_Lak φ. If φ ∈ ONL⁻_n, we have ⊨ φ iff ⊨_Lak φ.

This theorem says that Lakemeyer's notion of validity is stronger than ours, although the two notions agree with respect to formulas in the sublanguage ONL⁻_n. In fact, Lakemeyer's notion of validity is strictly stronger than ours. Lakemeyer shows that ⊨_Lak ¬O_i¬O_j p; under his semantics, it is impossible for i to only know that it is not the case that j only knows p. This seems counterintuitive. Why should this be an unattainable state of knowledge? Why can't j just tell i that it is not the case that he (j) only knows p? We would argue that the validity of this formula is an artifact of Lakemeyer's focus on the canonical model.

Roughly speaking, we would argue that the canonical model is not "canonical" enough. Although it includes all the possibilities in terms of basic formulas, it does not include all the possibilities in terms of the extended language. The formula O_i¬O_j p is easily seen to be satisfiable under our semantics.

Lakemeyer provides a collection of axioms that he proves are sound with respect to ⊨_Lak, and complete for formulas in ONL⁻_n. He conjectures that they are not complete with respect to the full language. It is not hard to show that all of Lakemeyer's axioms are sound with respect to our semantics as well. It follows from Theorem 3.3 and Lakemeyer's completeness result that these axioms are complete with respect to ONL⁻_n for our semantics too. It also follows from these observations that, as Lakemeyer conjectured, his proof system is not complete. This follows since everything provable in his system must be valid under our semantics, and ¬O_i¬O_j p is not valid under our semantics (although it is valid under his).

4 Discussion

We have shown how to extend two notions of only knowing to many agents. The key tool in both of these extensions was an appropriate canonical representation of the possibilities of the agents. Although we gave arguments showing that the way we chose to represent an agent's possibilities was reasonable, it would be nice to have a more compelling theory of "appropriateness". For example, why is it appropriate to use arbitrary trees for the non-introspective logics, and i-objective trees for the introspective logics? Would a different representation be appropriate if we had changed the underlying language?
Perhaps a deeper study of the connections between ω-trees and the knowledge structures of [Fagin and Vardi 1986; Fagin, Halpern, and Vardi 1991] may help clarify some of these issues.

Another open problem is that of finding a complete axiomatization for ONL_n. We observed that Lakemeyer's axioms were not complete with respect to his semantics. In fact, it seems that these axioms are essentially complete for ONL⁻_n under our semantics.⁹ We hope to report on these results in the future.

Acknowledgements: I would like to thank Ron Fagin, Gerhard Lakemeyer, Grisha Schwarz, and Moshe Vardi for their helpful comments on an earlier draft of this paper.

⁹The reason we say "essentially complete" here is that one of the axioms has the form N_i α ⇒ ¬L_i α for all basic i-objective α falsifiable in K45_n. We need to extend this axiom to formulas that are not basic. But the axiom system K45_n does not apply to non-basic formulas. We deal with this problem by extending the language so that we can talk about satisfiability within the language. The axiom then becomes Con(¬α) ⇒ (N_i α ⇒ ¬L_i α), where Con(α) holds if α is satisfiable.

References

Fagin, R., J. Y. Halpern, and M. Y. Vardi (1991). A model-theoretic analysis of knowledge. Journal of the ACM 38(2), 382-428. A preliminary version appeared in Proc. 25th IEEE Symposium on Foundations of Computer Science, 1984.

Fagin, R. and M. Y. Vardi (1986). Knowledge and implicit knowledge in a distributed environment: preliminary report. In J. Y. Halpern (Ed.), Theoretical Aspects of Reasoning about Knowledge: Proc. 1986 Conference, San Mateo, CA, pp. 187-206. Morgan Kaufmann.

Halpern, J. Y. (1993). A critical reexamination of default logic, autoepistemic logic, and only knowing. In Proceedings, 3rd Kurt Gödel Colloquium. Springer-Verlag.

Halpern, J. Y. and Y. Moses (1984). Towards a theory of knowledge and ignorance. In Proc. AAAI Workshop on Non-monotonic Logic, pp. 125-143.
Reprinted in Logics and Models of Concurrent Systems (ed. K. Apt), Springer-Verlag, Berlin/New York, pp. 459-476, 1985.

Halpern, J. Y. and Y. Moses (1992). A guide to completeness and complexity for modal logics of knowledge and belief. Artificial Intelligence 54, 319-379.

Hughes, G. E. and M. J. Cresswell (1968). An Introduction to Modal Logic. London: Methuen.

Lakemeyer, G. (1993). All they know: a study in multi-agent autoepistemic reasoning. In Proc. Thirteenth International Joint Conference on Artificial Intelligence (IJCAI '93). Unpublished manuscript.

Levesque, H. J. (1990). All I know: A study in autoepistemic logic. Artificial Intelligence 42(3), 263-309.

Moore, R. C. (1985). Semantical considerations on nonmonotonic logic. Artificial Intelligence 25, 75-94.

Morgenstern, L. (1990). A theory of multiple agent nonmonotonic reasoning. In Proc. National Conference on Artificial Intelligence (AAAI '90), pp. 538-544.

Parikh, R. (1991). Monotonic and nonmonotonic logics of knowledge. Fundamenta Informaticae 15(3,4), 255-274.

Stalnaker, R. (1980). A note on nonmonotonic modal logic. Technical report, Dept. of Philosophy, Cornell University. A slightly revised version will appear in Artificial Intelligence.

Vardi, M. Y. (1985). A model-theoretic analysis of monotonic knowledge. In Proc. Ninth International Joint Conference on Artificial Intelligence (IJCAI '85), pp. 509-512.
All They Know About

Gerhard Lakemeyer
Institute of Computer Science III
University of Bonn
Römerstr. 164
5300 Bonn 1, Germany
gerhard@cs.uni-bonn.de

Abstract

We address the issue of agents reasoning about other agents' nonmonotonic reasoning ability in the framework of a multi-agent autoepistemic logic (AEL). In single-agent AEL, nonmonotonic inferences are drawn based on all the agent knows. In a multi-agent context, such as Jill reasoning about Jack's nonmonotonic inferences, this assumption must be abandoned, since it cannot be assumed that Jill knows everything Jack knows. Given a specific subject matter like Tweety the bird, it is more realistic, and sufficient, if Jill only assumes to know all Jack knows about Tweety in order to arrive at Jack's nonmonotonic inferences about Tweety. This paper provides a formalization of all an agent knows about a certain subject matter based on possible-world semantics in a multi-agent AEL. Besides discussing various properties of the new notion, we use it to characterize formulas that are about a subject matter in a very strong sense. While our main focus is on subject matters that consist of atomic propositions, we also address the case where agents are the subject matter.

Introduction

Most of the research on nonmonotonic reasoning has concentrated on the single-agent case. However, there is little doubt that agents who have been invested with a nonmonotonic reasoning mechanism should be able to reason about other agents and their ability to reason nonmonotonically as well. For example, if we assume the common default that birds normally fly, and if Jill tells Jack that she has just bought a bird, then Jill should be able to infer that Jack thinks that her bird flies. Other examples, from areas like planning and temporal projection, can be found in [Mor90].

One of the main formalisms of nonmonotonic reasoning is autoepistemic logic (AEL) (e.g. [Moo85]).
The basic idea is that the beliefs of agents are closed under perfect introspection, that is, they know¹ what they know and do not know. Nonmonotonic reasoning comes about in this framework in that agents can draw inferences on the basis of their own ignorance. For example, Jack's flying-bird default can be phrased as the belief that birds fly unless known otherwise. If Jill tells Jack that she has a bird called Tweety, Jack will conclude that Tweety flies, since he does not know of any reason why Tweety should not be able to fly.

¹We use the terms knowledge and belief interchangeably in this paper for stylistic reasons. However, the formalism presented allows agents to have false beliefs.

Note that in standard AEL agents determine their beliefs, and especially their non-beliefs, with respect to everything they believe. Thus, if we think of Jack's beliefs being represented by a knowledge base (KB), then Jack's belief that Tweety flies follows in AEL because the KB is all he believes or, as we also say, because Jack only-believes the formulas in the KB. (See [Lev90] for a formalization of AEL with an explicit notion of only-knowing.) If we want to extend autoepistemic logic to the multi-agent case, the strong assumption of having access to everything that is known is not warranted when applied to knowledge about other agents' knowledge. In other words, while an agent may have access to everything she herself knows, it is certainly not the case that she knows everything agents other than herself know. How then should Jill, for example, conclude that Jack autoepistemically infers that Tweety flies without pretending that she knows everything Jack knows? Intuitively, it seems sufficient for Jill to assume that she knows everything Jack knows about Tweety, which is quite plausible since Jack heard about Tweety through Jill. Thus, if Jill believes that all Jack knows about Tweety is that he is a bird and, hence, that the flying-bird default applies to Tweety, then Jill is justified in concluding that Jack believes that Tweety flies.
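The Tweety pattern in single-agent AEL can be reproduced with a small guess-and-check computation over Moore-style stable expansions. The sketch below is our own illustration (not the multi-agent machinery developed in this paper): it grounds the default as bird ∧ ¬L¬fly ⊃ fly, treats the modal atom L¬fly as a fresh propositional atom, and validates each guess about it by truth-table entailment.

```python
from itertools import product

ATOMS = ("bird", "fly", "L_not_fly")  # L_not_fly stands for the modal atom L(~fly)

def entails(premises, goal):
    """Propositional entailment by truth-table enumeration over ATOMS."""
    for vals in product([False, True], repeat=len(ATOMS)):
        env = dict(zip(ATOMS, vals))
        if all(p(env) for p in premises) and not goal(env):
            return False
    return True

# Jack's base beliefs: Tweety is a bird, and the grounded default
# bird & ~L(~fly) -> fly.
BASE = [
    lambda e: e["bird"],
    lambda e: e["fly"] if (e["bird"] and not e["L_not_fly"]) else True,
]

def expansions():
    """Guess whether ~fly is believed; keep guesses that are consistent and
    self-supporting (a simplified stable-expansion fixpoint test)."""
    found = []
    for believes_not_fly in (False, True):
        assumption = (lambda e: e["L_not_fly"]) if believes_not_fly \
                     else (lambda e: not e["L_not_fly"])
        theory = BASE + [assumption]
        consistent = not entails(theory, lambda e: False)
        derives_not_fly = entails(theory, lambda e: not e["fly"])
        if consistent and derives_not_fly == believes_not_fly:
            found.append((believes_not_fly, entails(theory, lambda e: e["fly"])))
    return found
```

The unique surviving guess is the one in which ¬fly is not believed, and under it fly is entailed: Jack concludes that Tweety flies, exactly because he does not know of any reason why Tweety cannot fly.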
In this paper, we present an account of multi-agent AEL which explicitly models the notion of only-knowing about a subject matter, which we also call only-knowing-about, for short. The work is based on possible-world semantics and takes ideas from a single-agent version of only-knowing-about [Lak92], combining them with a multi-agent version of only-knowing [Lak93].

662 Lakemeyer
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

None of the existing approaches to multi-agent AEL [Mor90, MG92, Hal93, Lak93]² addresses the issue of only-knowing-about, and they are thus forced, one way or the other, into unrealistic assumptions when it comes to reasoning about the autoepistemic conclusions of other agents. Morgenstern and Guerreiro [Mor90, MG92], for example, run into the problem of what they call arrogance, where an agent i ascribes a non-belief to an agent j only on the basis of i herself not knowing whether j has this belief. Their solution is to heuristically limit the use of such arrogant behavior depending on specific applications. In [Lak93, Hal93], the only way to model the Tweety example is to assume that Jill knows all Jack knows.

The multi-agent AEL OL_n of [Lak93] is the starting point of this paper. OL_n's syntax and semantics are presented in the next section. We then extend OL_n by incorporating an explicit notion of only-knowing-about. Besides discussing various properties of the new notion, we use it to characterize formulas that are about a subject matter in a very strong sense. While our main focus is on subject matters that consist of atomic propositions, we also address the case where agents are the subject matter. We conclude with a summary of the results and future work.

The Logic OL_n

After introducing the syntax of the logic, we define the semantics in two stages.
First we describe that part of the semantics that does not deal with only-knowing. In fact, this is just an ordinary possible-world semantics for n agents with perfect introspection. Then we introduce the necessary extensions that give us the semantics of only-knowing. The properties of only-knowing are discussed briefly. A detailed account is given in [Lak93].

Syntax

Definition 1 The Language
The primitives of the language consist of a countably infinite set of atomic propositions (or atoms), the connectives ∨ and ¬, and the modal operators L_i and O_i for 1 ≤ i ≤ n. (Agents are referred to as 1, 2, ..., n.) Formulas are formed in the usual way from these primitives.³ L_i α should be read as "agent i believes α" and O_i α as "α is all agent i believes." A formula α is called basic iff there are no occurrences of O_i (1 ≤ i ≤ n) in α.

²Some notes on multi-agent AEL appear also in [HM84]. There has also been work in applying nonmonotonic theories to special multi-agent settings such as speech act theory, e.g. [Per87, AK88]. Yet these approaches do not aim to provide general-purpose multi-agent nonmonotonic formalisms.
³We will freely use other connectives like ∧, ⊃ and ≡, which should be understood as syntactic abbreviations of the usual kind.

Definition 2 A modal operator occurs at depth n of a formula α iff it occurs within the scope of exactly n modal operators. For example, given α = p ∧ L₁¬L₂(L₃q ∨ ¬O₂r), L₁ occurs at depth 0, L₂ at depth 1, and L₃ and O₂ both occur at depth 2.

Definition 3 A formula α is called i-objective (for i = 1, ..., n) iff every modal operator at depth 0 is of the form O_j or L_j with i ≠ j.

In other words, i-objective formulas talk about the external world from agent i's point of view, which includes beliefs of other agents but not his own. For example, (p ∨ L₂q) ∧ ¬O₃L₁p is 1-objective, but (p ∨ L₂q) ∧ ¬L₁O₃p is not.
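Definitions 2 and 3 are purely syntactic, so they are easy to operationalize. In the sketch below (the tuple encoding and function name are ours), formulas are nested tuples, and a formula is i-objective iff every modal operator at depth 0 belongs to an agent other than i.

```python
def is_i_objective(f, i):
    """f is a nested tuple:
    ('atom', p) | ('not', g) | ('or', g, h) | ('and', g, h) | ('L', j, g) | ('O', j, g).
    i-objective: every modal operator at depth 0 is L_j or O_j with j != i."""
    op = f[0]
    if op == "atom":
        return True
    if op == "not":
        return is_i_objective(f[1], i)
    if op in ("or", "and"):
        return is_i_objective(f[1], i) and is_i_objective(f[2], i)
    if op in ("L", "O"):
        # This operator itself sits at depth 0; anything nested inside it is
        # at depth >= 1 and therefore unconstrained.
        return f[1] != i
    raise ValueError(op)
```

On the examples above, (p ∨ L₂q) ∧ ¬O₃L₁p comes out 1-objective, while (p ∨ L₂q) ∧ ¬L₁O₃p does not, because L₁ occurs at depth 0.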
The Semantics of Basic Formulas

Basic formulas are given a standard possible-world semantics [Kri63, Hin62, Hin71], which the reader is assumed to be familiar with.⁴ Roughly, a possible-world model consists of worlds, which determine the truth of atomic propositions, and binary accessibility relations between worlds. An agent's beliefs at a given world w are determined by what is true in all those worlds that are accessible to the agent from w. Since we are concerned with agents whose beliefs are consistent and closed under perfect introspection, we restrict the accessibility relations in the usual way. The resulting logic is called KD45_n.⁵

Definition 4 A KD45_n-Model
M = (W, π, R₁, ..., R_n) is called a KD45_n-model (or simply model) iff
1. W is a set (of worlds).
2. π is a mapping from the set of atoms into 2^W.
3. R_i ⊆ W × W for 1 ≤ i ≤ n.
4. R_i is serial, transitive, and Euclidean⁶ for 1 ≤ i ≤ n.

Given a model M = (W, π, R₁, ..., R_n) and a world w ∈ W, the meaning of basic formulas is defined as follows: Let p be an atom and α and β arbitrary basic formulas.
w ⊨ p iff w ∈ π(p)
w ⊨ ¬α iff w ⊭ α
w ⊨ α ∨ β iff w ⊨ α or w ⊨ β
w ⊨ L_i α iff for all w', if w R_i w' then w' ⊨ α

The Canonical Model

It is well known that, as far as basic formulas are concerned, it suffices to consider just one model, the so-called canonical model [HC84, HM92]. This canonical model will be used later on to define the semantics of only-knowing.

⁴See [HC84, HM92] for an introduction.
⁵We use the subscript n to indicate that we are concerned with the n-agent case.
⁶R_i is Euclidean iff ∀w, w', w'', if w R_i w' and w R_i w'', then w' R_i w''.
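Definition 4 and the satisfaction clauses can be checked mechanically on finite structures. The following sketch (our names; a finite toy model, whereas the canonical model used later is infinite) tests the three relation properties and evaluates basic formulas by quantifying over accessible worlds.

```python
def is_serial(R, W):
    """Every world has at least one R-successor."""
    return all(any((w, v) in R for v in W) for w in W)

def is_transitive(R):
    return all((a, c) in R
               for (a, b) in R for (b2, c) in R if b == b2)

def is_euclidean(R):
    return all((b, c) in R
               for (a, b) in R for (a2, c) in R if a == a2)

def holds(M, w, f):
    """M = (W, pi, R): worlds, valuation pi[atom] = set of worlds, R[i] = relation.
    f: ('atom', p) | ('not', g) | ('or', g, h) | ('L', i, g)."""
    W, pi, R = M
    op = f[0]
    if op == "atom":
        return w in pi[f[1]]
    if op == "not":
        return not holds(M, w, f[1])
    if op == "or":
        return holds(M, w, f[1]) or holds(M, w, f[2])
    if op == "L":
        return all(holds(M, v, f[2]) for v in W if (w, v) in R[f[1]])
    raise ValueError(op)
```

For instance, with W = {1, 2}, R₁ = {(1, 2), (2, 2)} (serial, transitive, and Euclidean, hence a KD45 relation) and p true only at world 2, agent 1 believes p at world 1 even though p is false there, illustrating that beliefs need not be true.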
The canonical KD45,-model M, has as worlds pre- cisely all the maximally consistent sets and a world zu’ is &accessible from w just in case all of i’s beliefs at w are included in w’. Definition 6 The Canonical KD45,-Model Me The canonical model is a Kripke structure M, = (W,, X, RI,. . . , &) such that 1. WC = (w 1 w is a maximally consistent set). 2. For all atoms p and w E WC, w E r(p) iflp E w. 3. wR~w’ i$ for all formulas L~Q, if Lice E w then cy E w’. The following (well known) theorem tells us that noth- ing is lost from a logical point of view if we confine our attention to the canonical model. Theorem 1 M, is a KD45,-model and for every set of basic formulas I’, I? is satisfiable7 ifl it is satisfiable in MC. The Semantics of Only-Knowing Given this classical possible-world framework, what does it mean for an agent i to only-know, say, an atom p at some world w in a model M? Certainly, i should believe p, that is, all worlds that are i-accessible from w should make p true. Furthermore, i should believe as little else as possible apart from p. For example, i should neither believe Q nor believe that j believes p etc. Minimizing knowledge using possible worlds sim- ply means maximizing the number of accessible worlds. Thus, in our example, there should be an accessible world where q is false and another one where j does not believe p and so on. It should be clear that in order for w to satisfy only-knowing a this way, the model A4 must have a huge supply of worlds that are accessible from w. While not essential for the definition of only- knowing, it turns out to be very convenient to simply restrict our attention to models that are guaranteed to contain a sufficient supply of worlds. In fact, we will consider just one, namely the canonical model of KD45n. Let us call the set of all formulas that are true at some world w in some model of KD45, a world state. 
The canonical model has the nice property that it contains precisely one world for every possible world state, since world states are just maximally consistent sets. With that, agent i is said to only-know a formula α at some world w (in the canonical model) just in case α is believed and any world w' which satisfies α, and from which the same worlds are i-accessible as from w, is itself i-accessible from w.

Definition 7 Given a model M = (W, π, R₁, ..., R_n) and worlds w and w' in W, we say that w and w' are i-equivalent (w ≈_i w') iff for all worlds w* ∈ W, w R_i w* iff w' R_i w*.

Given an arbitrary formula α and a world w in a model M, let
w ⊨ O_i α iff for all w' s.t. w ≈_i w', w R_i w' iff w' ⊨ α.

A formula α is a logical consequence of a set of formulas Γ (Γ ⊨ α) iff for all worlds w in the canonical model M_c, if w ⊨ γ for all γ ∈ Γ, then w ⊨ α. As usual, we say that α is valid (⊨ α) iff {} ⊨ α. A formula α is satisfiable iff ¬α is not valid.

⁷A set of basic formulas Γ is satisfiable iff there is a model M = (W, π, R₁, ..., R_n) and w ∈ W such that w ⊨ γ for all γ ∈ Γ.

Some Properties of the Logic

Here we can only sketch some of the properties of OL_n. The logic is discussed in more detail in [Lak93].⁸ See also Halpern's logic of only-knowing [Hal93], which is closely related to OL_n. Given Theorem 1, it is clear that the properties of basic formulas (no O_i's) are precisely those of KD45_n. Concerning only-knowing, we restrict ourselves to the connection between only-knowing and a natural multi-agent version of the stable expansions of autoepistemic logic [Moo85].

Definition 8 i-Epistemic State
A set of basic formulas Γ is called an i-epistemic state iff there is a world w in M_c such that for all basic γ, w ⊨ L_i γ iff γ ∈ Γ.

Given a set of basic formulas Γ, let Γ̄ = {basic γ | γ ∉ Γ}, L_iΓ = {L_iγ | γ ∈ Γ}, and ¬L_iΓ̄ = {¬L_iγ | γ ∈ Γ̄}.

Definition 9 i-Stable Expansion
Let A be a set of basic formulas and let ⊨_{KD45} denote logical consequence in KD45_n. Γ is called an i-stable expansion of A iff Γ = {basic γ | A ∪ L_iΓ ∪ ¬L_iΓ̄ ⊨_{KD45} γ}.

Note that the use of ⊨_{KD45} instead of logical consequence in propositional logic is essentially the only difference between Moore's original definition and this one. The following theorem establishes that the i-stable expansions of a formula α correspond precisely to the different i-epistemic states of an agent i who only-knows α.
Γ is called an i-stable expansion of A iff Γ = {basic γ | A ∪ L_i Γ ∪ ¬L_i Γ' ⊨_KD45 γ}.

Note that the use of ⊨_KD45 instead of logical consequence in propositional logic is essentially the only difference between Moore's original definition and this one. The following theorem establishes that the i-stable expansions of a formula α correspond precisely to the different i-epistemic states of an agent i who only-knows α.

Theorem 2 Only-Knowing and i-Stable Expansions
Let α be a basic formula, w ∈ W_c, and let Γ = {basic γ | w ⊨ L_i γ}. Then w ⊨ O_i α iff Γ is an i-stable expansion of {α}. (See also [Hal93] for an analogous result.)

[8] As a minor difference, [Lak93] uses K45_n instead of KD45_n as the base logic, that is, beliefs are not required to be consistent.

664 Lakemeyer

E-Clauses

We end our discussion of OL_n by extending the notion of a clause of propositional logic to the modal case (e-clauses) and show that i-epistemic states are uniquely determined by the i-objective e-clauses they contain. This will be useful in developing the semantics of only-knowing-about in the next section.

Definition 10 E-Clauses
An e-clause is a disjunction of the form
(∨_{i=1}^{u} l_i) ∨ (∨_{i=u+1}^{v} L_{j_i} c_i) ∨ (∨_{i=v+1}^{m} ¬L_{j_i} d_i),
where the l_i are literals and the c_i and d_i are themselves e-clauses.

Definition 11 Extended Conjunctive Normal Form
A basic formula α is in extended conjunctive normal form (ECNF) iff α = ∧ α_i and every α_i is an e-clause.

Lemma 1 For every basic α there is an α* in ECNF such that ⊨ α ≡ α*.

Every i-epistemic state is uniquely determined by the i-objective e-clauses it contains.

Theorem 3 Let Γ and Γ' be two i-epistemic states. If Γ and Γ' agree on all their i-objective e-clauses, then Γ = Γ'.

We now extend OL_n to a logic that allows us to express things like "α is all agent i knows about subject π." A subject matter is defined as any finite subset π of atoms.[9] For every agent i and subject matter π, let O_i(π) be a new modal operator. O_i(π)α should be read as "α is all agent i knows about π." If we refer to a subject matter extensionally, we sometimes leave out the curly brackets. For example, we write O_i(p, q)α instead of O_i({p, q})α. Formulas in this extended language are formed in the usual way except for the following restriction: O_i(π) may only be applied to basic formulas, that is, for any given O_i(π)α, the only modal operators allowed in α are L_1, ..., L_n. Given a formula α and a subject matter π, we say that α mentions π iff at least one of the atoms of π occurs in α.

To define the semantics of O_i(π)α, we follow an approach similar to [Lak92], where only-believing-about is reduced to only-believing after "forgetting" everything that is irrelevant to π. Given a world w and a subject matter π, forgetting irrelevant beliefs is achieved by mapping w into a world w|_{π,i} that is just like w except that only those beliefs of i are preserved that are relevant to π. Assuming we have such a w|_{π,i}, then i only-believes α about π at w just in case i believes it at w and i only-believes α at w|_{π,i}.

[9] The results of the paper do not hinge on π being finite. What matters is that we have a way to refer to each subject matter in our language. If π is finite, this can always be done.

The crucial part of the semantics then is the construction of w|_{π,i}. It is obtained in two steps. First we collect all the beliefs of i about π in a set Γ^w_{π,i}. Theorem 3 allows us to restrict ourselves to i-objective e-clauses only. What does it mean for i to believe an e-clause c that is relevant to π? A reasonable answer seems to be the following: any formula that is believed by i and that implies c must mention the subject matter. For example, if π = {p} and all i believes at w is (p ∨ q) ∧ r, then the clauses selected to be relevant beliefs about π are (p ∨ q) and weaker ones like (p ∨ q ∨ s). However, neither (p ∨ r) nor any clause not mentioning p is selected. (p ∨ r) is disqualified because it is contingent on r, which is also believed, thus not conveying any information about p. Given Γ^w_{π,i}, it is then easy to define w|_{π,i} in such a way that it believes only the formulas contained in Γ^w_{π,i}.

Definition 12 Given a world w in the canonical model M_c, let
Γ^w_{π,i} = {c | c is an i-objective e-clause, w ⊨ L_i c, and for all i-objective basic α, if w ⊨ L_i α and ⊨ α ⊃ c then α mentions π}.

Lemma 2 Given a world w of M_c and a subject matter π, let
Δ_1 = {L_i γ | γ is i-objective and L_i Γ^w_{π,i} ⊨ L_i γ} ∪ {¬L_i γ | γ is i-objective and L_i Γ^w_{π,i} ⊭ L_i γ}
Δ_2 = {l | l is a literal in w} ∪ ∪_{j≠i} {L_j γ | L_j γ ∈ w} ∪ ∪_{j≠i} {¬L_j γ | ¬L_j γ ∈ w}.
Then Δ_1 ∪ Δ_2 is consistent.

Definition 13 Given w, π, and Δ_1 and Δ_2 of the previous lemma, let w|_{π,i} be a world (= maximally consistent set) that contains Δ_1 and Δ_2.

Note that by containing Δ_2, w|_{π,i} is exactly like w except for the beliefs of agent i. Furthermore, Δ_1 makes sure that agent i at w|_{π,i} believes no more than what follows from Γ^w_{π,i}, that is, the agent believes only what is relevant to the subject matter π.

Lemma 3 w|_{π,i} is unique.

With this machinery we are finally ready to formally define the semantics of O_i(π)α for any basic formula α and subject matter π:
w ⊨ O_i(π)α iff w|_{π,i} ⊨ O_i α and w ⊨ L_i α.
Satisfiability, logical consequence, and validity in the extended logic are defined as for OL_n.

Some Properties of Only-Knowing-About

So far we do not have a complete axiomatization of only-knowing-about. The following properties, which are natural generalizations of the single-agent case [Lak92], suggest that our definitions are reasonable.

Definition 14 Given a formula α, let π_α = {p | p is an atom that occurs in α}.

Representation and Reasoning 665

1) ⊨ O_i(π)α ⊃ L_i α.
Follows immediately from the definition.
2) ⊨ ¬O_i(π)α if ⊭ α and π ∩ π_α = {}.
In other words, an agent cannot only-know something about π that is totally irrelevant to π.
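The clause-selection idea behind Γ^w_{π,i} can be illustrated computationally in the purely propositional case. The sketch below is our own approximation, not the paper's exact construction: it quantifies only over entailed clauses rather than over all believed formulas, and the atom set and clause encoding are assumptions of the example.

```python
# Hedged propositional sketch of the selection in Definition 12: keep an
# entailed clause c only if every entailed clause implying c mentions pi.
from itertools import product, combinations

ATOMS = ['p', 'q', 'r', 's']

def assignments():
    for bits in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, bits))

def clause_true(clause, a):          # clause: frozenset of (atom, polarity)
    return any(a[x] == pol for (x, pol) in clause)

def all_clauses():
    lits = [(x, pol) for x in ATOMS for pol in (True, False)]
    for n in range(1, len(ATOMS) + 1):
        for c in map(frozenset, combinations(lits, n)):
            # skip tautologous clauses containing x and not-x
            if not any((x, True) in c and (x, False) in c for x in ATOMS):
                yield c

def entailed(kb, clause):            # kb: predicate on assignments
    return all(clause_true(clause, a) for a in assignments() if kb(a))

def mentions(clause, pi):
    return any(x in pi for (x, _) in clause)

def relevant_clauses(kb, pi):
    ent = [c for c in all_clauses() if entailed(kb, c)]
    # a non-tautologous clause b implies c iff b's literals are a subset of c's
    return [c for c in ent if all(mentions(b, pi) for b in ent if b <= c)]

# Example from the text: pi = {p}, beliefs (p v q) ^ r
kb = lambda a: (a['p'] or a['q']) and a['r']
sel = relevant_clauses(kb, {'p'})
```

Running this selects (p ∨ q) and weakenings like (p ∨ q ∨ s), while rejecting (p ∨ r) and r, matching the example in the text: r is an entailed clause implying (p ∨ r) that does not mention p.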
3) ⊨ O_i(π)α ≡ O_i(π)β if ⊨ α ≡ β.
In other words, the syntactic form of what is only-known about π does not matter.
4) ⊨ O_i(p)(p ∨ q) ⊃ (¬L_i p ∧ ¬L_i q ∧ ¬L_i ¬p ∧ ¬L_i ¬q).
Here, assuming any of the beliefs of the right-hand side implies that i must know more about p than just p ∨ q.
5) ⊨ O_i α ⊃ O_i(π_α)α.
This says that, if all you know is α and if the subject matter spans everything you know (see Definition 14), then surely α is all you know about this subject matter.

The following theorem characterizes cases where reasoning from only-knowing-about is the same as reasoning from only-knowing. This is interesting for at least two reasons. For one, only-knowing has a much simpler definition than only-knowing-about. For another, the theorem tells us that even though one usually does not know all another agent knows, there are cases where one may pretend to know all the other agent knows without drawing false conclusions. This will be very useful in the following section, where we show that Jill is indeed able to infer that Jack believes that Tweety flies.

Theorem 4 Let α and β be basic formulas such that π_β ⊆ π_α. Then ⊨ O_i(π_α)α ⊃ L_i β iff ⊨ O_i α ⊃ L_i β.

Jack, Jill, and Tweety Revisited

We now demonstrate that our formalism is able to model our initial Tweety example correctly. Let us assume that all Jack believes about Tweety is the flying-bird default for Tweety (α) and the fact that Tweety is a bird (β).
α = [Bird(Tweety) ∧ ¬L_jack ¬Fly(Tweety)] ⊃ Fly(Tweety)
β = Bird(Tweety)
The subject matter Tweety can be characterized as the set of relevant predicates that mention Tweety, i.e. π = {Bird(Tweety), Fly(Tweety)}. We then obtain
⊨ O_jack(π)(α ∧ β) ⊃ L_jack Fly(Tweety)   (1)
Proof: To prove this fact, note that π_{α∧β} = π. Thus Theorem 4 applies and it suffices to show that ⊨ O_jack(α ∧ β) ⊃ L_jack Fly(Tweety).
It suffices to show that ⊨ O_jack(α ∧ β) ≡ O_jack(Bird(Tweety) ∧ Fly(Tweety)), since ⊨ O_jack(Bird(Tweety) ∧ Fly(Tweety)) ⊃ L_jack Fly(Tweety) follows immediately from the semantics of O_jack.

Let w ⊨ O_jack(Bird(Tweety) ∧ Fly(Tweety)), that is, for all w' such that w ≈_jack w', wR_jack w' iff w' ⊨ (Bird(Tweety) ∧ Fly(Tweety)). For all w' s.t. w ≈_jack w' we obtain that w' ⊨ (Bird(Tweety) ∧ Fly(Tweety)) iff w' ⊨ α ∧ β, since w' ⊨ ¬L_jack ¬Fly(Tweety). Thus w ⊨ O_jack(α ∧ β).

Conversely, let w ⊨ O_jack(α ∧ β), that is, for all w' such that w ≈_jack w', wR_jack w' iff w' ⊨ (α ∧ β). Let w' be any world such that w ≈_jack w'. Let w' ⊨ Bird(Tweety) ∧ Fly(Tweety). Then w' ⊨ α ∧ β and, hence, wR_jack w'. Conversely, let wR_jack w'. Obviously, w' ⊨ Bird(Tweety). Assume w' ⊭ Fly(Tweety). Then w' ⊨ L_jack ¬Fly(Tweety) by assumption. Then there is a world w'' such that w ≈_jack w'' and w'' ⊨ ¬L_jack ¬Fly(Tweety) ∧ Bird(Tweety) ∧ Fly(Tweety). Hence w'' ⊨ (α ∧ β) and, therefore, wR_jack w'', contradicting the assumption that w' ⊨ L_jack ¬Fly(Tweety). Thus w ⊨ O_jack(Bird(Tweety) ∧ Fly(Tweety)).

Given (1) and ⊨ L_i(α ⊃ β) ⊃ (L_i α ⊃ L_i β) for all α, β and i, we immediately obtain the desired result
⊨ L_jill O_jack(π)(α ∧ β) ⊃ L_jill L_jack Fly(Tweety),
that is, if Jill knows what Jack knows about Tweety, then she also knows what default inferences Jack makes about Tweety.

Strictly π-Relevant Formulas

It seems very hard to define what it means for a formula to be about a subject matter on purely syntactic grounds. For example, while (p ⊃ q) is intuitively about p, (p ⊃ q) ∧ (¬p ⊃ q) (which is equivalent to q) is not. Also note that, while (q ∨ r) by itself is clearly not about p, (q ∨ r) becomes nontrivial information about p as part of the formula (q ∨ r) ∧ (p ≡ q). Our notion of only-knowing-about captures these subtleties of aboutness and yields a useful definition of formulas that are about a subject matter in a very strong sense.

Definition 15 Let α be a basic formula such that ⊭ α and let π be a subject matter. α is called strictly π-relevant iff O_i(π)α is satisfiable for some i.

Intuitively, every piece of information conveyed by a strictly π-relevant formula tells us something nontrivial about π. p, ¬p ∨ (q ∧ p), and (q ∨ r) ∧ (p ≡ q) are all strictly p-relevant; (p ⊃ q) ∧ (¬p ⊃ q), p ∧ (p ⊃ q), and (p ≡ p), on the other hand, are not. At first glance, one may want to include (p ∨ ¬p) among the p-relevant formulas. After all, O_i(p)(p ∨ ¬p) is satisfiable. However, the mention of p in p ∨ ¬p seems merely accidental since ⊨ ((p ∨ ¬p) ≡ α) for every valid α. Thus we feel justified in assuming that p ∨ ¬p does not convey any relevant information about p.[10] The following lemma identifies a simple case where formulas are strictly about a subject matter.

Lemma 4 Let α be a basic i-objective formula such that ⊭ α and ⊭ ¬α. Then α is strictly π_α-relevant.

Finally, the following theorem allows us to characterize strictly π-relevant basic formulas without appealing to belief or only-believing-about.

[10] In this light, O_i(p)(p ∨ ¬p) is best understood as saying that nothing is known about p.

Theorem 5 Let α be a basic i-objective formula such that ⊭ α and let π be a subject matter. Let
Γ = {c | c is a basic i-objective e-clause such that ⊨ α ⊃ c and for all basic i-objective β, if ⊨ α ⊃ β and ⊨ β ⊃ c then β mentions π}.
Then α is strictly π-relevant iff there are c_1, ..., c_k ∈ Γ such that ⊨ ∧_{i=1}^{k} c_i ⊃ α.

What Does Jill Know about Jack?

So far, we have assumed that the subject matter is a set of atomic propositions, just as in [Lak92]. In a multi-agent context, there is at least one other possible subject matter, namely other agents. In other words, we may want to ask the question what Jill believes about Jack's beliefs. For example, if all Jill believes is p ∨ (¬L_jack q ∧ p), then all Jill believes about Jack seems to be p ∨ ¬L_jack q. It turns out to be quite easy to formalize these ideas, requiring only minor changes to the definitions we already have.
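The test in Theorem 5 can be tried out in the purely propositional case. The sketch below is our own simplification: it approximates the quantifier over all formulas β by a quantifier over entailed clauses, and the atom set is an assumption of the example, so it is an illustration of the idea rather than the theorem itself.

```python
# Hedged propositional sketch of the Theorem 5 test: alpha is strictly
# pi-relevant iff the clauses in Gamma (entailed clauses all of whose
# entailed implicants mention pi) jointly entail alpha back.
from itertools import product, combinations

ATOMS = ['p', 'q', 'r']

def assignments():
    for bits in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, bits))

def clause_true(c, a):               # c: frozenset of (atom, polarity)
    return any(a[x] == pol for (x, pol) in c)

def nontaut_clauses():
    lits = [(x, pol) for x in ATOMS for pol in (True, False)]
    for n in range(1, len(ATOMS) + 1):
        for c in map(frozenset, combinations(lits, n)):
            if not any((x, True) in c and (x, False) in c for x in ATOMS):
                yield c

def strictly_relevant(alpha, pi):    # alpha: predicate on assignments
    ent = [c for c in nontaut_clauses()
           if all(clause_true(c, a) for a in assignments() if alpha(a))]
    gamma = [c for c in ent
             if all(any(x in pi for (x, _) in b) for b in ent if b <= c)]
    # does the conjunction of gamma entail alpha?
    return all(alpha(a) for a in assignments()
               if all(clause_true(c, a) for c in gamma))
```

On the examples in the text this reproduces the classification: p and (q ∨ r) ∧ (p ≡ q) come out strictly p-relevant, while (p ⊃ q) ∧ (¬p ⊃ q), being equivalent to q, does not.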
For simplicity, we only consider the subject matter of one agent. Thus for all agents i and j with i ≠ j, let O_i(j) be a new modal operator. As before, we require that O_i(j) only be applied to basic formulas.

Definition 16 Given a world w in the canonical model M_c, let
Γ^w_{j,i} = {c | c is an i-objective e-clause, w ⊨ L_i c, and for all i-objective basic α, if w ⊨ L_i α and ⊨ α ⊃ c then α mentions L_j}.

Given a world w, let w|_{j,i} be defined just as w|_{π,i} was defined, with Γ^w_{j,i} now taking the role of Γ^w_{π,i}. The semantics of only-knowing about another agent is then simply:
w ⊨ O_i(j)α iff w|_{j,i} ⊨ O_i α and w ⊨ L_i α.
For example, given i's knowledge base KB = p ∧ (¬L_j q ∨ q) ∧ L_k L_j p, where i, j, and k denote three different agents, we obtain the corresponding result. Finally, if we were interested only in what i herself knows about j's beliefs, that is, if we want to exclude L_k L_j p in the last example, we can do so by modifying the definition of Γ^w_{j,i} in that we require that α mentions L_j at depth 0.

Conclusion

While an agent i in general does not know all another agent j knows, i may well know all j knows about a specific subject matter π, which suffices for i to infer j's nonmonotonic inferences regarding π. In this paper we formalized such a notion of only-knowing-about for two kinds of subject matters and discussed some of its properties. In addition, we were able to use our new notion of only-knowing-about to specify what it means for a formula to be strictly about a given subject matter.

As for future work, it would be desirable to obtain a complete axiomatization of only-knowing-about, simply because proof theories provide very concise characterizations. The first-order case, of course, needs to be addressed as well. We believe that our work, apart from the specific context of multi-agent autoepistemic reasoning, sheds some light on the concept of aboutness. However, much more remains to be done before this intriguing and difficult issue is fully understood.
References

[AK88] Appelt, D. and Konolige, K., A Practical Nonmonotonic Theory of Reasoning about Speech Acts, in Proc. of the 26th Conf. of the ACL, 1988.
[Hal93] Halpern, J. Y., Reasoning about only knowing with many agents, these proceedings.
[HM84] Halpern, J. Y. and Moses, Y. O., Towards a Theory of Knowledge and Ignorance: Preliminary Report, in Proceedings of The Non-Monotonic Workshop, New Paltz, NY, 1984, pp. 125-143.
[HM92] Halpern, J. Y. and Moses, Y. O., A Guide to Completeness and Complexity for Modal Logics of Knowledge and Belief, Artificial Intelligence 54, 1992, pp. 319-379.
[HC84] Hughes, G. E. and Cresswell, M. J., A Companion to Modal Logic, Methuen & Co., London, 1984.
[Hin62] Hintikka, J., Knowledge and Belief: An Introduction to the Logic of the Two Notions, Cornell University Press, 1962.
[Hin71] Hintikka, J., Semantics for Propositional Attitudes, in L. Linsky (ed.), Reference and Modality, Oxford University Press, Oxford, 1971.
[Kri63] Kripke, S. A., Semantical Considerations on Modal Logic, Acta Philosophica Fennica 16, 1963, pp. 83-94.
[Lak92] Lakemeyer, G., All You Ever Wanted to Know about Tweety, in Proc. of the 3rd International Conference on Principles of Knowledge Representation and Reasoning, Morgan Kaufmann, 1992, pp. 639-648.
[Lak93] Lakemeyer, G., All They Know: A Study in Multi-Agent Nonmonotonic Reasoning, in Proc. of the 13th International Joint Conference on Artificial Intelligence (IJCAI-93), Morgan Kaufmann, 1993.
[Lev90] Levesque, H. J., All I Know: A Study in Autoepistemic Logic, Artificial Intelligence 42, 1990, pp. 263-309.
[Moo85] Moore, R., Semantical Considerations on Nonmonotonic Logic, Artificial Intelligence 25, 1985, pp. 75-94.
[Mor90] Morgenstern, L., A Theory of Multiple Agent Nonmonotonic Reasoning, in Proc. of AAAI-90, 1990, pp. 538-544.
[MG92] Morgenstern, L. and Guerreiro, R., Epistemic Logics for Multiple Agent Nonmonotonic Reasoning I, Symposium on Formal Reasoning about Beliefs, Intentions, and Actions, Austin, TX, 1992.
[Per87] Perrault, R., An Application of Default Logic to Speech Act Theory, in Proc. of the Symposium on Intentions and Plans in Communication and Discourse, Monterey, 1987.
Rule Based Updates on Simple Knowledge Bases

Chitta Baral
Department of Computer Science
University of Texas at El Paso
El Paso, Texas 79968, U.S.A.
chitta@cs.ep.utexas.edu

Abstract

In this paper we consider updates that are specified as rules and consider simple knowledge bases consisting of ground atoms. We present a translation of the rule based update specifications to extended logic programs using situation calculus notation so as to compute the updated knowledge base. We show that the updated knowledge base that we compute satisfies the update specifications and yet is minimally different from the original database. We then expand our approach to incomplete knowledge bases.

We relate our approach to the standard revision and update operators, the formalization of actions and their effects using situation calculus, and the formalization of database evolution using situation calculus.

Introduction

Most work on belief revision in the literature focuses on updating theories by sentences in the theory itself. Several different "update" operators (update, revision, contraction, erasure, forget, etc.) (KM89; GM88) and the relations between them have been studied (KM92), and postulates have been suggested for some of these operators (GM88; KM92). In this paper[1] we consider updates that are specified as rules (MT94b) (similar to rules in a logic program) and present methods to compute updated knowledge bases when knowledge bases consist of a set of ground atoms. The following example illustrates the kind of updates that we consider.

Consider a knowledge base consisting of three employees: John, Peter and Carl, which represent a certain department D in an organization. During an organizational shake-up the department has to be updated based on the new knowledge that "If John remains in the department D then Peter has to leave the department D, and if Carl remains in the department then John has to stay in the department".
[1] Supported by the grants NSF-IRI-92-11-662 and NSF-CDA 90-15-006.

It should be noted that the intended meaning (MT94a) of the first statement is different from the statement "either John leaves the department or Peter leaves the department". If the new knowledge were specified in propositional theory or in first order logic they would be equivalent. The "if" and "then" in the statement "If John remains in the department D then Peter has to leave the department D" are treated differently from the first order implication. Our intent is to give a higher priority to John than to Peter. This is necessary because we might like the language that specifies updates to have properties (say, like 'monotonicity') which are different from the ones held by the language of the database.

To specify such rules Marek and Truszczynski (MT94a) introduce the notion of revision programs. They define P-justified revision of simple knowledge bases by a revision program. In this paper we show how to compute P-justified revisions of knowledge bases using extended logic programs and situation calculus. Marek and Truszczynski's definition of P-justified revision is limited to the case when the CWA is assumed about the initial knowledge base. We extend the idea of P-justified revision to knowledge bases that may be incomplete and present an extended logic program that computes the revised knowledge base when the initial knowledge base may be incomplete. We then consider update rules that explicitly relate the initial knowledge base to the revised knowledge base and show how revisions can be computed for such updates. Such rules are beyond the scope of Marek and Truszczynski's revision programs.

Our approach of computing the revised knowledge base is similar to the formalization of database evolution (Rei92) but uses extended logic programs (GL91) instead of first order logic.
In its use of situation calculus and extended logic programs, our approach treats revision specifications as "actions" and a knowledge base as a "situation", and has similarity to the formalization of actions and their effects in (GL92). Our approach is different from the event calculus approach in (Ko92) and considers more complex revisions than discussed in it.

136 Automated Reasoning

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Revision Specifications

In this section we review the concept of revision specifications[2] and P-justified revision as defined in (MT94b). Let U be a denumerable set. Its elements are referred to as atoms. A knowledge base is any subset of U. By ¬U we mean the set {¬a : a ∈ U}. Elements of U ∪ ¬U are called literals.

A revision specification uses a syntax similar to logic programs except that it has two special operators "in" and "out". For any atom a, the intuitive meaning of in(a) is that the atom a is present in the revised knowledge base. Similarly, the meaning of out(a) is that the atom a is absent from the revised knowledge base. For any atom p in U, in(p) and out(p) are referred to as r-literals. The statement "If John remains in the department D then Peter has to leave the department D" is written as the revision rule:
out(peter) ← in(john)
We now formally define revision specifications and P-justified revision.

Definition 1 (MT94b) A revision rule can be of one of the following two forms:
in(p) ← in(q_1), ..., in(q_m), out(s_1), ..., out(s_n)   (1)
out(p) ← in(q_1), ..., in(q_m), out(s_1), ..., out(s_n)   (2)
where p, the q_i's and the s_j's are atoms. A collection of revision rules is called a revision specification.

A knowledge base B is an r-model of (satisfies) an r-literal in(p) (respectively out(p)) if p ∈ B (respectively p ∉ B). B is an r-model of the body of a rule if it satisfies each r-literal of the body.
B is an r-model of a rule C if the following condition holds: whenever B satisfies the body of C, then B satisfies the head of C. B is an r-model of a revision specification P if B satisfies each rule in P.

Definition 2 (MT94b) Let P be a revision specification. By norm(P) we denote the definite program obtained from P by replacing each occurrence of in(a) by a and each occurrence of out(b) by b'. The necessary change for P is the pair (I, O) where I = {a : a ∈ least model of norm(P)} and O = {b : b' ∈ least model of norm(P)}. P with necessary change (I, O) is said to be coherent if I ∩ O = ∅.

[2] Marek and Truszczynski used the term revision programs instead of revision specifications. We believe it to be more of a specification language (similar to the language A (GL92) for specifying effects of actions) that can be implemented in a logical language of choice, rather than a programming language.

Definition 3 (MT94b) Let P be a revision specification and D_I and D_R be two knowledge bases.
P_{D_R} is the revision program obtained from P by eliminating from P every rule of type 1 or 2 such that some q_i ∉ D_R or some s_j ∈ D_R.
P_{D_R}|D_I is the revision program obtained from P_{D_R} by eliminating from the body of each rule in P_{D_R} every in(a) with a ∈ D_I and every out(a) with a ∉ D_I.
If P_{D_R}|D_I with necessary change (I, O) is coherent and D_R = D_I ∪ I \ O, then D_R is called a P-justified revision of D_I, and we write D_I →_P D_R.

Intuitively, P_{D_R} is the set of rules obtained from P by removing all rules in P whose body is not satisfied by D_R; and P_{D_R}|D_I is the set of rules obtained from P_{D_R} by removing all r-literals that are satisfied by D_I from the bodies of rules in P_{D_R}.

Example 1 Let D_I = {a, b} and P_1 be the revision specification
{ out(b) ← in(a) }   (P_1)
Let D_R be {a}. P_{1 D_R} is the same as P_1 and P_{1 D_R}|D_I = {out(b) ←} and hence is coherent with the necessary change (∅, {b}), and D_R = D_I ∪ ∅ \ {b}. Hence, D_I →_{P_1} D_R.
Let P_2 be
{ out(b) ← in(a)
  out(a) ← in(b) }   (P_2)
It is easy to see that the P_2-justified revisions of D_I are {a} and {b}.

Let P_3 be
{ out(b) ← in(a)
  out(a) ← }   (P_3)
It is easy to see that the only P_3-justified revision of D_I is {b}.

Intuitively, we can consider P_1 as the logic program {out_b ← not out_a} and P_3 as the logic program
{ out_b ← not out_a
  out_a ← }

Let P_4 be
{ in(a) ← in(a)
  in(c) ← in(c) }   (P_4)
It is easy to see that the only P_4-justified revision of D_I is {a, b}.

Proposition 1 (MT94b) If a knowledge base D satisfies a revision specification P then D is the unique P-justified revision of D.

Proposition 2 (MT94b) Let P be a revision specification and D_I be a knowledge base. If a knowledge base D_R is a P-justified revision of D_I, then D_R is an r-model of P.
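Definitions 2 and 3 are directly executable for finite universes. The brute-force sketch below is our own reading of them, assuming a small finite universe so candidate revised bases can simply be enumerated; the rule encoding is ours.

```python
# Enumerate candidate D_R, build P_{D_R}|D_I, take the least model of the
# resulting definite program over r-literals, and keep D_R when it equals
# D_I u I \ O with (I, O) coherent (Definitions 2 and 3).
from itertools import chain, combinations

def least_model(defrules):           # defrules: list of (head, body) pairs
    model, changed = set(), True
    while changed:
        changed = False
        for (head, body) in defrules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def justified_revisions(P, DI, universe):
    # Rules are ("in"|"out", head_atom, [("in"|"out", atom), ...]).
    result = []
    for bits in chain.from_iterable(combinations(sorted(universe), n)
                                    for n in range(len(universe) + 1)):
        DR = set(bits)
        # P_DR: drop rules whose body D_R does not satisfy
        PDR = [r for r in P
               if all((a in DR) == (s == "in") for (s, a) in r[2])]
        # P_DR | D_I: drop body literals already satisfied by D_I
        red = [((r[0], r[1]),
                [(s, a) for (s, a) in r[2] if (a in DI) != (s == "in")])
               for r in PDR]
        m = least_model(red)
        I = {a for (s, a) in m if s == "in"}
        O = {a for (s, a) in m if s == "out"}
        if not (I & O) and DR == (DI | I) - O:
            result.append(DR)
    return result

P1 = [("out", "b", [("in", "a")])]
P2 = P1 + [("out", "a", [("in", "b")])]
P3 = P1 + [("out", "a", [])]
P4 = [("in", "a", [("in", "a")]), ("in", "c", [("in", "c")])]
DI = {"a", "b"}
```

Running this on the examples reproduces the text: P_1 yields {a} only (not the other minimal r-model {b}), P_2 yields {a} and {b}, P_3 yields {b}, and P_4 yields {a, b}.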
The extended logic program Lt(P U 01) where P is the revision specification and DI is the initial knowl- edge base, uses variables of three sorts: situation vari- ables s, s’, . . ., fluent variables f, f’, . . ., and revision3 variables r, r’, . . . . The program II( PU D) consists of the translations of the individual revision rules and the initial knowledge base in P and certain other rules. We now present the translation II(P U D) where s is the situation cor- responding to the initial knowledge base DI, r corre- spond to the revision dictated by the revision specifi- cation P and fes(T, s) is the situation corresponding to the knowledge base obtained by revising the initial knowledge base with the revision specification P. Algorithm 1 [Trunsluting Revision Specifications - with CWA about the initiud database] 1. Initial Database If p is proposition in the initial database then II(PU D) cant ains (1-l) hodds(p, s) and the rule (1.2) lholds(F, s) +- not holds(F, s) which encodes the CWA about the initial database. 2. Inertia Rule (2.1) holds(F, res(r, s)) + hobds(F, s), not ab( F, r, s) 3The revision variables correspond to the action vari- ables in situation calculus and in the translation of the language A to extended logic programs in (GL92) This rule is motivated by the minimality considerat& that only changes that happens to the initial knowledge base are the ones dictated by the revision specification. 3. TransIuting the revision rules (a) Each revision rule of the type (1) is translated to the rule (3.a.l) holds(p, res(r, s)) +-- hodds(ql, res(r, s)), . . . , holds(q,, res(r, s)), lhoZds(sl, res(r, s)), . . . , lhodds(s,, res(r, s)) (b) Each revision rule of the type (2) out(P) + ~n(ai>, . . ..in(qm).out(sl),...,Out(~~) is translated to the rules: (3.b.l) lhobds(p, res(r, s)) c hobds(ql, res(r, s)), . . . , holds(q,, res(r, s)), lho/ds(sl , res(r, s)), . . . , lhodds(s,, res(r, s)) (3.b.2) ab(p, a, s) + hodds(ql , res(r, s)), . . . 
, hodds(q,, res(r, s)), lhodds(sl, res(r, s)), . . . , lhodds(s,, res(r, s)) Since the inertia rule (2.1) is only for the positive facts we do not need a rule defining abnormality correspond- ing to (3.a.l), but we do need such a rule corresponding to (3.b.l) to block the inertia rule and avoid inconsis- t ency. 4. Completing the revised database To encode the CWA w.r.t. the revised database II(PU DI) contains the rule (4.1) lhobds(F, res(r, s)) t not holds(F, res(r, s)) EI Example 2 Consider DI and P2 from Example 1. the translation H(P2 U 01) consists of the following rules: holds@, s) hoIds( b, s) lhoZds(b, res(rl, s)) +- hodds(a, res(rl, s)) ab(b, rl, s) +- holds@, res(rl, s)) lholds(a, res(rI, s)) t hodds( b, res(r1, s)) ab(a, rl, s) + hodds(b, res(rl, s) 1.2 2.1 4.1 r~(PzUDr) cl Theorem 1 Let P be a revision specification corre- sponding to a revision operator r and DI be an initial database. Let II(PUD1) be the translation to extended logic programs. (i) DI -% DR implies there exists a consistent answer set A of II(P U 01) such that (a) f E DR iff hoZds(f, res(r, s)) E A. (b) f e DR iff lholds(f, res(r, s)) E A. (ii) If A is a consistent answer set of II(P U 01) then DI ~DR, 4 where DR = {f : holds(f, res(r, s)) E 138 Automated Reasoning Proof:(sketch) (i) Let A=AlUAaUAsUAdUAgUAs where, Al = {hoZds(f, res(r, s)) : f E DI \ 0) AZ = { hoZds(f, res(r, s)) : f E I} A3 = (-holds(f, res(r, s)) : f $! DR} A4 = {hoZds(f, s) : f E DI} A5 = (lhoZds(f, s) : f $ D1}u A6 = {ab(f,r,s) : f E 0) It is easy to see that A is consistent. To show A as an answer set of II(PUD1) we observe that Al, As, Aq, A5 come from application of the rules (2.1), (4.1), (1.1) and (1.2) respectively and A2 and A6 come from combined application of the rules (3.a.l), (3.b.l) and (3.b.2). Moreover, there is a one to one correspondence between PDF and RA where R consists of the rules from (3.a.l) and (3.b.l). 
Cl Example 3 The answer sets of rI(P2 U DI) are {hoZds(a, s), hoZds(b, s), hoZds(a, res(r1, s)), lhoZds(b, res(q, s))ab(b, ~1, s)} and {hoZds(a, s), hoZds(b, s), hoZds(b, res(q, s)), lhoZds(u, res(q, s))ub(u, ~1, s)} cl ule based Revision of Incomplete Knowledge Bases The approach in the last section and in (MT94b) as- sumes that the initial knowledge base is complete. i.e. there is CWA about the initial knowledge base. In this section we define P-justified revision of possibly incom- plete knowledge bases with respect to revision specifi- cations. We believe that it is more intuitive and under- standable to define the P-justified revision through a translation to an extended logic program than directly in the style given in the previous section and hence do the former in this section. Unless otherwise specified from now on by a knowl- edge base we mean a possibly incomplete knowledge base which is a subset of U U 47. As in the last sec- tion we translate the initial knowledge base and the revision specification to an extended logic program so as to compute the revised knowledge bases. As in the previous section, our translation uses situation calcu- lus notations. The translation of an initial knowledge base DI and the revision specification P denoted by l.&(P U DI) consists of the following: Algorithm 2 [Transdating Revision Specs - without CWA about the initial database] 1. Initial Database If p is proposition in the initial knowledge base IIinc(P U DJ) contains (1.1) hoZds(p, s) If lq is proposition in the initial knowledge base &,,(P u DI) contains (1.2) lhoZds(q, s) 2. Inertia RuZes (2.1) hoZds(F, res(r, s)) + hoZds(F, s), not ub(F, r, s) (2.2) lhoZds(F, res(r, s)) + lhoZds( F, s), not ub( F’, r, s) Since our initial knowledge base need two different inertia rules. may be incomplete we 3. Translating the revision rules (a) Each revision rule of the type (1) is translated to the rules (3.a.l) and (3.a.2) ub(p’, a, s) +- hoZds(ql, res(r, s)), . . . 
, hoZds(q,, res(r, s)), lhoZds(sl, res(r, s)), . . . , lhoZds(s,, res(r, s)) (b) Each revision rule of the type (2) is translated to the rules (3.b.l) and (3.b.2). cl Definition 4 Let P be a revision specification and DI be an initial knowledge base. If A is an an- swer set of lI i&P U DI) then the set DR = {f : hoZds(f, res(r, s)) E A} U {if : lhoZds(f, res(r, s)) E A} is said to be a P-justified revision of DI. A knowledge base B is a r-i-model of (satisfies) an r-literal in(p) (out(p) respectively) if p E B (lp E B, respectively). B is a r-i-model of the body of a rule if it satisfies each r-literal of the body. B is a r-i-model of a rule C if the following conditions hold: whenever B satisfies the body of C, then B satisfies the head of C. B is a r-i-model of a revision specification P if B satisfies each rule in P. Proposition 4 Let P be a revision specification and DI be a knowledge base. If DR is a P-justified revision of DI, then DR+D~ is minimal in the family { DSDI : D is a r-i-model of P ), where + denotes the symmetric difference. i.e. A +B=(A\B)u(B\A). Cl Specifying revisions that depend on the previous state The revision specifications defined in the previous sec- tions can only express the relationship between the ele- ments of the revised knowledge base. Although it uses the implicit assumption that there is minimal change to the initial database, it can not explicitly state any relation between the initial knowledge bases and the revised knowledge base. For example if we would like to say that “all assistant professors with 20 journal pa- pers are to be promoted to associate professors” we can not express it using revision specifications. In this section we extend revision specifications to allow us to specify such update descriptions. An extended revision rule can be of the following two forms: Automated Reasoning 139 in(p) +-- in(ql), . . . , in&J, out(e), . . . , out(sn), was-in(tl), . . . , was-in(tk), wus~out(ul), . . . 
, was-out(ul)   (3)
out(p) ← in(q1), . . . , in(qm), out(s1), . . . , out(sn), was-in(t1), . . . , was-in(tk), was-out(u1), . . . , was-out(ul)   (4)
where p, the qi's, sj's, ti's and ui's are atoms. An extended revision specification is a collection of extended revision rules. The statement "all assistant professors with 20 journal papers are to be promoted to associate professors" can be expressed using the following extended revision specification:
in(associate(X)) ← was-in(assistant(X)), was-in(haspaper(X, 20))
out(assistant(X)) ← was-in(assistant(X)), was-in(haspaper(X, 20))
Similar to the last section, we define revisions with respect to an extended revision specification using extended logic programs and situation calculus notation.

Algorithm 3 [Translating Extended Revision Specifications]
Our translation is the same as in Algorithm 2 except for the translation of the revision rules. The extended revision rules are translated as follows:
(a) Each extended revision rule of the type (3) is translated to the rules
(3.a.1') holds(p, res(r, s)) ← holds(q1, res(r, s)), . . . , holds(qm, res(r, s)), ¬holds(s1, res(r, s)), . . . , ¬holds(sn, res(r, s)), holds(t1, s), . . . , holds(tk, s), ¬holds(u1, s), . . . , ¬holds(ul, s)
(3.a.2') ab(p, r, s) ← holds(q1, res(r, s)), . . . , holds(qm, res(r, s)), ¬holds(s1, res(r, s)), . . . , ¬holds(sn, res(r, s)), not ¬holds(t1, s), . . . , not ¬holds(tk, s), not holds(u1, s), . . . , not holds(ul, s)
(b) Each extended revision rule of the type (4) is translated to the rules
(3.b.1') ¬holds(p, res(r, s)) ← holds(q1, res(r, s)), . . . , holds(qm, res(r, s)), ¬holds(s1, res(r, s)), . . . , ¬holds(sn, res(r, s)), holds(t1, s), . . . , holds(tk, s), ¬holds(u1, s), . . . , ¬holds(ul, s)
(3.b.2') ab(p, r, s) ← holds(q1, res(r, s)), . . . , holds(qm, res(r, s)), ¬holds(s1, res(r, s)), . . . , ¬holds(sn, res(r, s)), not ¬holds(t1, s), . . . , not ¬holds(tk, s), not holds(u1, s), . . .
, not holds(ul, s) □

The use of not ¬holds(ti, s) and not holds(uj, s) instead of holds(ti, s) and ¬holds(uj, s) in (3.a.2') and (3.b.2') is to be cautious when applying the inertia rules (GL92). For example, if DI = {assistant(john)} and we have the extended revision specification
out(assistant(X)) ← was-in(haspaper(X, 20))
for the update called "promote", we would not like to have assistant(john) in Dpromote because we are not sure if haspaper(john, 20) is false in DI. We would rather have Dpromote contain neither assistant(john) nor ¬assistant(john).

Definition 5 Let P be an extended revision specification and DI be an initial knowledge base. If A is an answer set of Πinc(P ∪ DI) then the set DR = {f : holds(f, res(r, s)) ∈ A} ∪ {¬f : ¬holds(f, res(r, s)) ∈ A} is said to be a P-justified revision of DI.

Relationship with standard update operators

In this section we discuss how rule based revision relates to standard revision and update operators. When we consider a knowledge base to be a set of propositional facts (with CWA) it is easy to see that the concepts of update and revision (KM92) coincide. For such knowledge bases the following proposition relates the standard definition of updates with P-justified revision.

Definition 6 For any revision rule S, fS is the propositional formula obtained by replacing each out(a) in S by ¬a and each in(a) in S by a, and treating ← as the implication. For any revision specification P, FP is the propositional formula obtained by the conjunction of all the fS's, for all S's in P. □

Proposition 5 Let P be a revision specification and DI be a knowledge base.
DR is a model (in the propositional sense) of DI ◦ FP, where ◦ is the revision operator (KM92), iff DR ÷ DI is minimal in the family {D ÷ DI : D is a r-model of P}. □

When we consider a knowledge base to be a set of literals, then a knowledge base may have several models, and update and revision (KM92) may be different depending upon the definition of closeness between models and between theories (knowledge bases).

Proposition 6 Let P be a revision specification and DI be a knowledge base. DR is a model of DI ◦ FP, where ◦ is the revision operator (KM92), iff DR ÷ DI is minimal in the family {D ÷ DI : D is a r-i-model of P}. □

From the above propositions it is clear that the P-justified revisions computed using the translations suggested in this paper do not compute all the models of the standard revisions (KM92). In Example 1 both {a} and {b} are r-models of P1 minimally different from DI, and both are models of DI ◦ FP1, but only {a} is a P1-justified revision.

One possible way to obtain all the models would be to translate the revision specification P1 to a first-order theory instead of an extended logic program and minimize the abnormality using circumscription. That has been the approach of Reiter (Rei92) to specify database evolution. On the other hand, in certain cases we might need revisions to be specified as rules instead of a formula, and in certain cases extended logic programs may be preferred over circumscription as a computing formalism.

Relation with A and its extensions

A is a specification language for representing effects of actions suggested by Gelfond and Lifschitz in (GL92). The e-propositions in A, which are of the form
A causes F if P1, . . . , Pn
correspond to the extended revision rule
in(F) ← was-in(P1), . . . , was-in(Pn)
when the domain consists of only action A and F, P1, . . . , Pn are positive atoms. Revision specifications of the form (1) and (2) are similar to constraints in AR (KL94), an extension of A.
Although the constraints in AR allow for formulas, we believe that if rules of the form (1) and (2) are used instead, it may be possible to state when a domain description in the language of AR will have models with unique transition functions.

Conclusion

In this paper we considered the language of revision specifications for specifying revision conditions as rules. We presented a translation to extended logic programs that uses situation calculus notation so as to compute the revised knowledge bases, given a knowledge base consisting of atoms and a revision specification. We then considered knowledge bases that may be incomplete and presented a translation for computing revisions in such a case. We also extended the language of revision specifications to allow rules explicitly relating the initial and the revised database. Finally, we compared our approach with the standard revise and update operators and with the specification language A.

We believe a more thorough study is necessary to further relate extended revision specifications to standard update operators and to languages for reasoning about actions. In particular, the impact of using rule based constraints instead of constraint formulas in AR needs to be studied.

Acknowledgement

I would like to thank Prof. Wiktor Marek, whose talk on "Revision Programs" at UT El Paso in Dec 93 triggered the ideas expanded in this paper. I would also like to thank the anonymous referees for their valuable comments.

References

M. Gelfond and V. Lifschitz. Classical negation in logic programs and disjunctive databases. New Generation Computing, pages 365-387, 1991.
M. Gelfond and V. Lifschitz. Representing actions in extended logic programs. In Joint International Conference and Symposium on Logic Programming, pages 559-573, 1992.
P. Gardenfors and D. Makinson. Revisions in knowledge systems using epistemic entrenchment. In Proc.
2nd International Conference on Theoretical Aspects of Reasoning about Knowledge, pages 1413-1419, 1988.
G. Kartha and V. Lifschitz. Actions with indirect effects. To appear in KR 94, 1994.
H. Katsuno and A. Mendelzon. A unified view of propositional knowledge base updates. In Proc. of IJCAI-89, pages 1413-1419, 1989.
H. Katsuno and A. Mendelzon. On the difference between updating a knowledge base and revising it. In Proc. of KR 92, pages 387-394, 1992.
R. Kowalski. Database updates in the event calculus. The Journal of Logic Programming, 12:121-146, 1992.
W. Marek and M. Truszczynski. Revision programming. Manuscript, 1994.
W. Marek and M. Truszczynski. Revision specifications by means of programs. Manuscript, 1994.
R. Reiter. Formalizing database evolution in the situation calculus. In ICOT, editor, Proc. of the International Conference on Fifth Generation Computer Systems, pages 600-609, 1992.
Termination Analysis of OPS5 Expert Systems*
Hsiu-yen Tsai and Albert Mo Kim Cheng
Department of Computer Science, University of Houston, Houston, Texas 77204-3475
Email: {hsiuyen, cheng}@cs.uh.edu

Abstract

Bounded response time is an important requirement when rule-based expert systems are used in real-time applications. In case the rule-based system cannot terminate in bounded time, we should detect the "culprit" conditions causing the non-termination to assist programmers in debugging. This paper describes a novel tool which analyzes OPS5 programs to achieve this goal. The first step is to verify that an OPS5 program can terminate in bounded time. A graphical representation of an OPS5 program is defined and evaluated. If the OPS5 program is not expected to terminate, the "culprit" conditions are detected. These conditions are then used to correct the problem by adding extra rules to the original program.

Introduction

As rule-based expert systems become widely adopted in new application domains such as real-time systems, ensuring that they meet stringent timing constraints in these safety-critical and time-critical environments emerges as a challenging design problem. In real applications, rule firings are triggered by changes in the environment. The computation time of an expert system is highly unpredictable and dependent on the working memory conditions. If the computation takes too long, the expert system may not have sufficient time to respond to the ongoing changes in the environment, making the result of the computation useless or even harmful to the system being monitored or controlled. To remedy this problem, two solutions are proposed in the literature. The first one is to reduce the execution time via parallelism in the matching phase and/or firing phase of the recognize-act cycle. Several approaches (Ishida 1991; Kuo & Moldovan 1991; Schmolze 1991; Pasik 1992; Cheng 1993) have been provided to achieve this goal.
The other solution is to optimize the expert system by modifying or resynthesizing the rule base if the response time is found to be inadequate (Zupan & Cheng 1994).

*This material is based upon work supported in part by the National Science Foundation under Award No. CCR-9111563 and by the Texas Advanced Research Program under Grant No. 3652270.

There have been few attempts to formalize the question of whether a rule-based program has bounded response time. Some formal frameworks are introduced in (Browne, Cheng, & Mok 1988; Cheng & Wang 1990; Cheng et al. 1993). Their work focuses on the EQL (Browne, Cheng, & Mok 1988) and MRL (Wang 1990) rule-based languages, which are developed for real-time rule-based applications.

Our work in this paper is related to the second solution. In particular, we shall investigate the timing properties of programs written in the OPS5 language, which is not designed for real-time purposes although it has been widely adopted in practice. Our experience has shown that most rule-based programs are not designed for all possible data domains. Because rule-based programs are data-driven, certain input data are required to direct the control flows in the programs. Many control techniques are implemented in this manner and often require the absence of, or a specific ordering of, working memory elements to generate the initial working memory (WM). Hence, if these WMEs are not in the expected data domain, abnormal program behavior will occur, usually leading to a cycle in the program flow. While we predict the timing bound, termination should be detected as well. Here, we focus on the following points.
Formalize a graphical representation of rule-based programs.
Detect the termination conditions of OPS5 programs. In (Ullman 1988), similar work focuses on the recursive relation in backward chaining programs. Here, rule-based programs which employ forward chaining are discussed.
If an OPS5 program is not detected to terminate for all initial program states, extract the "culprit" conditions which cause non-termination to assist programmers in correcting the program.
Modify the program to ensure program termination.

The rest of the paper is organized as follows. In Section 2 we define a graph to represent OPS5 programs. Section 3 introduces a novel method of termination detection. Section 4 describes a technique to find the "culprit" conditions. An additional refinement phase is discussed in Section 5. Section 6 describes how the tool is constructed and provides a brief analysis of its computational complexity. Section 7 is the conclusion.

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Static Analysis of Control Paths

Several graphical representations of rule-based programs have been developed for analysis, testing, and debugging purposes. An intuitive representation is a physical rule flow graph. In such a graph, nodes represent rules and an edge from node a to node b implies rule b is executed immediately after rule a is executed. Unlike programs written in a procedural language, the control flows of rule-based programs are embedded in the data and cannot be easily derived. Thus one cannot in general find physical paths among rules without running the program for every possible initial program state. Furthermore, since the developer and the tester of a rule-based program usually think in terms of logical paths, a physical rule flow graph is not the most appropriate abstraction. This leads to the definition of a graph called the Enable Rule (ER) graph, which is adapted from (Cheng & Wang 1990) and (Kiper 1992). The control information among rules in OPS5 is represented by the ER graph. To define the ER graph, we need to first define the state space graph.

Definition 1 The state space graph of an OPS5 program is a labeled directed graph G = (V, E).
V is a set of nodes each of which represents a set of Working Memory Elements (WMEs). We say that a rule is enabled at node i iff its enabling condition is satisfied by the WMEs at node i. E is a set of edges each of which denotes the firing of a rule, such that an edge (i, j) connects node i to node j iff there is a rule R which is enabled at node i, and firing R will modify the Working Memory (WM) to have the same WMEs as node j.

Definition 2 Rule a is said to potentially enable rule b iff there exists at least one reachable state in the state space graph of the program where (1) the enabling condition of rule b is false, and (2) firing rule a causes the enabling condition of rule b to become true.

Since the state space graph cannot be derived without running the program for all allowable initial states, we use symbolic pattern matching to determine the potentially enable relation between rules. Rule a potentially enables rule b iff the symbolic form of a WME modified by the actions in rule a matches one of the enabling conditions of rule b. Here, the symbolic form represents a set of WMEs and is of the form:
(classname ^attribute1 v1 ^attribute2 v2 . . . ^attributen vn)
where v1, v2 . . . and vn are either variables or constant values and each attribute can be omitted. For example, (class ^a1 3 ^a2 <x>) can be a symbolic form of the following WMEs.
(class ^a1 3 ^a2 4)
(class ^a1 3 ^a2 8 ^a3 4)
(class ^a1 3 ^a2 <y> ^a4 <o>)
Example 1 illustrates the potentially enable relation. Rule a potentially enables rule b because the first action of rule a creates a WME (class-c ^c1 off ^c2 <x>) which symbolically matches the enabling condition (class-c ^c1 <y>) of rule b. Note that the second action of rule a does not match the first enabling condition (class-a ^a1 <x> ^a2 off) of rule b because variable <y> ranges in << open close >>.
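The symbolic match between an action's WME form and a condition element can be sketched as follows. This is our own illustrative Python, not the paper's implementation, and it is deliberately simplified: any variable is treated as compatible with anything, and range tests such as << open close >> are ignored, although the paper's tool does check attribute ranges.

```python
def is_variable(v):
    # OPS5-style variables are written <x>, <y>, ...
    return isinstance(v, str) and v.startswith("<") and v.endswith(">")

def symbolic_match(action_form, condition_form):
    """action_form / condition_form: (classname, {attribute: value}).
    Returns True if a WME created or modified per action_form could
    satisfy condition_form."""
    a_class, a_attrs = action_form
    c_class, c_attrs = condition_form
    if a_class != c_class:
        return False
    # Only attributes mentioned in both forms can conflict; two
    # constants must be equal, while a variable matches anything.
    for attr, c_val in c_attrs.items():
        if attr in a_attrs:
            a_val = a_attrs[attr]
            if (not is_variable(a_val) and not is_variable(c_val)
                    and a_val != c_val):
                return False
    return True
```

Under this sketch, the action form (class ^a1 3 ^a2 <x>) matches the condition (class ^a1 3) but not (class ^a1 4), mirroring the constant/variable cases in the example above.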
Example 1 An example of the potentially enable relation
(p a
  (class-a ^a1 <x> ^a2 3)
  (class-b ^b1 <x> ^b2 { <y> << open close >> })
  -->
  (make class-c ^c1 off ^c2 <x>)
  (modify 1 ^a2 <y>))
(p b
  (class-a ^a1 <x> ^a2 off)
  (class-c ^c1 <y>)
  -->
  (modify 1 ^a2 open))

The symbolic matching method actually detects the enabling relation by checking the attribute ranges. This information can be found by analyzing the semantics of the rules.

Definition 3 The enable-rule (ER) graph of a set of rules is a labeled directed graph G = (V, E). V is a set of nodes such that there is a node for each rule. E is a set of edges such that an edge connects node a to node b iff rule a potentially enables rule b.

Note that an edge from a to b in the ER graph does not mean that rule b will fire immediately after rule a. If rule b is potentially enabled, that only implies rule b may be added to the agenda of the rules to be fired. The previous analysis is useful since it does not require us to know the contents of working memory, which cannot be obtained statically.

Termination Detection

The ER graph provides information about the logical paths of an OPS5 program. We can use this graph to trace the control flows of the program. Since we know the potentially enable relation between rules, we can detect if the firing of each rule in an OPS5 program can terminate. A rule is said to be terminating if the number of that rule's firings is always bounded.

Definition 4 Suppose rule b potentially enables rule a. Then there is an edge from node b to node a in the ER graph. A matched condition of rule a is one of the enabling condition elements of rule a which may be matched by executing an action of rule b. Here, rule b is called the enabling rule of the matched condition.

Definition 5 An unmatched condition is one of the enabling condition elements of a rule which cannot be matched by firing any rule, including this rule.
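Given a potentially-enables test, Definition 3 translates directly into an adjacency-list construction. A minimal sketch in Python (the function names are ours, assumed for illustration, not the tool's API):

```python
def build_er_graph(rules, potentially_enables):
    """rules: iterable of rule names.
    potentially_enables(a, b): True iff firing rule a may make
    rule b's enabling condition become true.
    Returns the ER graph as an adjacency list: rule -> enabled rules."""
    rules = list(rules)
    return {a: [b for b in rules if potentially_enables(a, b)]
            for a in rules}
```

With the rules of Example 1, potentially_enables("a", "b") holds because rule a's make action matches rule b's (class-c ^c1 <y>) condition, so the resulting graph contains the single edge a -> b.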
Example 2 Matched and unmatched conditions
(p b
  (c1 ^a1 5)
  (c2 ^a2 <x> ^a3 2)
  -->
  (modify 2 ^a2 3))
(p a
  (c2 ^a2 <x>)
  (c3 ^a4 <x> ^a5 <y>)
  -->
  (modify 1 ^a2 <y>))
In example 2, suppose the firing of any other rule cannot match the second condition element of rule a. In the ER graph, rule b will potentially enable rule a. The first condition element (c2 ^a2 <x>) of rule a is a matched condition because it may be matched by firing rule b. The second condition element (c3 ^a4 <x> ^a5 <y>) of rule a is an unmatched condition because it cannot be matched by firing other rules.

Next, we derive a theorem to detect the termination of a program. One way to predict the termination condition is to make sure that every state in the state space graph cannot be reached twice or more. However, since it is computationally expensive to expand the whole state space graph, we use the ER graph to detect this property.

Theorem 1 A rule r will terminate if one of the following conditions holds:
C1. The actions of rule r modify or remove the unmatched conditions of rule r.
C2. The actions of rule r modify or remove the matched conditions of rule r, and all of the enabling rules of the matched conditions can terminate in bounded time.
C3. Every rule which enables rule r can terminate in bounded time.

Proof:
C1. Since the firing of any rule cannot match the unmatched conditions, the only WMEs which can match the unmatched conditions are the initial WMEs. Moreover, since the actions of rule r change the contents of these WMEs, the WMEs cannot match the unmatched conditions again after rule r is fired; otherwise, an unmatched condition would be matched by firing rule r, contradicting the definition of unmatched conditions. Each initial WME matching the unmatched condition can cause rule r to fire at most once, and we have a finite number of initial WMEs. Thus rule r can terminate in bounded time.
C2.
Since the enabling rules of the matched conditions can terminate in bounded time, by removing these rules, the matched conditions can be treated as unmatched conditions. According to condition C1, rule r can terminate in bounded time.
C3. All rules which enable rule r can terminate in bounded time. After these rules terminate, no other rule can trigger rule r to fire. Thus rule r can terminate as well.

Consider the following rule:
(p a
  (c1 ^a1 1 ^a2 <x>)
  (c2 ^a1 4 ^a2 <x>)
  -->
  (modify 2 ^a1 3))
Suppose the second condition element (c2 ^a1 4 ^a2 <x>) cannot be matched by firing any rule, including this rule itself. Then this condition element is an unmatched condition. Suppose there are three WMEs in the initial working memory matching this condition element. Then this condition element can be matched by at most three WMEs. The actions of rule a modify these three WMEs when rule a fires. As a result, rule a can fire at most three times.

If there is no cycle in the ER graph, or every cycle can be broken (i.e., the cycle can be exited), then the firings of every rule in the OPS5 program are finite, and thus termination is detected. However, if termination cannot be detected, we shall inspect the cycles in the ER graph.

Cycles in the ER Graph

Enabling Conditions of a Cycle

Suppose rules p1, p2 . . . , pn form a cycle in the ER graph. W is a set of WMEs, and W causes rules p1, p2 . . . , pn to fire in that order. If firing p1, p2 . . . , pn in that order forms the WMEs W again, then W is the enabling condition of the cycle. We use symbolic tracing to find W if the data of each attribute are literal. Example 3 illustrates the idea. Rules p1 and p2 form a cycle in the ER graph. To distinguish different variables in different rules, we assign different names to variables. Thus, the program is rewritten as in example 4.
Example 3 Two rules with an embedded cycle
(p p1
  (class1 ^a11 { <x> <> 1 })
  (class2 ^a21 <y>)
  -->
  (modify 1 ^a11 <y>))
(p p2
  (class1 ^a11 <x>)
  (class2 ^a21 { <x> << 2 3 >> } ^a22 <y>)
  -->
  (modify 1 ^a11 <y>))

Example 4 Example 3 with modified variables
(p p1
  (class1 ^a11 { <x-1> <> 1 })
  (class2 ^a21 <y-1>)
  -->
  (modify 1 ^a11 <y-1>))
(p p2
  (class1 ^a11 <x-2>)
  (class2 ^a21 { <x-2> << 2 3 >> } ^a22 <y-2>)
  -->
  (modify 1 ^a11 <y-2>))

A symbol table is built for each variable, which is bound according to the semantics of the enabling conditions. Here, the symbol table is shown in table 1.

Table 1.
Variable  Boundary
x-1       <> 1
y-1       -
x-2       2, 3
y-2       -

The non-terminating condition W is initially the set of all enabling conditions. Thus W is
(class1 ^a11 <x-1>)
(class2 ^a21 <y-1>)
(class1 ^a11 <x-2>)
(class2 ^a21 <x-2> ^a22 <y-2>)
Each variable is associated with the symbol table. Now we trace the execution by firing p1 first; p1 enables p2 by matching the first condition. Since the first condition of rule p2 can be generated from rule p1, it can be removed from W. Variable x-2 is now replaced by y-1. W is
(class1 ^a11 <x-1>)
(class2 ^a21 <y-1>)
(class2 ^a21 <y-1> ^a22 <y-2>)
Since x-2 is bound with 2 and 3, y-1 is bound with the same items. The symbol table is modified as in table 2.

Table 2.
Variable  Boundary
x-1       <> 1
y-1       2, 3
x-2       2, 3
y-2       -

After executing the action of rule p2, W is now
(class1 ^a11 <y-2>)
(class2 ^a21 <y-1>)
(class2 ^a21 <y-1> ^a22 <y-2>)
To make this WM trigger p1 and p2 in that order again, the WME (class1 ^a11 <y-2>) must match the first condition of p1. Thus variable y-2 is bound with x-1's boundary. The symbol table is shown in table 3.

Table 3.
Variable  Boundary
x-1       <> 1
y-1       2, 3
x-2       2, 3
y-2       <> 1

W is
(class1 ^a11 <y-2>)
(class2 ^a21 <y-1>)
(class2 ^a21 <y-1> ^a22 <y-2>)
where y-2 <> 1 and y-1 = 2, 3. The detailed algorithm for detecting the enabling conditions of cycles is described next.

Algorithm 1 The Detection of Enabling Conditions of Cycles
Premise: The data domain of each attribute is literal.
Purpose: Rules p1, p2 . . . , pn form a cycle in the ER graph. Find a set of WMEs W which fires p1, p2 . . . , pn in that order such that these firings cannot terminate in bounded time.
1. Assign different names to the variables in different rules.
2. Initialize W to be the set of all enabling conditions of p1, p2 . . . , pn.
3. Build a symbol table for variables. Each variable is bound with the semantics of the enabling conditions.
4. Simulate the firing of p1, p2 . . . , pn in that order. Each enabling condition of rule pi is matched from the initial WM unless it can be generated from rule pi-1. If the enabling condition element w of rule pi can be generated by firing pi-1, then remove w from W. Substitute pi-1's variables vi-1 for corresponding variables vi in pi. Modify vi-1's boundary in the symbol table.
5. If p1's enabling condition elements can be generated by pn, substitute pn's variables vn for corresponding variables v1 in p1. Modify vn's boundary in the symbol table.
6. In steps 4 and 5, while substituting pi-1's variables for pi's, check the intersection of the boundaries of pi's and pi-1's variables. If the intersection is empty, then terminate the algorithm.
7. Suppose Wn is the WM after firing p1, p2 . . . , pn. If Wn can match W, then W is an enabling condition of the cycle p1, p2 . . . , pn.

Note that there can be more than one set of enabling conditions W of a cycle. Hence, by applying the algorithm, we may obtain different Ws.

Prevention of Cycles

After detecting the enabling conditions W of a cycle, we can add an extra rule r' with W as the enabling conditions of r'. By doing so, once the working memory has the WMEs matching the enabling conditions of a cycle, the control flow can be switched from the cycle to r'. In example 3, r' is
(p loop-rule1
  (class1 ^a11 { <y-2> <> 1 })
  (class2 ^a21 { <y-1> << 2 3 >> })
  (class2 ^a21 <y-1> ^a22 <y-2>)
  -->
  action . . . )
The action of r' is determined by the application. The simplest way is to halt in order to escape from the cycle.

To ensure the program flow switches out of the cycles, the extra rules r' should have higher priorities than the regular ones. To achieve this goal, we use the MEA control strategy and modify the enabling conditions of each regular rule. At the beginning of the program, two WMEs are added to the WM and the MEA strategy is enforced.
(startup
  . . .
  (strategy mea)
  (make control ^rule regular)
  (make control ^rule extra))
The condition (control ^rule regular) is added to each regular rule as the first enabling condition element. (control ^rule extra) is added to each extra rule as the first enabling condition element too. Since the MEA strategy is enforced, the order of instantiations is based on the recency of the first time tag. The recency of the condition (control ^rule regular) is lower than that of the condition (control ^rule extra). Thus, the instantiations of the extra rules are chosen for execution earlier than those of the regular rules. Example 5 is the modified result of example 3.

Example 5 The modified result of example 3
(startup
  (strategy mea)
  (make control ^rule regular)
  (make control ^rule extra))
(p p1
  (control ^rule regular)
  (class1 ^a11 { <x> <> 1 })
  (class2 ^a21 <y>)
  -->
  (modify 2 ^a11 <y>))
(p p2
  (control ^rule regular)
  (class1 ^a11 <x>)
  (class2 ^a21 { <x> << 2 3 >> } ^a22 <y>)
  -->
  (modify 2 ^a11 <y>))
(p loop-rule1
  (control ^rule extra)
  (class1 ^a11 { <y-2> <> 1 })
  (class2 ^a21 { <y-1> << 2 3 >> })
  (class2 ^a21 <y-1> ^a22 <y-2>)
  -->
  (halt))

Usually, applications do not expect cycles embedded in the control paths. Thus, once the entrance of a cycle is detected, the program can be abandoned. Hence, after all cycles in the ER graph are found and extra rules are added, we can guarantee that the program will terminate. However, we can also have exception handling in the action of the extra rules.
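The cycle-prevention scheme above needs every cycle of the ER graph as input to Algorithm 1. Enumerating the simple cycles up to a length bound can be done with a depth-first search; the following Python sketch is our own illustration, not the tool's implementation (which additionally prunes the search using stored impossible paths, as discussed in the implementation section). Each cycle is reported once, rooted at its smallest node:

```python
def bounded_cycles(er_graph, max_len):
    """er_graph: adjacency list {rule: [enabled rules]}.
    Returns all simple cycles of length <= max_len, each listed
    once starting from its smallest node."""
    cycles = []

    def dfs(start, node, path):
        for nxt in er_graph.get(node, []):
            if nxt == start:
                cycles.append(path[:])            # closed a cycle
            elif nxt not in path and nxt > start and len(path) < max_len:
                dfs(start, nxt, path + [nxt])     # extend the path

    for start in sorted(er_graph):
        dfs(start, start, [start])
    return cycles
```

For the ER graph of Example 3 extended with a third rule p3 that mutually enables p2, the search finds the two 2-cycles [p1, p2] and [p2, p3].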
One way to handle the exception is to remove the WMEs which match the enabling condition of a cycle. In example 5, the action of the extra rule can be (remove 2 3 4). Since the WMEs which match the enabling condition of a cycle are removed, the instantiations in the cycle are also removed. Then other instantiations in the agenda can be triggered to fire.

Program Refinement

The ER graph of a typical OPS5 program is complex and usually contains many cycles. Furthermore, even for a single cycle, there may exist more than one enabling condition to trigger the cycle. This leads to a large number of extra rules in the modified programs and thus reduces their runtime performance. To tackle this problem, redundant conditions and rules must be removed after the modification.

Redundant Conditions

In algorithm 1, after symbolic tracing, some variables will be substituted and the boundaries may be changed too. This may cause subset relationships among the enabling condition elements of a cycle. In an extra rule, if condition element Ci is a subset of condition element Cj, then Cj can be omitted to simplify the enabling condition. In example 5, the condition (class2 ^a21 <y-1> ^a22 <y-2>) is a subset of (class2 ^a21 { <y-1> << 2 3 >> }). Hence, (class2 ^a21 { <y-1> << 2 3 >> }) can be omitted.
(p loop-rule1
  (control ^rule extra)
  (class1 ^a11 { <y-2> <> 1 })
  ; (class2 ^a21 { <y-1> << 2 3 >> })  ; omitted
  (class2 ^a21 { <y-1> << 2 3 >> } ^a22 <y-2>)
  -->
  (halt))

Redundant Rules

Since each cycle is analyzed independently, the extra rules correspond to cycles with different enabling conditions. If the enabling condition of rule ri is a subset of the enabling condition of rule rj, then rule ri can be removed, since firing ri will definitely fire rj. The cycling information of rule rj contains that of rule ri. Thus, it is sufficient to simply provide the more general information.
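Both refinement steps are subset tests. Abstracting an enabling condition as the set of working-memory states it admits (a hypothetical representation chosen only for illustration; the tool compares condition elements syntactically), the redundant-rule filter can be sketched as:

```python
# Sketch of the redundant-rule filter: an extra rule is dropped when
# another kept rule fires on a superset of the WM states it covers,
# since firing the dropped rule would definitely fire the kept one.
def drop_redundant_rules(rules):
    """rules: dict name -> frozenset of WM states enabling the rule.
    Keeps a rule only if no other kept rule covers all its states."""
    kept = dict(rules)
    for ri, states_i in rules.items():
        covering = [rj for rj, states_j in kept.items()
                    if rj != ri and states_i <= states_j]
        if covering:
            del kept[ri]
    return kept
```

For instance, a rule enabled exactly in states {1, 2} is dropped when another rule is enabled in {1, 2, 3}.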
In many cases, if the set of nodes Pi which forms a cycle Ci is a subset of the set Pj which forms a cycle Cj, then the enabling condition of Cj is a subset of Ci's enabling condition. The situation becomes apparent when the cycle consists of many nodes. Hence, we can expect to remove the extra rules whose enabling conditions are derived from larger cycles.

In the following rules, rule 3 and rule 4 can be removed because their enabling conditions are subsets of the enabling conditions of rule 1 and rule 2, respectively.
(p 1
  (class1 ^a13 { <y-1> <> 1 })
  (class2 ^a22 <y-1>)
  -->
  action . . . )
(p 2
  (class1 ^a13 { <x-1> <> 1 })
  (class2 ^a22 <y-1>)
  (class4 ^a41 2 ^a42 <x-3>)
  -->
  action . . . )
(p 3  ; redundant rule
  (class1 ^a13 { <y-1> << 2 3 >> })
  (class4 ^a41 { <y-4> <> 1 } ^a42 <y-1>)
  (class2 ^a22 <y-1>)
  -->
  action . . . )
(p 4  ; redundant rule
  (class1 ^a13 { <x-1> <> 1 })
  (class2 ^a22 { <y-1> << 2 3 >> })
  (class4 ^a41 <y-4> ^a42 <y-1>)
  (class4 ^a41 2 ^a42 <x-3>)
  -->
  action . . . )

Implementation

The tool has been implemented on a DEC 5000/240 workstation. Two real-world expert systems were examined. The tool adds one extra rule to the OMS expert system (Barry & Lowe 1990) and 4 extra rules to the ISA expert system (Marsh 1988). Before extra rules are added, these two expert systems have 29 and 15 rules, respectively.

For an OPS5 program with n rules, there are potentially O(n!) cycles embedded in the ER graph. However, in a real application, especially in real-time expert systems, it is unlikely that a cycle contains a large number of nodes. If it is detected that no path contains m nodes in the ER graph, there is no need to test cycles with more than m nodes. This reduces both computational complexity and memory space.

To further reduce the computation time, we can store the path information. If there is no path in the order of executing rules p1, p2 . . . , pn, there is no cycle containing this path.
Thus we do not need to examine the cycles with the embedded path. The ER graph actually represents all possible paths between two rules. We can construct a linear list to store all impossible paths with more than two rules. Thus, it is a tradeoff between time and space. In our tool, we store impossible paths with up to nine nodes.

Conclusion

We have presented an approach to detect the termination conditions of OPS5 rule-based programs. A data dependency graph (the ER graph) is used to capture all of the logical paths of a rule-based program. This ER graph is then used to detect whether an OPS5 program can terminate in bounded time. More specifically, our technique detects rules which have a finite number of firings. Once non-termination is detected, we extract every cycle in the ER graph and find the enabling conditions of the cycles. After finding the enabling conditions W of a cycle, a rule r' is added with W as its enabling conditions. By doing so, once the working memory has the WMEs matching the enabling conditions of a cycle, the control flow can be switched out of the cycle to r'. However, to ensure the program flow switches to r', the program is modified such that r' has higher priority than the regular rules. The extra rules are further refined to remove redundant conditions and rules.

By providing programmers the "culprit" conditions, extra rules can be added to correct the program. If the cycle is an abnormal situation, we can abandon the task to guarantee the termination of the program. However, if recovery from the cycle is required, these conditions can be used to guide the programmers in correcting them.

Ongoing work applies the proposed technique to large rule-based systems to test its efficiency and performance. A tight estimation of execution time must also be developed so that we can predict more precisely the timing behavior of OPS5 and OPS5-style rule-based systems.

References

Barry, M. R., and Lowe, C. M. 1990.
Analyzing spacecraft configurations through specialization and default reasoning. In Proc. of the Goddard Conf. on Space Applications of Artificial Intelligence, 165-179. NASA.
Browne, J. C.; Cheng, A. M. K.; and Mok, A. K. 1988. Computer-aided design of real-time rule-based decision system. Technical report, Department of Computer Science, University of Texas at Austin. Also to appear in IEEE Trans. on Software Eng.
Cheng, A. M. K., and Wang, C.-K. 1990. Fast static analysis of real-time rule-based systems to verify their fixed point convergence. In Proc. 5th Annual IEEE Conf. on Computer Assurance.
Cheng, A. M. K.; Browne, J. C.; Mok, A. K.; and Wang, R.-H. 1993. Analysis of real-time rule-based systems with behavioral constraint assertions specified in Estella. IEEE Trans. on Software Eng. 19(9):863-885.
Cheng, A. M. K. 1993. Parallel execution of real-time rule-based systems. In Proc. IEEE Intl. Parallel Processing Symposium.
Ishida, T. 1991. Parallel rule firing in production systems. IEEE Trans. on Knowledge and Data Eng. 3(1).
Kiper, J. D. 1992. Structural testing of rule-based expert systems. ACM Trans. on Software Eng. and Methodology 1(2).
Kuo, S., and Moldovan, D. 1991. Implementation of multiple rule firing production systems on hypercube. J. Parallel and Distr. Computing 13(4):383-394.
Marsh, C. 1988. The ISA expert system: A prototype system for failure diagnosis on the space station. Mitre report, The MITRE Corp., Houston, TX.
Pasik, A. J. 1992. A source-to-source transformation for increasing rule-based parallelism. IEEE Trans. on Knowledge and Data Eng. 4(4).
Schmolze, J. G. 1991. Guaranteeing serializable results in synchronous parallel production systems. J. Parallel and Distr. Computing 13(4).
Ullman, J. D. 1988. Efficient tests for top-down termination of logical rules. J. of the ACM 35(2).
Wang, C.-K. 1990. MRL: The language. Tech. report, University of Texas at Austin, Real-Time Lab, Department of Computer Sciences.
Zupan, B., and Cheng, A. M. K. 1994. Optimization of rule-based expert systems via state transition system construction. In Proc. IEEE Conf. on Artificial Intelligence for Applications, 320-326.

198 Automated Reasoning
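The cycle extraction and enabling-condition collection described in the conclusion above can be sketched as follows. This is an illustrative reconstruction, not the paper's actual tool: the adjacency-list graph encoding, rule names, and the way per-rule condition sets are unioned into a cycle's enabling conditions W are all assumptions.

```python
def find_cycles(graph):
    """Enumerate simple cycles in a rule-dependency (ER-style) graph via
    DFS from every node; suitable only for small graphs."""
    cycles = []

    def dfs(start, node, path):
        for nxt in graph.get(node, ()):
            if nxt == start:
                cycles.append(path[:])        # closed a cycle back to start
            elif nxt not in path:
                dfs(start, nxt, path + [nxt])

    for start in graph:
        dfs(start, start, [start])
    # Keep one representative per cycle (rotations revisit the same rules).
    seen, unique = set(), []
    for c in cycles:
        key = frozenset(c)
        if key not in seen:
            seen.add(key)
            unique.append(c)
    return unique

def enabling_conditions(cycle, conditions):
    """Union the enabling conditions of every rule on the cycle to get W."""
    w = set()
    for rule in cycle:
        w |= conditions.get(rule, set())
    return w
```

Given the cycle and its conditions W, a breakout rule r' would then be generated with W as its left-hand side and a priority above the regular rules.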
A Statistical Method for Handling Unknown Words
Alexander Franz
Computational Linguistics Program and Center for Machine Translation
Carnegie Mellon University
5000 Forbes Avenue, Pittsburgh, PA 15213
amf@cs.cmu.edu

Robust Natural Language Processing systems must be able to handle words that are not in their lexicon. We created a classifier that was trained on tagged text to find the most likely parts of speech for unknown words. The classifier uses a contingency table to count the observed features, and a loglinear model to smooth the cell counts. After smoothing, the contingency table is used to obtain the conditional probability distribution for classification. A number of features, determined by exploration (Tukey 1977), are used. For example, is the word capitalized? Does the word carry one of a number of known suffixes? We maximize the conditional probability of the proposed classification given the features to achieve minimum error rate classification (Duda & Hart 1973). The baseline results are provided by using only the prior probabilities P(c) (column Prior). Weischedel et al. (1993) describe a probabilistic model with four features that are treated as independent, which we reimplemented (column 4 Indep). For comparison, we created a statistical classifier with the same four features (column 4 Class). Our best model was a classifier with nine features (column 9 Class).

[Table: results for the Prior, 4 Indep, 4 Class, and 9 Class models on the measures Overall Accuracy, Overall Residual Ambiguity, 2-best Accuracy, 2-best Residual Ambiguity, 0.4-beam Accuracy, 0.4-beam Residual Ambiguity, and 0.4-beam Size; extracted cell values, whose alignment could not be recovered: 28%, 7.6, 53%, 2.0, 61%, 1.7, 77%, 1.5, 66%, 1.2, 1.2, 69%, 2.8, 87%, 1.6, 81%, 1.4, 1.6, 73%, 3.4, 87%, 1.8, 86%, 1.6, 1.8.]

n-best Accuracy: Percentage of cases in which the correct POS was among the n most likely POSs.
F-beam Accuracy: All POSs with probability within beam factor F of the most probable POS.
Residual Ambiguity: Mean perplexity for the POS tags in the answer set.
F-beam Size: Mean number of tags in an answer set derived using beam factor F.

The graph below shows the accuracy of the simple probabilistic model versus the statistical classifier using one to nine features. The accuracy of the classifier is always higher and increases as more features are added, but does not decrease with nuisance features. The simple probabilistic model, on the other hand, peaks at four features, and then degrades.

[Figure: accuracy of the simple probabilistic model and the statistical classifier plotted against the number of features, from 1 to 9.]

In future work, we will apply this method to other ambiguity resolution problems that require a combination of a number of categorial disambiguating features, such as POS tagging and PP attachment.

Acknowledgments: I would like to thank Jaime Carbonell, Ted Gibson, Michael Mauldin, Teddy Seidenfeld, and Akira Ushioda.

References
Agresti, A. 1990. Categorical Data Analysis. New York: John Wiley & Sons.
Duda, R. O., and Hart, P. E. 1973. Pattern Classification and Scene Analysis. New York: John Wiley & Sons.
Franz, A. 1994. Ambiguity resolution via statistical classification: Classifying unknown words by part of speech. Technical Report CMU-CMT-94-144, Center for Machine Translation, Carnegie Mellon University.
Tukey, J. 1977. Exploratory Data Analysis. Reading, MA: Addison-Wesley.
Weischedel, R.; Meteer, M.; Schwartz, R.; Ramshaw, L.; and Palmucci, J. 1993. Coping with ambiguity and unknown words through probabilistic models. Computational Linguistics 19(2):359-382.

Student Abstracts 1447
From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.
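The minimum-error-rate decision rule used in the abstract above reduces to an argmax over conditional probabilities read off the (pre-smoothed) contingency table. The sketch below shows just that decision step; the toy classes, feature tuples, and table encoding are invented for illustration, and the loglinear smoothing itself is assumed to have already been applied to the counts.

```python
def classify(table, features):
    """Return the part of speech c maximizing P(c | features).

    table maps (class, feature_tuple) -> smoothed cell count; since the
    denominator sum_c count(c, features) is shared, the argmax over
    P(c | features) is the argmax over the raw smoothed counts."""
    counts = {c: cnt for (c, f), cnt in table.items() if f == features}
    total = sum(counts.values())
    if total == 0:
        return None          # feature combination never observed
    return max(counts, key=counts.get)
```

An n-best variant would simply sort `counts` and keep the top n classes instead of a single argmax.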
Testing a KBS using a conceptual model
Corinne Haouche
DIAM-SIM & LAMSADE, Paris IX Dauphine
91, Bd de l'Hôpital, 75634 Paris Cedex 13, France
Tel: 45 83 67 28; Fax: 45 86 56 85; haouche@biomath.jussieu.fr

Abstract
We propose a KBS testing procedure that uses a KADS conceptual model (CM). The set of Valid Inference Paths is derived from the inference structure, and a "high level" trace, representing the Current Inference Path, is built using the links established between the CM and the KBS. The comparison of this trace to the VIP can lead to modifying either the code or the CM.

Introduction
The lack of specifications while developing a knowledge-based system (KBS) makes KBS validation a hard task. We investigate the use of an inference structure, which is part of a KADS (Knowledge Acquisition and Design Support (Wielinga, Schreiber, & Breuker 1992)) conceptual model (CM), as a set of specifications in order to conduct parts of the validation process. An inference structure describes the existing links between roles through inferences. A role is a class of concepts that have the same behavior in a given problem. An inference is a reasoning step. We focus on testing the KBS behavior using this structure. The white-box approach to testing a KBS uses a structural description of the system under test (e.g., Preece et al. 1993) and studies whether the test cases "cover" all the parts of a KB. Our approach is close to this one in the sense that we use a description of the KBS, but it differs in the nature of this description and the way we use it. In fact, we use an implemented representation of the inference structure of a KADS CM of a KBS to validate this KBS. We assume that this CM is valid but that it can still evolve.
The set of Valid Inference Paths is derived from the inference structure, and a "high level" trace, representing the Current Inference Path, is built using the links established between the CM and the KBS. The comparison of this trace to the VIP can lead to modifying either the code or the CM.

Testing with an inference structure
We describe hereafter our procedure for using the inference structure during the testing phase. All the inference paths between initial roles, i.e., roles that are not outputs of any inference, and final roles, i.e., roles that either are not inputs to any inference or are specified as final roles, are derived automatically. These paths are correct from a syntactic point of view: each time an inference follows another, it is added to the path that is being built. However, these paths have to be checked for semantic correctness. This step is done in cooperation with the domain expert and provides the VIP.
- All the Valid Inference Paths (VIP) are derived from the inference structure.
- When the KBS is used on a set of data, the links between the code and the inference structure are used to build the Current Inference Path (CIP).
- If CIP ∈ VIP, then the process is applied to another data case; otherwise, either the KBS or the CM has to be modified by the expert and the knowledge engineer.

Discussion and perspectives
This procedure is currently being tested on a real-world KBS for which we developed a CM (Haouche 1993). Testing a system using its CM becomes easier because, on the one hand, we have access to a trace which is easily understood, and on the other hand, the errors made are more easily localized thanks to the explicit links established between the CM and the KBS. Furthermore, we think that this CM is valuable for addressing the classical "coverage" problems and for providing criteria to stop the testing process.

References
Haouche, C. 1993. Using a Conceptual Model to Validate KBSs.
In Proceedings of the European Workshop on Validation and Verification of KBSs.
Preece, A.; Chander, P.; Grossner, C.; and Radhakrishnan, T. 1993. Modeling rule base structure for expert system quality assurance. In IJCAI-93 Workshop on Validation of KBSs.
Wielinga, B.; Schreiber, A.; and Breuker, J. 1992. KADS: a modeling approach to knowledge engineering. Knowledge Acquisition 4(1):5-53.
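The automatic derivation of syntactically correct inference paths described in the procedure above can be sketched as path enumeration over the inference structure. The graph encoding and role names below are assumptions; the definitions of initial and final roles follow the text (roles that are not outputs of any inference, and roles that are not inputs to any inference).

```python
def inference_paths(edges):
    """Derive all syntactic inference paths from an inference structure.

    edges is a list of (role, role) pairs, one per inference step.
    Returns the set of paths from initial roles (never an output) to
    final roles (never an input)."""
    nodes = {n for e in edges for n in e}
    initial = nodes - {b for _, b in edges}     # not an output of any inference
    final = nodes - {a for a, _ in edges}       # not an input to any inference
    succ = {}
    for a, b in edges:
        succ.setdefault(a, []).append(b)

    paths = []

    def extend(path):
        last = path[-1]
        if last in final:
            paths.append(tuple(path))
            return
        for nxt in succ.get(last, []):
            if nxt not in path:                 # guard against cycles
                extend(path + [nxt])

    for r in initial:
        extend([r])
    return set(paths)
```

After the expert prunes semantically invalid paths to obtain the VIP, checking a Current Inference Path is then just a set-membership test: `tuple(cip) in vip`.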
The Epistemology of Physical System Modeling
Kyungsook Han and Andrew Gelsey
Department of Computer Science, Rutgers University
New Brunswick, NJ 08903
{kshan, gelsey}@cs.rutgers.edu

Modeling and simulation have typically been pursued in isolation. When a model of a complex system is reported in the literature, there is considerable emphasis on the end result, the model. On the other hand, many works on simulation assume the existence of models, and focus on developing representations and reasoning about the models in those representations. However, not every physical system has models ready to use for problem-solving tasks, and constructing adequate models is not trivial. Choosing a simulation method is also dependent on the kinds of models available, the creation of which in turn depends on the knowledge available and its representation. A model of a physical system is normally created by the person studying the system, with considerable time and effort spent. But a hand-crafted model is often error-prone and difficult to modify to solve similar problems about other physical systems.

Our work is motivated by three goals: (1) examining the process of model-building and simulation, as well as the types of knowledge and their representation required to perform the process or to evaluate the process and its results; (2) automating the process of model-building and simulation to reason about moving objects; and (3) making the modeling process as general as possible so that common knowledge can be shared and reused instead of being duplicated.

Consider, for example, a spring with one end attached to a fixed point and the other end attached to a block. If the block is pulled from its equilibrium position and released, it shows oscillatory motion on a straight line. This harmonic oscillator is a common textbook example which is frequently used in qualitative physics research.
It is well known that the oscillator has one degree of freedom, i.e., the displacement of the block from its equilibrium position. However, if the block is pulled and rotated from its equilibrium position before being released, predicting its behavior is not as simple as before. Is the motion still going to be oscillatory? More interesting questions include: (1) What if a spring is attached to a corner of a block instead of the center of a face? (2) What if a block attached to a spring is put in an arbitrary position and orientation before being released? (3) What if two blocks are connected by a spring? (4) What if multiple blocks connected by multiple springs are put in arbitrary positions and orientations?

Different forms of these problems require spatial reasoning to formulate equations of motion, in particular the ability to reason explicitly about vector quantities and moving frames of reference. Many qualitative physics approaches which can solve the linear harmonic oscillator problem cannot handle the more complex problems we describe above because they lack this spatial reasoning ability.

We have developed an automated modeling and simulation system called ORACLE. Knowledge is represented with general model fragments in a purely declarative, neutral, algorithm-independent form; most of the knowledge is just the same fundamental equations that appear in any standard text on the subject, with their implied semantics of vectors and frames of reference. Starting with this basic, simple knowledge, ORACLE generates a powerful model and simulator which can be used to predict the motion of a physical system with multiple moving objects in arbitrary configurations. Evidence of the generality of the ORACLE approach across different types of physical systems was demonstrated by the experimental results of testing it on spring-block systems in a variety of configurations, sailboats in fluids, and composite objects of rigid bodies.
ORACLE can also model many other types of physical systems with no or minor changes, including multiple rigid bodies connected by springs, propeller-driven airplanes, and spinning balls. This extensibility to a broad class of physical systems is possible for several reasons. First, knowledge is represented in a general form and instantiated later for particular situations, so that common knowledge can be shared and reused. Second, instead of using a special-purpose method intended to handle only a certain class of physical systems, a general method is used to construct and simulate models: model fragments relevant to the physical system being modeled are identified and composed to formulate a model, and the model is applied to solve a problem.
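For contrast with ORACLE's compositional approach, the one-degree-of-freedom spring-block system used as the running example above can be simulated by hard-coding its single model fragment, Hooke's law m·x'' = -k·x. This sketch is not ORACLE; the integrator choice (semi-implicit Euler) and parameter names are assumptions.

```python
def simulate(m, k, x0, v0, dt, steps):
    """Integrate the linear spring-block oscillator m*x'' = -k*x.

    Semi-implicit Euler: update velocity from the spring force first,
    then position from the new velocity, which keeps the oscillation's
    energy approximately conserved over long runs."""
    x, v = x0, v0
    for _ in range(steps):
        v += (-k / m) * x * dt   # acceleration from Hooke's law
        x += v * dt
    return x, v
```

A system like ORACLE instead derives the equations of motion automatically, so the same machinery covers blocks rotated out of equilibrium, multiple coupled blocks, and other configurations where this hand-written one-dimensional model no longer applies.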
Planning for Component-based Configurations*
Gail Haddock
University of Texas - Arlington
Department of Computer Science and Engineering
416 Yates, Arlington, Texas 76019
haddock@csr.uta.edu

Abstract
The Scenario-based Engineering Process (SEP) is a novel approach to developing complex systems (Haddock & Harbison 1994). SEP builds new application systems through a selection process that groups primitive components into application-specific components. The selection of primitive components and the construction of interfaces among components in an application system is currently a tedious manual undertaking. Automating this process will require a configuration system that can support the complex interactions of the components and the dynamic requirements of users, and that is capable of providing multiple viewpoints and managing extensive domains.

The University of Michigan Procedural Reasoning System, UM-PRS (Lee et al. 1993), is a reactive reasoning and planning system based on PRS (Georgeff & Lansky 1990). UM-PRS is currently being used in the autonomous vehicle domain. Its ability to continually consider the real-time dynamic environment and access plans accordingly fits well into military applications, where plans have already been generated in the form of standing operating procedures and reactions to the quickly changing environment are paramount. Our SEP domain does not require the hard real-time speed of a reactionary system. However, much of the UM-PRS architecture maps readily to the configuration problem in the SEP domain. The Scenario-based Engineering Procedural Reasoning System, SEPRS, will use the architecture of UM-PRS to implement a configuration system for SEP. Primitive components will take the place of plans and will be selected according to the application requirements and the application architecture in progress.
The interpreter will use the application requirements as goals to satisfy by accessing the primitive components. Components previously selected for an application architecture will be in the in-process area. They are accessed by the interpreter to determine which goals are not yet satisfied. The interpreter will activate relevant primitive components that are maintained by the intention structure. The intention structure will release the chosen primitive component to the component integrator. The component integrator will employ Adaptive Semantic Language techniques (Harmon 1994) to build the interfaces and messages necessary for adding the primitive component to the application architecture. The grouping of primitive components into components remains a manual task, as this grouping can be done from a variety of viewpoints. For example, some groupings may be done solely for marketing purposes. The environment area of UM-PRS then becomes our system engineer. The system engineer's modifications are added back to SEPRS through a component monitor, which sends the component determinations to the in-progress area, thus completing the cycle.

*This research has been partially supported by the National Science Foundation, the National Center for Manufacturing Sciences, the Advanced Research Projects Agency, and the State of Texas.

Since requirements are continually accessed by the interpreter, user modifications can be interjected at any point in the architecture creation cycle. These modifications may immediately cause primitive components to be deselected and their interfaces disconnected, which may then require an extensive reconfiguration of the architecture. SEPRS also supports the expansion of primitive components. As new technologies are invented that result in new components, those components can be added to the system.
We are building a configuration system, SEPRS, for component-based architecture methodologies by adapting the UM-PRS reactive planning system. We expect it to fit well in our system engineering environment, which includes scenario modeling, object-oriented analysis and design, and simulation systems.

References
Georgeff, M., and Lansky, A. 1990. Reactive Reasoning and Planning. In Allen, J.; Hendler, J.; and Tate, A., eds., Readings in Planning. San Mateo, CA: Morgan Kaufmann.
Haddock, G., and Harbison, K. 1994. From Scenarios to Domain Models: Processes and Representations. In Proceedings of Knowledge-based Artificial Intelligence Systems in Aerospace and Industry. Bellingham, WA: International Society for Optical Engineering.
Harmon, C. 1994. An n-Towers Model for the Knowledge Representation of an Adaptive Semantic Language. Master's thesis, Dept. of Computer Science Engineering, University of Texas at Arlington.
Lee, J.; Huber, M.; Durfee, E.; and Kenny, P. 1993. UM-PRS: An Implementation of the Procedural Reasoning System. Technical Report, University of Michigan.
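The interpreter loop described in the abstract above, which treats application requirements as goals and selects primitive components until no goal is unmet, can be sketched as follows. This is an assumption-laden simplification: SEPRS builds on the full UM-PRS architecture, whereas here component names, goal sets, and the greedy selection strategy are all invented for illustration.

```python
def configure(requirements, components):
    """Greedily select components until every requirement (goal) is met.

    components maps name -> set of goals that component satisfies.
    Returns the ordered list of selected components, or None when some
    requirement cannot be satisfied (signalling reconfiguration)."""
    unmet = set(requirements)
    selected = []
    while unmet:
        # Pick the component covering the most currently unmet goals.
        best = max(components, key=lambda c: len(components[c] & unmet))
        gain = components[best] & unmet
        if not gain:
            return None          # no component helps: unsatisfiable goals
        selected.append(best)
        unmet -= gain
    return selected
```

In the system described above, user modifications would change `requirements` mid-cycle, forcing components to be deselected and the loop re-run; that incremental behavior is omitted from this sketch.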
Time-Critical Scheduling in Stochastic Domains
Lloyd Greenwald and Thomas Dean
Department of Computer Science
Brown University, Box 1910, Providence, RI 02912
lgg@cs.brown.edu, tld@cs.brown.edu

In this work we look at extending the work of (Dean et al. 1993) to handle more complicated scheduling problems in which the sources of complexity stem not only from large state spaces but from large action spaces as well. In these problems it is no longer tractable to compute optimal policies for restricted state spaces via policy iteration. We instead borrow from operations research in applying bottleneck-centered scheduling heuristics (Adams et al. 1988). Additionally, our techniques draw from the work of (Drummond and Bresina 1990).

Consider the problem of scheduling planes and gates at a busy airport. A stochastic process describes the arrival of planes at the airport and is affected by uncontrollable events such as weather. Stochastic processes also govern the processing requirements for unloading and loading passengers at arrival and departure gates. Deadlines for these operations are determined by prespecified desired arrival and departure times. Other sources of uncertainty include gate closings. The optimization problem is to assign planes to gates at each time step to minimize some global measure of tardiness. In any given state there is, in general, one action for every possible assignment of planes to gates.

The work of (Dean et al. 1993) introduces a general approach to planning and scheduling in stochastic domains in which a two-phase iterative procedure is employed. The first phase determines a restricted subset of the state space on which to focus (called the envelope) and the second phase constructs a policy for this envelope. Deliberation scheduling is employed to allocate on-line computation time between the anytime algorithms that make up each phase and across iterations of both phases.
This approach directly addresses uncertainty by modeling the environment as a stochastic automaton and constructing policies to account for alternative trajectories reachable from a given start state. Restricting policy construction to a given envelope addresses the large state space issue.

Our work addresses the additional combinatorial explosion of large action spaces by focusing processing on time windows rather than envelopes of specific states, and by selectively exploring the space described by the window rather than exhaustively exploring the space via policy iteration. While planning domains such as robot navigation may adhere to a restricted neighborhood of states over time, states solved for prior time steps in scheduling domains with large action spaces do not remain relevant as the process progresses. Additionally, alternative actions from any given state lead to disjoint state spaces with little chance that the trajectories will merge on a common envelope of specific states. By partitioning the state space along the time dimension we capture the appropriate context in scheduling domains.

We generate policies by selectively exploring the state space described by the time window. For any given state we use dispatch scheduling rules such as earliest deadline first to select an action. We then employ Monte Carlo simulation on a stochastic model of the domain to determine the most probable reachable states. This process is repeated for a fixed amount of time to determine a partial policy. A second phase attempts to improve the expected value of the policy by detecting bottlenecks and constraining associated actions in further iterations of policy generation. By using greedy dispatch rules on unconstrained actions, we avoid exhaustively searching large action spaces. This procedure is augmented with default reflexes for low-probability states not explicitly simulated.
We employ deliberation scheduling to allocate on-line processing time across time windows and phases based on anticipated quality-time tradeoffs.

References
Adams, J.; Balas, E.; and Zawack, D. 1988. The shifting bottleneck procedure for job shop scheduling. Management Science 34(3):391-401.
Dean, Thomas; Kaelbling, Leslie; Kirman, Jak; and Nicholson, Ann. 1993. Planning with deadlines in stochastic domains. In Proceedings AAAI-93, 574-579. AAAI.
Drummond, Mark, and Bresina, John. 1990. Anytime synthetic projection: Maximizing the probability of goal satisfaction. In Proceedings AAAI-90, 138-144. AAAI.
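The combination of a greedy dispatch rule with Monte Carlo simulation described above can be sketched in miniature: order jobs by earliest deadline first, sample stochastic processing times, and average tardiness over trials. This is an illustrative toy, not the authors' system; the job encoding, the uniform processing-time distribution, and the single-machine setting are all assumptions.

```python
import random

def edf_mean_tardiness(jobs, trials, seed=0):
    """Estimate expected total tardiness of an EDF dispatch policy.

    jobs: list of (deadline, (lo, hi)) where processing time is drawn
    uniformly from [lo, hi]. Jobs run one at a time in earliest-deadline-
    first order; tardiness of a job is max(0, completion - deadline)."""
    rng = random.Random(seed)
    order = sorted(jobs)                      # EDF: sort by deadline
    total = 0.0
    for _ in range(trials):
        t = tardy = 0.0
        for deadline, (lo, hi) in order:
            t += rng.uniform(lo, hi)          # sampled processing time
            tardy += max(0.0, t - deadline)
        total += tardy
    return total / trials
```

In the approach above this kind of roll-out is used inside a time window to find the most probable reachable states, with a second phase constraining actions at detected bottlenecks.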
The Crystallographer's Assistant
Vanathi Gopalakrishnan, Daniel Hennessy, Bruce Buchanan, Devika Subramanian*
Intelligent Systems Laboratory, University of Pittsburgh, Pittsburgh, PA 15260, USA
{vanathi, hennessy, buchanan, devika}@cs.pitt.edu

The only routinely used technique available today for obtaining the 3-D structure of a protein or DNA molecule is X-ray diffraction of a crystal of the macromolecule. The rate-limiting step in structure determination is the process of growing a crystal of the macromolecule. This process is not very well understood, and can take from a few weeks to several years. Crystallographers, therefore, are in great need of tools to aid them in the process of designing and performing experiments. There is a great deal of experiential data in this domain, in the form of scientific notebooks with graphical and textual representations of previous experiments.

We are in the process of collecting, analyzing, and applying the knowledge available in this domain in order to design and develop the Crystallographer's Assistant (CA). The CA is an intelligent electronic assistant that will: help crystallographers record and maintain experimental context, offer suggestions as to experimental conditions that are likely to be successful for the current experiment (based on previously recorded successes and failures), and provide rationale for explaining failures (based upon theories that capture the significant relationships that exist in the data).

A set of about twenty-five parameters (e.g., pH, temperature) has been identified that affect the process of macromolecular crystallization [1]. Crystallographers systematically search this parameter space to find the optimal set of conditions under which a well-diffracting crystal of the new macromolecule can be obtained. There exists only a preliminary understanding of the relationships between two or more of these parameters.
In order to convince ourselves that it is indeed possible to find relationships among the various crystallization parameters from existing data, we have applied RL [2], an inductive learning program, to the data available in the Biological Macromolecular Crystallization Database (BMCD). The data in the BMCD is sparse, noisy, and represents only successful instances of crystal growth. In spite of the noisy nature of the data, RL has produced rules (and correlations) which have been considered significant by our domain experts. The limiting factor in the BMCD data is its lack of negative instances.

*Dr. Subramanian is affiliated with the Department of Computer Science, Cornell University, Ithaca, NY 14853. This research is supported in part by funds from the W.M. Keck Center for Advanced Training in Computational Biology at the University of Pittsburgh, Carnegie Mellon University, and the Pittsburgh Supercomputing Center.

The Crystallographer's Assistant is based upon a case-based reasoning approach, and involves, as a first step, creating a database (from both existing experiment notebooks and on-going experiments) of about 1000 examples of crystallography experiments. These examples will provide us with both successful and failed experiments, and will be used both by RL and by the case-based reasoner. Given the significant complexity and weak theory of the relationships between the features of the experiments, a case-based approach is being taken for similarity assessment. Experiential data concerning how the domain experts define pairs of cases to be similar and different will be used to guide the indexing and selection of cases. The result will be an experimenter's assistant which, given the results of the latest set of experiments, will remind the user of previous experiments with similar conditions and make suggestions based upon what was done in cases of both success and failure.
The results from applying RL to the BMCD data have yielded possibly significant new empirical relationships, as evaluated by our expert crystallographers. We are now in the process of applying RL to the newly created database of crystallography experiments. The next step will be to make the database available to researchers at other sites in order to expand the database to hold 100,000 or more cases. This will provide sufficient data to develop a more complete domain theory with the aid of modeling and machine learning techniques.

References
[1] McPherson, A. Current approaches to macromolecular crystallization. European Journal of Biochemistry, 189 (1990), 1-23.
[2] Clearwater, S., and Provost, F. 1990. RL4: A Tool for Knowledge-Based Induction. In Proceedings of the Second International IEEE Conference on Tools for Artificial Intelligence, 24-30. IEEE CS Press.
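The kind of rule induction described above, searching for feature-value tests that correlate with an outcome, can be sketched as follows. This is not the RL program itself: the single-condition rule form, the precision threshold, and the example crystallization features are all assumptions made for illustration.

```python
from collections import Counter

def induce_rules(examples, min_precision=0.8):
    """Keep single-condition rules "feature=value -> label" whose
    precision on the training examples meets the threshold.

    examples: list of (feature_dict, label).
    Returns {(feature, value, label): precision}."""
    match, hit = Counter(), Counter()
    for feats, label in examples:
        for f, v in feats.items():
            match[(f, v)] += 1          # times the test matched
            hit[(f, v, label)] += 1     # times it matched with this label
    rules = {}
    for (f, v, label), n in hit.items():
        precision = n / match[(f, v)]
        if precision >= min_precision:
            rules[(f, v, label)] = precision
    return rules
```

The lack of negative instances noted above is visible even here: with only successful crystallizations in the data, every test trivially predicts success, which is why the planned database of both successful and failed experiments matters.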
Reasoning About What to Plan
Richard Goodwin
School of Computer Science, Carnegie Mellon University
5000 Forbes Ave., Pittsburgh, Pennsylvania 15213-3890
rich@cs.cmu.edu

Agents plan in order to improve their performance. However, planning takes time and consumes resources, and may in fact degrade an agent's performance. Ideally, an agent should only plan when the expected improvement outweighs the expected cost, and no resources should be expended on making this decision. To do this, an agent would have to be omniscient. The problem of how to approximate this ideal, without consuming too many resources in the process, is the meta-level control problem for a resource-bounded rational agent.

There are two central questions that have to be addressed for meta-level control: where to focus planning effort, and when to start executing the current best plan. These questions are interrelated. To start execution, the beginning of the plan must be elaborated to a level where it is operational. Even then, execution should only begin when the expected improvement due to further planning is outweighed by the cost of delaying execution. Once the agent has committed to executing some action, the planner can disregard any plans inconsistent with this action and can concentrate on elaborating and optimizing the rest of the plan.

In my thesis research, I am exploring the use of sensitivity-analysis-based meta-level control for focusing computational effort. The object-level problem of deciding which actions to perform is modeled as a standard decision problem and an approximate sensitivity analysis is performed. To facilitate the sensitivity analysis, actions, both abstract and operational, are augmented with methods for estimating their resource and time requirements. Methods are also needed to estimate the likelihood of events and action outcomes. All estimates include both the expected value and the expected range or variance.
Information about the precision of estimates is critical when deciding whether to commit to a particular plan or to refine estimates through further computation or sensing. When presented with a new task, the planner generates abstract plans for accomplishing the new and existing tasks. A sensitivity analysis identifies which of these plans are potentially optimal and non-dominated. Dominated and never-optimal plans are discarded. The sensitivity analysis also identifies the estimates to which the choice between plans is most sensitive. Estimates that affect all plans more or less equally need not be refined. For instance, the occurrence of an earthquake may adversely affect all plans equally; determining the probability of an earthquake more exactly would not help in selecting between plans. Other factors may have differing effects. For instance, the likelihood of rain would help to choose between a plan to walk and a plan to drive somewhere. The sensitivity of a plan to particular estimates can also suggest ways of making the plan more robust. For instance, carrying an umbrella helps to reduce the walking plan's sensitivity to the likelihood of rain.

When there are a number of plans that are potentially optimal and non-dominated, and when the potential opportunity cost of selecting the wrong plan is significant, the meta-level controller directs the efforts of the planner to refine critical estimates. Estimates of resource use and action times can be improved by elaborating abstract operators into more operational operators or by simulated execution. Other object-level estimates can be refined by adding more sensing to the plan or by additional computation using techniques such as temporal projection (Hanks 1990). Estimating computation time for complex planners is problematic. Further research is needed to determine how best to estimate and characterize expected plan improvement as a function of computation time.
Information from the sensitivity analysis and estimates of the cost of improving the current plan are used to make the tradeoff between the cost of delaying execution and the expected improvement in the plan from doing additional planning. Often systems that make this tradeoff ignore the fact that execution and planning can be overlapped in many situations. The DTA* algorithm is one example (Russell and Wefald 1991). In related work, I show how taking into account overlapping of planning and execution can improve performance (Goodwin 1994). References Richard Goodwin. Reasoning about when to start acting. In K. Hammond, editor, Proceedings of the Second International Conference (AIPS94). Artificial Intelligence Planning Systems, June 1994. Steve Hanks. Practical temporal projection. In Proceedings, Eighth National Conference on Artificial Intelligence. AAAI, July 1990. Stuart Russell and Eric Wefald. Do the Right Thing. MIT Press, 1991. 1450 Student Abstracts From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved. | 1994 | 106 |
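The stopping rule the abstract describes, plan only while the expected improvement outweighs the cost of delaying execution, can be sketched as follows. This is our own illustrative rendering, not Goodwin's algorithm; the function names, the per-step improvement estimates, and the flat delay cost are all invented assumptions.

```python
# Minimal sketch (not Goodwin's actual system) of meta-level control:
# keep planning only while the expected improvement of another planning
# step outweighs the cost of delaying execution by that step.

def should_keep_planning(expected_improvement, delay_cost):
    """Meta-level decision: another planning step must pay for its delay."""
    return expected_improvement > delay_cost

def plan_with_metalevel_control(improvement_estimates, delay_cost_per_step):
    """Walk through per-step improvement estimates until planning stops paying off.

    improvement_estimates: expected utility gain of each further planning step
    (typically diminishing). Returns the number of planning steps taken
    before execution starts.
    """
    steps = 0
    for gain in improvement_estimates:
        if not should_keep_planning(gain, delay_cost_per_step):
            break  # expected gain no longer covers the cost of delaying execution
        steps += 1
    return steps

# Diminishing returns from planning (10, 6, 3, 1 utility units per step)
# against a delay cost of 2 units per step: plan for three steps, then act.
print(plan_with_metalevel_control([10, 6, 3, 1], 2))  # → 3
```

A fuller treatment would also track the variance of each estimate, as the abstract emphasizes, and refine only the estimates the decision is sensitive to.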
1,439 | Learning About Software Errors Via Systematic Experimentation Terrance Goan Oren Etzioni Department of Computer Science and Engineering, University of Washington Seattle, WA 98195 {goan, etzioni}@cs.washington.edu Classical planners assume that their internal model is both correct and complete. The dynamic nature of real-world domains (e.g., multi-user software environments) makes these assumptions untenable. Several new planners (e.g., XII [2]) have been designed to work with incomplete information, and strides have been made in planning with potentially incorrect information. But efficient operation in the presence of incorrect information is highly dependent on a planner's ability to detect errors. Failing to recognize errors can result in unexpected and potentially destructive effects, as well as further corruption of the world model. This abstract describes ED (the Error Detective), which automatically generates error detection functions for a software robot (softbot). In addition to error detection, the functions generated by ED accurately diagnose the cause of the errors. The automatic generation of these functions is important due to the large number of conditions that can affect the success of command execution. In addition, the ability to diagnose the cause of an error can greatly reduce the number of preconditions which need to be checked/resatisfied prior to a successful execution of the operator. In tackling this problem we utilize three key insights: (1) If an operator is completely specified, every error is due to some subset of the preconditions being unsatisfied. (2) Software error messages generally signal only one error. For example, in UNIX, if you execute the diff command on two files x and y, where both files are not readable, the error message would be "diff: x: Permission denied." It provides no information about the status of file y.
More formally: error-msg(-p1 and -p2) = (error-msg(-p1) or error-msg(-p2)), where -p1 and -p2 represent unsatisfied preconditions. (3) Since software errors do not interact, errors can be fixed incrementally (the decomposable fault assumption). This means we need not assume just a single fault has occurred; rather, we assume that if error-msg(-p1 and -p2) = error-msg(-p2) and we re-execute the operator (after achieving p2), we will now get error-msg(-p1), which can be handled in turn. ED "learns" the error diagnosis functions via a decision tree. The attributes which compose the training instances are: error message length (number of tokens) and a binary (present or not present) attribute for every token seen in the collected error messages. The classes are sets of preconditions which may be at fault. We compared the accuracy of the decision tree with two other methods: (1) our current hand-crafted error detection functions, which rely on the presence of colons in error messages and make no attempt to diagnose the cause of the error, and (2) a simple string match method which looks for an exact match (except for command arguments). The rates of correct diagnosis for the three methods were hand-crafted (57%), string match (71.6%), and decision tree (90.4%). In addition, both the string match and decision tree methods resulted in a significant decrease (50.5 and 63.6 respectively) in the number of preconditions which must be checked/resatisfied prior to re-executing an operator (i.e., the number of preconditions which can be rejected as the cause of the given error). ED is complementary to "The Operator Refinement Method" presented in [1]. Where we assume correct operator models while developing error detectors, Carbonell and Gil assume accurate error detection while augmenting operator models. Accurate error diagnosis allows yet another set of potentially fruitful experiments: finding the "optimal" set of operator preconditions.
By "optimal" we mean the set of preconditions which best balance the cost of operator execution with the cost of error recovery. For example, it seems reasonable to execute pwd without verifying that all ancestor directories are readable. Simply execute the operator and handle the errors which will occasionally occur. [1] Carbonell and Gil. Learning by Experimentation: The Operator Refinement Method. Machine Learning: An Artificial Intelligence Approach, vol. III. 1990. [2] Golden et al. XII: Planning for Universal Quantification and Incomplete Information. AAAI 1994. | 1994 | 107 |
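The decomposable-fault loop that insights (2) and (3) of the ED abstract describe can be sketched concretely. This is our own toy rendering, not ED's implementation: the message format, the precondition names, and the trivial message-to-precondition diagnosis are all invented for illustration.

```python
# Toy sketch of the decomposable-fault assumption: execution reports only
# one unsatisfied precondition at a time, so errors are repaired
# incrementally by fixing the reported precondition and re-executing.

def execute(operator_preconds, satisfied):
    """Simulate command execution: report the first unsatisfied precondition,
    mirroring software errors that signal only one problem at a time."""
    for p in operator_preconds:
        if p not in satisfied:
            return "error: %s" % p  # e.g. "error: readable(x)"
    return "ok"

def run_with_repair(operator_preconds, satisfied):
    """Incrementally diagnose and fix errors until the operator succeeds.

    Returns the preconditions achieved along the way, in the order the
    error messages surfaced them."""
    fixed = []
    while True:
        msg = execute(operator_preconds, satisfied)
        if msg == "ok":
            return fixed
        precond = msg.split(": ")[1]  # diagnosis: map message -> precondition
        satisfied.add(precond)        # 'achieve' the precondition, then retry
        fixed.append(precond)

# diff-like example: both files unreadable; errors get fixed one at a time.
print(run_with_repair(["readable(x)", "readable(y)"], set()))
# → ['readable(x)', 'readable(y)']
```

ED's actual contribution is learning the diagnosis step (the message-to-preconditions mapping) with a decision tree over message length and token-presence features, rather than hard-coding it as above.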
1,440 | Low Computation Vision-Based Navigation For a Martian Rover Andrew S. Gavin Massachusetts Institute of Technology Artificial Intelligence Laboratory Department of Electrical Engineering and Computer Science 545 Technology Square NE43-737, Cambridge Massachusetts 02139 (617) 253-8837 agavin@ai.mit.edu Abstract In the design and construction of mobile robots, vision has always been one of the most potentially useful sensory systems. In practice, however, it has also become the most difficult to successfully implement. At the MIT Mobile Robotics (Mobot) Lab we have designed a small, light, cheap, and low-power Mobot Vision System that can be used to guide a mobile robot in a constrained environment. The target environment is the surface of Mars, although we believe the system should be applicable to other conditions as well. It is our belief that the constraints of the Martian environment will allow the implementation of a system that provides vision-based guidance to a small mobile rover. The purpose of this vision system is to process realtime visual input and provide as output information about the relative location of safe and unsafe areas for the robot to go. It might additionally provide some tracking of a small number of interesting features, for example the lander or large rocks (for scientific sampling). The system we have built was designed to be self-contained. It has its own camera and on-board processing unit. It draws a small amount of power and exchanges a very small amount of information with the host robot. The project has two parts: first, the construction of a hardware platform, and second, the implementation of a successful vision algorithm. For the first part of the project, which is complete, we have built a small self-contained vision system. It employs a cheap but fast general-purpose microcontroller (a 68332) connected to a Charge Coupled Device (CCD).
The CCD provides the CPU with a continuous series of medium-resolution gray-scale images (64 by 48 pixels with 256 gray levels at 10-15 frames a second). [1. This research has been graciously funded by JPL and occurred at the MIT AI Lab, which is partially funded by ARPA.] In order to accommodate our goals of low power, light weight, and small size we are bypassing the traditional NTSC video and using a purely digital solution. As the frames are captured, any desired algorithm can then be implemented on the microcontroller to extract the desired information from the images and communicate it to the host robot. Additionally, conventional optics are typically oversized for this application, so we have been experimenting with aspheric lenses, pinhole lenses, and lens sets. As to the second half of the project, it is our hypothesis that a simple vision algorithm does not require huge amounts of computation and that goals such as constructing a complete three-dimensional map of the environment are difficult, wasteful, and possibly unreachable. We believe that the nature of the environment can provide enough constraints to allow us to extract the desired information with a minimum of computation. It is also our belief that biological systems reflect an advanced form of this. They also employ constant factors in the environment to extract what information is relevant to the organism. We believe that it is possible to construct a useful real-world outdoor vision system with a small computational engine. This will be made feasible by an understanding of what information it is desirable to extract from the environment for a given task, and of an analysis of the constraints imposed by the environment. In order to verify this hypothesis and to facilitate vision experiments we have built a small wheeled robot named Gopher, equipped with one of our vision systems.
| 1994 | 108 |
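The kind of low-computation safe/unsafe classification the Gavin abstract argues for can be illustrated with a deliberately crude sketch. This is not the Mobot Lab's algorithm: the per-column contrast test, the assumption that smooth ground looks locally uniform while rocks produce high contrast, and the threshold value are all invented for illustration.

```python
# Illustrative sketch (not the actual Mobot Vision System algorithm):
# given a 64x48 gray-scale frame, mark image columns safe or unsafe with a
# single cheap brightness-spread test per column, on the invented assumption
# that flat ground is locally uniform while rocks show high contrast.

def column_is_safe(frame, col, threshold=20):
    """Cheap per-column test: low brightness spread suggests flat ground."""
    pixels = [row[col] for row in frame]
    return max(pixels) - min(pixels) < threshold

def safe_columns(frame, threshold=20):
    """Return the indices of image columns judged traversable."""
    width = len(frame[0])
    return [c for c in range(width) if column_is_safe(frame, c, threshold)]

# 48 rows x 64 columns of uniform 'ground' (gray level 100), with a bright
# 'rock' (gray level 200) spanning columns 10-19 in the middle of the frame.
frame = [[100] * 64 for _ in range(48)]
for row in frame[20:30]:
    for c in range(10, 20):
        row[c] = 200

print(safe_columns(frame) == [c for c in range(64) if not 10 <= c < 20])  # → True
```

The point of the sketch is the budget, not the test itself: a 64x48 frame at 10-15 Hz leaves roughly a few hundred instructions per pixel on a 68332-class microcontroller, so any viable algorithm must be of about this arithmetic weight.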
1,441 | Introspective Reasoning in a Case-based Planner Susan Fox and David Leake Lindley Hall 215 Indiana University Bloomington, IN 47405 (812) 855-8702 sfox@cs.indiana.edu and leake@cs.indiana.edu Many current AI systems assume that the reasoning mechanisms used to manipulate their knowledge may be fixed ahead of time by the designer. This assumption may break down in complex domains. The focus of this research is developing a model of introspective reasoning and learning to enable a system to improve its own reasoning as well as its domain knowledge. Our model is based on the proposal of (Birnbaum et al. 1991) to use a model of the ideal behavior of a case-based system to judge system performance and to refine its reasoning mechanisms; it also draws on the research of (Ram & Cox 1994) on introspective failure-driven learning. This work examines introspection guided by expectation failures about reasoning performance. We are developing a vocabulary of failures for the case-based system, an introspective reasoner which uses a hierarchical model of system behavior, and a method of reusing CBR for parts of the case-based planner itself. The system we are developing combines a model-based introspective reasoner with a case-based planning system. The planner generates high-level plans for navigating city streets, and is similar in structure to the planner CHEF (Hammond 1989). However, we implement components of the planner using the case-based reasoning mechanisms of the planner as a whole. Our primary interest in this approach is the advantage it offers for developing the model for introspective reasoning. We can reuse expectations that apply to the planner as a whole for its case-based parts. During the planning process, the introspective reasoner compares the planner's reasoning to its assertions about ideal behavior.
When a failure is detected, for instance if the system judges that the retrieved case is not the "best" case in memory, the introspective reasoner considers related assertions to pinpoint the source of the failure and to suggest a solution. In this case our system creates a new index to distinguish the true best case from the bad retrieved case. Determining what information to include in the model and how to structure it are central issues. Birnbaum's model is a set of high-level assertions applicable to many case-based planners (Birnbaum et al. 1991). While such assertions cover a wide range of failures, they are too general to easily specify causes or repairs for failures. We propose as an alternative a hierarchical model including highly abstract assertions as well as assertions specific to this planner. Low-level assertions help to notice failures and pinpoint repairs, while high-level assertions provide connections between assertions for finding the root causes of failures. By using a hierarchy, the general structure of the model will apply to other systems while we retain the ability to detect and repair specific failures of our system. We are developing a vocabulary of failure types to guide our choice of assertions to include in the model. For example, identifying the failure "failing to complete adaptation" leads to assertions about how to gauge the progress of adaptation in this planner. We also include higher-level failure types as are described in (Ram & Cox 1994); some such failures recurred for different components of the planner, leading us to use CBR to implement components themselves. We have constructed a skeletal hierarchical model and have begun testing the case-based planner with and without introspective corrections. Initial experimental results indicate that introspectively re-indexing memory alone improves the planner's efficiency in retrieval and allows it to succeed more often than without introspection.
We are currently in the process of fleshing out the model and expanding the scope of possible repairs. References Birnbaum, L.; Collins, G.; Brand, M.; Freed, M.; Krulwich, B.; and Pryor, L. 1991. A model-based approach to the construction of adaptive case-based planning systems. In Proceedings of the DARPA CBR Workshop, 215-224. Morgan Kaufmann. Hammond, K. 1989. Case-Based Planning: Viewing Planning as a Memory Task. Academic Press. Ram, A., and Cox, M. T. 1994. Introspective reasoning using meta-explanations for multistrategy learning. In Michalski, R., and Tecuci, G., eds., Machine Learning: A Multistrategy Approach, Vol. IV. Morgan Kaufmann. | 1994 | 109 |
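The introspective re-indexing repair the Fox and Leake abstract describes (detect a retrieval failure, then add a new index that distinguishes the true best case from the wrongly retrieved one) can be sketched in miniature. This is our own construction, not their system: the feature-overlap retrieval, the case names, and the features are all invented assumptions.

```python
# Minimal sketch (not Fox & Leake's planner) of introspective re-indexing:
# when retrieval returns the wrong case, index the true best case on a
# query feature the bad retrieval lacks, so future retrieval succeeds.

def retrieve(memory, indices, query):
    """Return the case sharing the most indexed features with the query
    (ties resolved by memory order)."""
    return max(memory, key=lambda case: len(indices[case] & query))

def reindex_on_failure(indices, query, retrieved, true_best, all_features):
    """Introspective repair: index the true best case on features that
    match the query but that the wrongly retrieved case does not have."""
    distinguishing = (all_features[true_best] & query) - all_features[retrieved]
    for f in distinguishing:
        indices[true_best].add(f)

# Two route plans; only a coarse feature is indexed at first, so the
# query cannot be told apart and retrieval picks the wrong plan.
all_features = {"bus-route": {"city", "rainy"}, "walk-route": {"city", "sunny"}}
indices = {"bus-route": {"city"}, "walk-route": {"city"}}
memory = ["walk-route", "bus-route"]

query = {"city", "rainy"}
got = retrieve(memory, indices, query)             # ties pick "walk-route"
reindex_on_failure(indices, query, got, "bus-route", all_features)
print(retrieve(memory, indices, query))            # → bus-route
```

The abstract's hierarchical assertion model would sit above this loop, deciding *that* the retrieval was a failure and *which* repair (here, re-indexing) applies.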
1,442 | Refining the Structure of Terminological Systems: Terminology = Schema + Views* M. Buchheit1 and F. M. Donini2 and W. Nutt1 and A. Schaerf2 1. German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany {buchheit,nutt}@dfki.uni-sb.de 2. Dipartimento di Informatica e Sistemistica, Università di Roma "La Sapienza," Italy {donini,aschaerf}@assi.dis.uniroma1.it Abstract Traditionally, the core of a Terminological Knowledge Representation System (TKRS) consists of a so-called TBox, where concepts are introduced, and an ABox, where facts about individuals are stated in terms of these concepts. This design has a drawback because in most applications the TBox has to meet two functions at a time: on the one hand, similar to a database schema, frame-like structures with typing information are introduced through primitive concepts and primitive roles; on the other hand, views on the objects in the knowledge base are provided through defined concepts. We propose to account for this conceptual separation by partitioning the TBox into two components for primitive and defined concepts, which we call the schema and the view part. We envision the two parts to differ with respect to the language for concepts, the statements allowed, and the semantics. We argue that by this separation we achieve more conceptual clarity about the role of primitive and defined concepts and the semantics of terminological cycles. Moreover, three case studies show the computational benefits to be gained from the refined architecture. Introduction Research on terminological reasoning usually presupposes the following abstract architecture, which reflects quite well the structure of existing systems.
There is a logical representation language that allows for two kinds of statements: in the TBox or terminology, concept descriptions are introduced, and in the ABox or world description, individuals are characterized in terms of concept membership and role relationship. This abstract architecture has been the basis for the design of systems, the development of algorithms, and the investigation of the computational properties of inferences. *This work was partly supported by the Commission of the European Union under ESPRIT BRA 6810 (Compulog 2), by the German Ministry of Research and Technology under grant ITW 92-01 (TACOS), and by the CNR (Italian Research Council) under Progetto Finalizzato Sistemi Informatici e Calcolo Parallelo, LdR "Ibridi." Given this setting, there are three parameters that characterize a terminological system: (i) the language for concept descriptions, (ii) the form of the statements allowed, and (iii) the semantics given to concepts and statements. Research tried to improve systems by modifying these three parameters. But in all existing systems and almost all theoretical studies language and semantics have been kept uniform.1 The results of these studies were unsatisfactory in at least two respects. First, it seems that tractable inferences are only possible for languages with little expressivity. Second, no consensus has been reached about the semantics of terminological cycles, although in applications the need to model cyclic dependencies between classes of objects arises constantly. Based on an ongoing study of applications of terminological systems, we suggest to refine the two-layered architecture consisting of TBox and ABox. Our goal is twofold: on the one hand we want to achieve more conceptual clarity about the role of primitive and defined concepts and the semantics of terminological cycles; on the other hand, we want to improve the tradeoff between expressivity and worst case complexity.
Since our changes are not primarily motivated by mathematical considerations but by the way systems are used, we expect to come up with a more practical system design. In the applications studied we found that the TBox has to meet two functions at a time. One is to declare frame-like structures by introducing primitive concepts and roles together with typing information like isa-relationships between concepts, or range restrictions and number restrictions of roles. E.g., suppose we want to model a company environment. Then we may introduce the concept Employee as a specialization of Person, having exactly one name of type Name and at least one affiliation of type Department. This is similar to class declarations in object-oriented systems. For this purpose, a simple language is sufficient. Cycles occur naturally in modeling tasks, e.g., the boss of an Employee is also an Employee. Such declarations have no definitional import; they just restrict the set of possible interpretations. [1. In (Lenzerini & Schaerf 1991) a combination of a weak language for ABoxes and a strong language for queries has been investigated.] The second function of a TBox is to define new concepts in terms of primitive ones by specifying necessary and sufficient conditions for concept membership. This can be seen as defining abstractions or views on the objects in the knowledge base. Defined concepts are important for querying the knowledge base and as left-hand sides of trigger rules. For this purpose we need more expressive languages. If cycles occur in this part they must have definitional import. As a consequence of our analysis we propose to split the TBox into two components: one for declaring frame structures and one for defining views. By analogy to the structure of databases we call the first component the schema and the second the view part.
We envision the two parts to differ with respect to the language, the form of statements, and the semantics of cycles. The schema consists of a set of primitive concept introductions, formulated in the schema language, and the view part of a set of concept definitions, formulated in the view language. In general, the schema language will be less expressive than the view language. Since the role of statements in the schema is to restrict the interpretations we want to admit, first order semantics, which is also called descriptive semantics in this context (see Nebel 1991), is adequate for cycles occurring in the schema. For cycles in the view part, we propose to choose a semantics that defines concepts uniquely, e.g., least or greatest fixpoint semantics. The purpose of this work is not to present the full-fledged design of a new system but to explore the options that arise from the separation of TBoxes into schema and views. Among the benefits to be gained from this refinement are the following three. First, the new architecture has more parameters for improving systems, since language, form of statements, and semantics can be specified differently for schema and views. So we found a combination of schema and view language with polynomial inference procedures, whereas merging the two languages into one would have led to intractability. Second, we believe that one of the obstacles to a consensus about the semantics of terminological cycles has been precisely the fact that no distinction has been made between primitive and defined concepts. Moreover, intractability results for cycles mostly refer to inferences with defined concepts. We proved that reasoning with cycles is easier when only primitive concepts are considered. Third, the refined architecture allows for more differentiated complexity measures, as shown later in the paper.
In the following section we outline our refined architecture for a TKRS, which comprises three parts: the schema, the view taxonomy, and the world description; these cover the primitive concepts, defined concepts, and assertions of traditional systems. In the third section we show by three case studies that adding a simple schema with cycles to existing systems does not increase the complexity of reasoning. The Refined Architecture We start this section with a short reminder on concept languages. Then we discuss the form of statements and their semantics in the different components of a TKRS. Finally, we specify the reasoning services provided by each component and introduce different complexity measures for analyzing them. Concept Languages In concept languages, complex concepts (ranged over by C, D) and complex roles (ranged over by Q, R) can be built up from simpler ones using concept and role forming constructs (see Tables 1 and 2 for a set of common constructs). The basic syntactic symbols are (i) concept names, which are divided into schema names (ranged over by A) and view names (ranged over by V), (ii) role names (ranged over by P), and (iii) individual names (ranged over by a, b). An interpretation I = (Δ^I, ·^I) consists of the domain Δ^I and the interpretation function ·^I, which maps every concept to a subset of Δ^I, every role to a subset of Δ^I × Δ^I, and every individual to an element of Δ^I such that a^I ≠ b^I for different individuals a, b (Unique Name Assumption). Complex concepts and roles are interpreted according to the semantics given in Tables 1 and 2, respectively. In our architecture, there are two different concept languages in a TKRS: a schema language for expressing schema statements and a view language for formulating views and queries to the system. The view and schema languages in the case studies will be defined by restricting the set of concept and role forming constructs to a subset of those in Tables 1 and 2.
The Three Components Now we describe the three parts of a TKRS: the schema, the view taxonomy and the world description. We first focus our attention on the schema. The schema introduces concept and role names and states elementary type constraints. This can be achieved by inclusion axioms having one of the forms: A ⊑ D, P ⊑ A1 × A2, where A, A1, A2 are schema names, P is a role name, and D is a concept of the schema language. Intuitively, the first axiom states that all instances of A are also instances of D. The second axiom states that the role P has domain A1 and range A2. A schema S consists of a finite set of schema axioms. Inclusion axioms impose only necessary conditions for being an instance of the schema name on the left-hand side. For example, the axiom "Employee ⊑ Person" declares that every employee is a person, but does not give a sufficient condition for being an employee.2 A schema may contain cycles through inclusion axioms (see Nebel 1991 for a formal definition).

Table 1: Syntax and semantics of concept forming constructs.
  top:                         ⊤        Δ^I
  singleton set:               {a}      {a^I}
  intersection:                C ⊓ D    C^I ∩ D^I
  union:                       C ⊔ D    C^I ∪ D^I
  negation:                    ¬C       Δ^I \ C^I
  universal quantification:    ∀R.C     {d1 | ∀d2: (d1,d2) ∈ R^I → d2 ∈ C^I}
  existential quantification:  ∃R.C     {d1 | ∃d2: (d1,d2) ∈ R^I ∧ d2 ∈ C^I}
  existential agreement:       ∃Q ↓ R   {d1 | ∃d2: (d1,d2) ∈ Q^I ∧ (d1,d2) ∈ R^I}
  number restrictions:         (≥ n R)  {d1 | #{d2 | (d1,d2) ∈ R^I} ≥ n}
                               (≤ n R)  {d1 | #{d2 | (d1,d2) ∈ R^I} ≤ n}

Table 2: Syntax and semantics of role forming constructs.
  inverse role:      P^-1     {(d1,d2) | (d2,d1) ∈ P^I}
  role restriction:  (R : C)  {(d1,d2) | (d1,d2) ∈ R^I ∧ d2 ∈ C^I}
  role chain:        Q ∘ R    {(d1,d3) | ∃d2: (d1,d2) ∈ Q^I ∧ (d2,d3) ∈ R^I}
  self:              ε        {(d,d) | d ∈ Δ^I}
So one may state that the bosses of an employee are themselves employees, writing "Employee ⊑ ∀boss.Employee." In general, existing systems do not allow for terminological cycles, which is a serious restriction, since cycles are ubiquitous in domain models. There are two questions related to cycles: the first is to fix the semantics and the second, based on this, to come up with a proper inference procedure. As to the semantics, we argue that axioms in the schema have the role of narrowing down the models we consider possible. Therefore, they should be interpreted under descriptive semantics, i.e., like in first order logic: an interpretation I satisfies an axiom A ⊑ D if A^I ⊆ D^I, and it satisfies P ⊑ A1 × A2 if P^I ⊆ A1^I × A2^I. The interpretation I is a model of the schema S if it satisfies all axioms in S. The problem of inferences will be dealt with in the next section. The view part contains view definitions of the form V ≐ C, where V is a view name and C is a concept in the view language. Views provide abstractions by defining new classes of objects in terms of the concept and role names introduced in the schema. We refer to "V ≐ C" as the definition of V. The distinction between schema and view names is crucial for our architecture. It ensures the separation between schema and views. A view taxonomy V is a finite set of view definitions such that (i) for each view name there is at most one definition, and (ii) each view name occurring on the right-hand side of a definition has a definition in V. [2. It gives, though, a sufficient condition for being a person: if an individual is asserted to be an Employee, we can deduce that it is a Person, too.] Differently from schema axioms, view definitions give necessary and sufficient conditions. As an example of a view, one can describe the bosses of the employee Bill as the instances of "BillsBosses ≐ ∃boss-of.{BILL}." Whether or not to allow cycles in view definitions is a delicate design decision.
Differently from the schema, the role of cycles in the view part is to state recursive definitions. For example, if we want to describe the group of individuals that are above Bill in the hierarchy of bosses, we can use the definition "BillsSuperBosses ≐ BillsBosses ⊔ ∃boss-of.BillsSuperBosses." But note that this does not yield a definition if we assume descriptive semantics, because for a fixed interpretation of BILL and of the role boss-of there may be several ways to interpret BillsSuperBosses in such a way that the above equality holds. In this example, we only obtain the intended meaning if we assume least fixpoint semantics. This observation holds more generally: if cycles are intended to uniquely define concepts then descriptive semantics is not suitable. However, least or greatest fixpoint semantics or, more generally, a semantics based on the μ-calculus yield unique definitions (see Schild 1994). Unfortunately, algorithms for subsumption of views under such semantics are known only for fragments of the concept language defined in Tables 1 and 2. In this paper, we only deal with acyclic view taxonomies. In this case, the semantics of view definitions is straightforward. An interpretation I satisfies the definition V ≐ C if V^I = C^I, and it is a model for a view taxonomy V if I satisfies all definitions in V. A state of affairs in the world is described by assertions of the form C(a) and R(a, b), where C and R are concept and role descriptions in the view language. Assertions of the form A(a) or P(a, b), where A and P are names in the schema, resemble basic facts in a database. Assertions involving complex concepts are comparable to view updates. A world description W is a finite set of assertions. The semantics is as usual: an interpretation I satisfies C(a) if a^I ∈ C^I, and it satisfies R(a, b) if (a^I, b^I) ∈ R^I; it is a model of W if it satisfies every assertion in W.
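The set-theoretic semantics above (concepts denote subsets of the domain, and an assertion C(a) holds iff a's denotation lies in C's extension) can be encoded directly for a finite interpretation. The following sketch covers a few of the Table 1 constructs; the encoding as nested tuples and the company-domain data are our own invented illustration, not part of the paper.

```python
# Sketch: evaluate concept extensions over a finite interpretation,
# following the paper's set-theoretic semantics for ⊓, ⊔, ¬, ∀R.C, ∃R.C.
# Concepts are nested tuples, e.g. ("all", "boss", ("name", "Employee")).

def ext(concept, domain, conc, role):
    """Extension of a concept expression in the interpretation (domain, conc, role)."""
    op = concept[0]
    if op == "name":                       # atomic concept A -> A^I
        return conc[concept[1]]
    if op == "and":                        # C ⊓ D -> C^I ∩ D^I
        return ext(concept[1], domain, conc, role) & ext(concept[2], domain, conc, role)
    if op == "or":                         # C ⊔ D -> C^I ∪ D^I
        return ext(concept[1], domain, conc, role) | ext(concept[2], domain, conc, role)
    if op == "not":                        # ¬C -> Δ^I \ C^I
        return domain - ext(concept[1], domain, conc, role)
    if op == "all":                        # ∀R.C: every R-successor is in C^I
        _, r, c = concept
        cext = ext(c, domain, conc, role)
        return {d for d in domain if all(e in cext for (x, e) in role[r] if x == d)}
    if op == "some":                       # ∃R.C: some R-successor is in C^I
        _, r, c = concept
        cext = ext(c, domain, conc, role)
        return {d for d in domain if any(e in cext for (x, e) in role[r] if x == d)}
    raise ValueError("unknown construct: %s" % op)

# Invented interpretation: three individuals, Employee ⊆ Person, boss links.
domain = {"ann", "bill", "carol"}
conc = {"Person": {"ann", "bill", "carol"}, "Employee": {"ann", "bill"}}
role = {"boss": {("bill", "ann"), ("ann", "ann")}}   # (x, y): y is a boss of x

# Does the interpretation satisfy Employee ⊑ ∀boss.Employee?
emp_all_boss = ext(("all", "boss", ("name", "Employee")), domain, conc, role)
print(conc["Employee"] <= emp_all_boss)   # → True

# Instance check for an assertion: (∃boss.Person)(bill)
print("bill" in ext(("some", "boss", ("name", "Person")), domain, conc, role))  # → True
```

Note this is model checking in one fixed interpretation; the reasoning services in the paper (view subsumption, instance checking) quantify over *all* models of the knowledge base, which is what makes them hard.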
Summarizing, a knowledge base is a triple Σ = (S, V, W), where S is a schema, V a view taxonomy, and W a world description. An interpretation I is a model of a knowledge base if it is a model of all three components. Reasoning Services For each component, there is a prototypical reasoning service to which the other services can be reduced. Schema Validation: Given a schema S, check whether there exists a model of S that interprets every schema name as a nonempty set. View Subsumption: Given a schema S, a view taxonomy V, and view names V1 and V2, check whether V1^I ⊆ V2^I for every model I of S and V. Instance Checking: Given a knowledge base Σ, an individual a, and a view name V, check whether a^I ∈ V^I holds in every model I of Σ. Schema validation supports the knowledge engineer by checking whether the skeleton of his domain model is consistent. Instance checking is the basic operation in querying a knowledge base. View subsumption helps in organizing and optimizing queries (see e.g. Buchheit et al. 1994). Note that the schema S has to be taken into account in all three services and that the view taxonomy V is relevant not only for view subsumption, but also for instance checking. In systems that forbid cycles, one can get rid of S and V by expanding definitions. This is not possible when S and V are cyclic. Complexity Measures The separation of the core of a TKRS into three components allows us to introduce refined complexity measures for analyzing the difficulty of inferences. The complexity of a problem is generally measured with respect to the size of the whole input. However, with regard to our setting, three different pieces of input are given, namely the schema, the view taxonomy, and the world description. For this reason, different kinds of complexity measures may be defined, similarly to what has been suggested in (Vardi 1982) for queries over relational databases.
We consider the following measures (where |X| denotes the size of X): Schema Complexity: the complexity as a function of |S|; View Complexity: the complexity as a function of |V|; World Description Complexity: the complexity as a function of |W|; Combined Complexity: the complexity as a function of |S| + |V| + |W|. Combined complexity takes into account the whole input. The other three instead consider only a part of the input, so they are meaningful only when it is reasonable to suppose that the size of the other parts is negligible. For instance, it is sensible to analyze the schema complexity of view subsumption because usually the schema is much bigger than the two views which are compared. Similarly, one might be interested in the world description complexity of instance checking whenever one can expect W to be much larger than the schema and the view part. It is worth noticing that for every problem combined complexity, taking into account the whole input, is at least as high as the other three. For example, if the complexity of a problem is O(|S| · |V| · |W|), its combined complexity is cubic, whereas the other ones are linear. Similarly, if the complexity of a given problem is O(|S|^|V|), both its combined complexity and its view complexity are exponential, its schema complexity is polynomial, and its world description complexity is constant. In this paper, we use combined complexity to compare the complexity of reasoning in our architecture with the traditional one. Moreover, we use schema complexity to show how the presence of a large schema affects the complexity of the reasoning services previously defined. View and world description complexity have been investigated (under different names) in (Nebel 1990; Baader 1990) and (Schaerf 1993; Donini et al. 1994), respectively.
For a general description of the complexity classes we use, see (Johnson 1990).

Case Studies

In this section, we study some illustrative examples that show the advantages of the architecture we propose. We extend three systems by a language for cyclic schemas and analyze their computational properties. As argued before, a schema language should be expressive enough to declare isa-relationships, restrict the range of roles, and specify roles to be necessary (at least one value) or functional (at most one value). These requirements are met by the language SL (see Buchheit et al. 1994), which is defined by the following syntax rule:

D → A | ∀P.A | (≥ 1 P) | (≤ 1 P).

Obviously, it is impossible to express in SL that a concept is empty. Therefore, schema validation in SL is trivial. Also, subsumption of schema names is decidable in polynomial time.

We proved that inferences become harder for extensions of SL. If we add inverse roles, schema validation remains trivial, but subsumption of schema names becomes NP-hard. If we add any constructs by which one can express the empty concept (such as disjointness axioms), schema validation becomes NP-hard. However, in our opinion this does not mean that extensions of SL are not feasible. For some extensions, we came up with natural restrictions on the form of schemas that decrease the complexity. Also, it is not clear whether realistic schemas will contain structures that require complex computations.

In all three case studies, the schema language is SL. As view language, we investigate three different languages derived from three actual systems described in the literature, namely CONCEPTBASE (Jarke 1992), KRIS (Baader & Hollunder 1991), and CLASSIC (Borgida et al. 1989).
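The claim that subsumption of schema names in SL is decidable in polynomial time can be given some intuition with a toy sketch. The sketch below is my own simplification, not the paper's algorithm: it assumes the schema is given only as declared isa edges between schema names and ignores the role restrictions (∀P.A, ≥1 P, ≤1 P), under which subsumption of names reduces to graph reachability.

```python
# Illustrative sketch only: schema-name subsumption for an SL-like
# schema reduced to declared isa edges A isa B (role restrictions
# omitted). Under this simplification, "sub is subsumed by sup"
# reduces to reachability over isa edges, which is clearly
# polynomial in the size of the schema, even for cyclic schemas.

def subsumes(schema, sub, sup):
    """Return True if `sup` is reachable from `sub` via isa edges."""
    seen, stack = set(), [sub]
    while stack:
        name = stack.pop()
        if name == sup:
            return True
        if name in seen:
            continue                     # cycle-safe: visit each name once
        seen.add(name)
        stack.extend(schema.get(name, []))
    return False

# Invented example schema: each name maps to its declared super-names.
SCHEMA = {"Student": ["Person"], "Person": ["LegalEntity"], "Company": ["LegalEntity"]}
```

Because visited names are recorded, the check also terminates on cyclic schemas, which is the case this paper cares about.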
For the extended systems, we study the complexity of the reasoning services, where, in particular, we aim at showing two results: (i) reasoning with respect to schema complexity is always tractable, (ii) combined complexity is not increased by the presence of terminological cycles in the schema.

In all three cases, we assume that the view taxonomy is acyclic. For this reason, from this point on we assume that no view names occur in view definitions or in the world description. This can be achieved by iteratively substituting every view name with its definition, which is possible because of our acyclicity assumption (see Nebel 1990 for a discussion of this substitution and its complexity).

The Language of CONCEPTBASE as View Language

In (Buchheit et al. 1994) the query language QL was defined, which is derived from the deductive object-oriented database system CONCEPTBASE under development at the University of Aachen. In QL, roles are formed with all the constructs of Table 2, and concepts are formed according to the syntax rule:

C, D → A | ⊤ | {a} | C ⊓ D | ∃R.C | ∃p ↓ q.

Note that all concepts in QL correspond to existentially quantified formulas. We feel that most practical queries are of this form and do not involve universal quantification. In (Buchheit et al. 1994) it has been shown that view subsumption in QL can be computed in polynomial time w.r.t. combined complexity. We generalized this result.

Theorem 1 With SL as schema language and QL as view language, instance checking is in PTIME w.r.t. combined complexity.

This result illustrates the benefits of the new architecture because by restricting universal quantification to the schema and existential quantification to views we can have both without losing tractability. We proved that for the extension of SL by the construct ∃P.A, the combined complexity of view subsumption becomes NP-hard (whereas the schema complexity remains PTIME). From the results in (Donini et al.
1992a) it follows that adding universal quantification to QL would make view subsumption NP-hard.

The Language of KRIS as View Language

The system KRIS, under development at DFKI, provides as its core the expressive language ALCN, which is defined by the following syntax rule:

C, D → A | C ⊓ D | C ⊔ D | ¬C | ∀P.C | ∃P.C | (≥ n P) | (≤ n P).

The complexity of reasoning with ALCN is known: Subsumption between ALCN-concepts has been proved PSPACE-complete in (Hollunder, Nutt, & Schmidt-Schauß 1990) and instance checking w.r.t. an acyclic TBox and an ABox has recently been proved PSPACE-complete too in (Hollunder 1993). For the combination of SL and ALCN in our architecture, we have the following result:

Theorem 2 With SL as schema language and ALCN as view language, view subsumption and instance checking are PSPACE-complete problems w.r.t. combined complexity and PTIME problems w.r.t. schema complexity.

We conclude that a simple schema with cycles can be added to systems like KRIS without changing the complexity of reasoning. However, if ALCN is also used as the schema language, then schema complexity alone is EXPTIME-hard (Buchheit, Donini, & Schaerf 1993).

The Language of CLASSIC as View Language

Finally, we study the concept language of the CLASSIC system as view language. CLASSIC has been developed at Bell Labs and is used in several applications. We refer to this language as CL. In CLASSIC, individuals are treated in a special way (see Borgida & Patel-Schneider 1993), which we capture by the following syntax and conventions: Individuals are represented by individual concepts B1, ..., Bk that appear neither in the schema nor in the left-hand side of a definition and that are interpreted as mutually disjoint sets. Then the construct (one-of B1 ... Bk) of CLASSIC can be modeled by a disjunction B1 ⊔ ... ⊔ Bk of individual concepts.
The construct (fills P B) can be interpreted as a particular case of existential quantification, which we write as ∃P.B. The same-as construct p ↓ q, which expresses agreement of chains p, q of functional roles, can be modeled by a combination of SL schema axioms and our existential agreement. Now, the syntax of CL is the following:

C, D → A | C ⊓ D | ∀P.C | (≥ n P) | (≤ n P) | B1 ⊔ ... ⊔ Bk | ∃P.B | p ↓ q.

Theorem 3 With SL as schema language and CL as view language, view subsumption and instance checking are problems in PTIME w.r.t. combined complexity.

This shows that adding cyclic schema information does not endanger the tractability of reasoning with CLASSIC, which was one of the main concerns of the CLASSIC designers (Borgida et al. 1989). Note that adding the same-as construct to SL makes view subsumption undecidable (Nebel 1991).

Conclusion

We have proposed to replace the traditional TBox in a terminological system by two components: a schema, where primitive concepts describing frame-like structures are introduced, and a view part that contains defined concepts. We feel that this architecture adequately reflects the way terminological systems are used in most applications.

We also think that this distinction can clarify the discussion about the semantics of cycles. Given the different functionalities of the schema and view part, we propose that cycles in the schema are interpreted with descriptive semantics, while for cycles in the view part a definitional semantics should be adopted.

In three case studies we have shown that the revised architecture yields a better tradeoff between expressivity and the complexity of reasoning.

The schema language we have introduced might be sufficient in many cases. Sometimes, however, one might want to impose more integrity constraints on primitive concepts than those which can be expressed in it.
We see two solutions to this problem: either enrich the language and pay with a more costly reasoning process, or treat such constraints in a passive way by only verifying them for the objects in the knowledge base. The second alternative can be given a logical semantics in terms of epistemic operators (see Donini et al. 1992b).

References

Baader, F., and Hollunder, B. 1991. A terminological knowledge representation system with complete inference algorithm. Proc. PDK-91, LNAI, 67-86.

Baader, F. 1990. Terminological cycles in KL-ONE-based knowledge representation languages. Proc. AAAI-90, 621-626.

Borgida, A., and Patel-Schneider, P. F. 1993. A semantics and complete algorithm for subsumption in the CLASSIC description logic. Submitted.

Borgida, A.; Brachman, R. J.; McGuinness, D. L.; and Alperin Resnick, L. 1989. CLASSIC: A structural data model for objects. Proc. ACM SIGMOD, 59-67.

Buchheit, M.; Jeusfeld, M. A.; Nutt, W.; and Staudt, M. 1994. Subsumption between queries to object-oriented databases. Information Systems 19(1):33-54.

Buchheit, M.; Donini, F. M.; and Schaerf, A. 1993. Decidable reasoning in terminological knowledge representation systems. Journal of Artificial Intelligence Research 1:109-138.

Donini, F. M.; Hollunder, B.; Lenzerini, M.; Marchetti Spaccamela, A.; Nardi, D.; and Nutt, W. 1992a. The complexity of existential quantification in concept languages. Artificial Intelligence 53:309-327.

Donini, F. M.; Lenzerini, M.; Nardi, D.; Nutt, W.; and Schaerf, A. 1992b. Adding epistemic operators to concept languages. Proc. KR-92, 342-353.

Donini, F. M.; Lenzerini, M.; Nardi, D.; and Schaerf, A. 1994. Deduction in concept languages: From subsumption to instance checking. Journal of Logic and Computation 4(92-93):1-30.

Hollunder, B.; Nutt, W.; and Schmidt-Schauß, M. 1990. Subsumption algorithms for concept description languages. Proc. ECAI-90, 348-353.

Hollunder, B. 1993.
How to reduce reasoning to satisfiability checking of concepts in the terminological system KRIS. Submitted.

Jarke, M. 1992. ConceptBase V3.1 User Manual. Aachener Informatik-Berichte 92-17, RWTH Aachen.

Johnson, D. S. 1990. A catalog of complexity classes. Handbook of Theoretical Computer Science, volume A, chapter 2.

Lenzerini, M., and Schaerf, A. 1991. Concept languages as query languages. Proc. AAAI-91, 471-476.

Nebel, B. 1990. Terminological reasoning is inherently intractable. Artificial Intelligence 43:235-249.

Nebel, B. 1991. Terminological cycles: Semantics and computational properties. In Sowa, J. F., ed., Principles of Semantic Networks. Morgan Kaufmann, Los Altos. 331-361.

Schaerf, A. 1993. On the complexity of the instance checking problem in concept languages with existential quantification. Journal of Intelligent Information Systems 2:265-278.

Schild, K. 1994. Terminological cycles and the propositional µ-calculus. Proc. KR-94.

Vardi, M. 1982. The complexity of relational query languages. Proc. STOC-82, 137-146.
Classification of noun phrases into concepts or individuals

Saliha Azzam
CRIL Ingénierie - CAMS - Paris-Sorbonne
174, rue de la République, 92817 Puteaux, France
e-mail: azzam@cril-ing.fr

Abstract

We tackle here the problem of discrimination between instances of the language representation and concepts. This procedure is necessary according to the aim of the application that uses the conceptual structures. We propose linguistic rules for doing this discrimination inside natural language texts, and indicate how these rules are combined to build an accurate procedure.

Introduction

The problem we address here deals with the automatic discrimination inside a natural language (NL) text between instances (i.e., individuals) and concepts. The procedure is integrated into a semantic parser and implemented in the context of the COBALT project (a CEC/LRE project). The terms that are substituted by concepts are noun phrases (NPs), and the problem is that of classifying the NPs as instances or concepts. The necessity of this classification of NPs depends on the aim of the application that uses the conceptual structures. It may be worth noticing that, even if this problem is very general and not restricted to the natural language (NL) processing domain, it is very difficult to find useful suggestions in the literature, at least in a knowledge engineering context. We will only mention here a well-known paper (Brachman et al. 1991) about the CLASSIC system, in which the differentiation between concepts and individuals is essential. In this paper, indications about the rules we suggest for a systematic classification are given (Azzam 1993).

How rules are applied

The rules are ordered and exclusive, i.e., given a rule numbered i, "if condition i then <classification-i>" means "if (not condition i-1 and condition i) then <classification-i>". If a given rule succeeds on a NP, the latter is suppressed from the list of NPs to be classified and the next NP is considered.
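The ordered, exclusive rule scheme just described can be sketched in a few lines. This is a minimal illustration under my own assumptions, not the COBALT implementation: the NP features and the three sample conditions are invented, and the loop also shows the repeated "rule sessions" over unclassified NPs described next.

```python
# Sketch of ordered, exclusive classification rules: rules are tried in
# order, the first condition that holds classifies the NP, the NP is
# then removed from the unclassified set, and sessions repeat until no
# rule fires. The feature names and conditions below are invented.

RULES = [
    (lambda np: np.get("proper_noun"), "instance"),
    (lambda np: np.get("determiner") == "demonstrative", "instance"),
    (lambda np: np.get("generic_context"), "concept"),
]

def classify_nps(nps):
    labels = {}
    changed = True
    while changed:                       # reapply rule sessions to a fixpoint
        changed = False
        for np in nps:
            if np["id"] in labels:
                continue                 # already classified: suppressed
            for condition, label in RULES:
                if condition(np):        # first matching rule wins (exclusive)
                    labels[np["id"]] = label
                    changed = True
                    break
    return labels                        # NPs matching no rule stay unclassified
```

NPs for which no rule fires remain unclassified, matching the failure mode reported in the conclusion below (unclassified rather than incorrect).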
The algorithm has an empirical nature and some rules are incremental, i.e., as soon as a given NP is classified, the result is taken into account for the next NP. The procedure is then reapplied, at the end of a rules session, on the unclassified NPs, and stops when no rule can be applied.

The classification rules

The main features taken into account and involved in the rule conditions are: 1) The NP type, e.g., if it corresponds to a proper noun it is an instance. In order to find the location of the concept which subsumes this instance, semantic and syntactic patterns are checked. 2) The type of the recognized concept: for example, the concepts of the 'abstract qualities' sub-hierarchy do not have any instances. 3) The syntactic structure of the NP. 4) The determiner type of the NP (possessive, demonstrative, ...). 5) The mark of punctual situation in the sentence, such as the temporal adverbs that favor instances. 6) The expressions introducing a general context that favor concepts, e.g., "in general" or conditional assertions like "in default of". 7) The semantic category of the verb. 8) The syntactic role of other classified NPs.

Conclusion

The rules use linguistic knowledge, without involving any additional world knowledge. They are expressed in the form of syntactic and semantic constraints which are tested on the syntactic tree and using the associative knowledge of the representation language. Applied on the COBALT corpus extracted from Reuters news, the rules succeed in 90 percent of cases, i.e., the rules classify 90 percent of NPs in a correct way. The cases of failure are cases of unclassified NPs, not of incorrect classification. Therefore, future work addresses the extension of the rules to process more cases and also the refinement of the rules to avoid any hazardous results.

References

Azzam, S. 1993. Classification of NPs. Technical report, COBALT report ST/06/93, STEP: Paris.

Brachman, R.; McGuinness, D.; Patel-Schneider, P. F.; Resnick, L. A.; and Borgida, A. 1991.
Living with CLASSIC: When and how to use a KL-ONE-like language. In Sowa, J., ed., Principles of Semantic Networks. San Mateo (CA): Morgan Kaufmann.

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.
Case-Based Introspection*

Michael T. Cox
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332-0280
cox@cc.gatech.edu

To effectively reason about one's own knowledge, goals, and reasoning requires an ability to explicitly introspect. A computational model of introspection is a second-order theory that contains a formal language for representing first-order processes and that processes instances of this representation. The reasoning algorithm used to perform such processing is similar to the algorithm used to reason about events and processes represented in the original domain: case-based reasoning. Case-based understanding 1) takes as input some event in its domain along with its context, 2) based on salient cues in the input, retrieves a prior case to interpret the input, then 3) adapts the old solution to fit the current situation, and finally 4) outputs the result as its understanding of the domain. Similarly, case-based introspection 1') takes as input a representation of some prior reasoning [e.g., an instance of case-based understanding], 2') based on salient cues in the input, retrieves a prior case of reflection to interpret the input, then 3') adapts the old case to fit the current situation, and finally 4') outputs the result as its self-understanding. Here, the system's domain is itself.

We have extended the notion of an explanation pattern (XP) from Schank (1986) and Ram (1991). A meta-explanation pattern (Meta-XP) is an explanation of how and why an explanation goes awry in a reasoning system. We have developed two classes of Meta-XPs that facilitate a system's ability to reason about itself and to assist in selecting a learning algorithm or strategy. A Trace Meta-XP (TMXP) explains how a system generates an explanation about the world or itself, and an Introspective Meta-XP (IMXP) explains why the reasoning captured in a TMXP fails.
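The retrieve-and-apply cycle shared by case-based understanding and case-based introspection can be sketched compactly. This is a toy illustration under my own assumptions (cue-overlap retrieval, substitution-based adaptation), not the Meta-AQUA implementation; all names and the two IMXP-like cases are invented.

```python
# Toy sketch of the shared case-based cycle: index the input by salient
# cues, retrieve the stored case with the greatest cue overlap, then
# adapt it by substituting the current bindings. For introspection, the
# input is itself a trace of prior reasoning and the case library holds
# failure explanations (IMXP-like patterns).

def retrieve(library, cues):
    # pick the stored case sharing the most cues with the input
    return max(library, key=lambda case: len(set(case["cues"]) & set(cues)))

def adapt(case, bindings):
    explanation = case["explanation"]
    for var, value in bindings.items():
        explanation = explanation.replace(var, value)
    return explanation

def cbr_cycle(library, cues, bindings):
    return adapt(retrieve(library, cues), bindings)

IMXP_LIBRARY = [
    {"cues": ["expectation-failure", "novel-input"],
     "explanation": "missing knowledge about ?topic"},
    {"cues": ["expectation-failure", "known-input"],
     "explanation": "incorrect index for ?topic"},
]
```

The same four-step loop serves both levels; only the domain of the input (world events vs. reasoning traces) changes.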
The TMXP records the structure of reasoning tasks and the reasons for decisions taken in processing in a chain of decide-compute nodes. The IMXP is a causal structure composed of primitive network structures that represent various failure types from a failure taxonomy. They are retrieved and applied to instances of reasoning captured in TMXPs and guide learning-goal formation after failure occurs.

Case-based introspection has proved useful during blame assignment in a multistrategy learner called Meta-AQUA. Failure analysis cannot always look to the external world for causes. Often the assignment of blame lies with the knowledge and reasoning of the system itself. Therefore, when Meta-AQUA encounters a reasoning failure while reading drug-smuggling stories, it uses case-based introspection to explain why it failed at its reasoning task. The system uses this analysis as a basis to form learning goals and subsequently to construct a learning plan to repair its memory. Figure 1 specifies the algorithm in some detail.

0. Perform and Record Reasoning in TMXP
1. Failure Detection on Reasoning Trace
2. If Failure Then Learn from Mistake:
   - Blame Assignment:
     Compute index as characterization of failure
     Retrieve Introspective Meta-XP
     Apply IMXP to trace of reasoning in TMXP
     If successful XP application then
       Check XP-ASSERTED-NODES
       If one or more nodes not believed then
         Introspective questioning
         GOTO step 0
     Else GOTO step 0
   - Create Learning Goals:
     Compute tentative goal priorities
   - Choose Learning Algorithm(s):
     Expand subgoals
     Build learning plan
     Compute data dependencies
     Order plans
   - Apply Learning Algorithm(s)

Figure 1: Introspective Multistrategy Learning Algorithm (from Ram et al. 1993)

*This research was done with the author's advisor, Ashwin Ram.

References

Schank, R. C. 1986. Explanation Patterns: Understanding Mechanically and Creatively. Hillsdale, NJ: LEA.

Ram, A. 1991. A Theory of Questions and Question Asking.
The Journal of the Learning Sciences, 1(3&4), 273-318.

Ram, A., Cox, M. T., & Narayanan, S. 1993. Goal-Driven Learning in Multistrategy Reasoning and Learning Systems. In A. Ram & D. Leake (eds.), Goal-Driven Learning, Cambridge, MA: MIT Press. Forthcoming.
Empirical knowledge representation generation using n-gram clustering

Robin Collier
Department of Computer Science, University of Sheffield
Regent Court, 211 Portobello Street, Sheffield, England, S1 4DP
r.collier@dcs.shef.ac.uk

Background

The work discussed below enables the automatic generation of structures similar to the key templates which are predefined in information extraction/retrieval conferences; this would be a significant development. The motivation is similar to that of AutoSlog (Riloff 1993), which generates a domain-specific dictionary of concepts, although the approach is quite different.

System Overview

The approach acquires a domain-specific semantic representation by carrying out stochastic analysis of a corpus. Sets of conceptually similar paragraphs are utilised. The corpus and semantic representation are used to generate schematic structures. These are used to concisely store the knowledge contained within existing texts. New texts are processed to dynamically update the knowledge base. Any novel concepts encountered are analysed and a new structure added to the representation. A more comprehensive explanation of this system and references to related work are presented in (Collier 1994).

Paragraph Clustering

The fundamental stage is the representation generation. The approach identifies useful (i.e., frequently occurring and widely distributed) clusters of n-grams within paragraphs, which correlate with other paragraphs within the corpus. Six steps are carried out, utilising five structures.

Structures

The first structure is an array containing a unique numeric entry for each unique word in the corpus. The remaining structures have the same format: identical words are grouped together in an array and ordered by group size; this causes the loss of the word order. The second structure defines the word order; it contains pointers to the next word in the text. The third structure contains the unique integer representing the next word pointed to in the text. The fourth structure contains the length of the phrase associated with each word. The final structure is related to the fourth: each corresponding entry is a pointer to the next identical phrase.

Algorithm

The six steps of the algorithm are:
1. Word/integer generation: creates an associative array containing a numeric entry for each unique word.
2. Integer conversion: translates the text into a numeric representation, and generates structures two and three.
3. Generate phrase lengths: each of the groups of similar words is processed and the longest phrases which occur a multiple number of times are identified. This information is stored in structures four and five.
4. Identify useful n-grams: sets of phrases with similar lengths are ordered by their frequency of occurrence and the n-best are identified, amending structures four and five.
5. Paragraph weight parse: each paragraph is assigned a weight representing the number, size, frequency, distribution, etc. of n-grams existing in that paragraph.
6. Identify useful paragraph clusters: sets of paragraphs containing correlating n-grams are identified, and the n-best extracted by considering the quantity of paragraphs in which they exist, and the quality of the n-grams.

Conclusions

An application developed using this process has the potential to be invaluable for domain specialists who wish to identify documents containing similar conceptual information within extremely large knowledge bases. It is necessary to evaluate the scope and quality of the representations generated. One possibility is to compare, using an identical corpus, the representation generated by a group of experts with that of the system.

References

Collier, R. 1994. N-gram cluster identification during empirical knowledge representation generation.
In Proceedings of the Fifteenth International Conference on Computational Linguistics. Kyoto, Japan: Forthcoming.

Riloff, E. 1993. Automatically constructing a dictionary for information extraction tasks. In Proceedings of the Eleventh National Conference on Artificial Intelligence. Washington, D.C.: MIT Press, Cambridge, MA.
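The core of the six-step algorithm in the abstract above (counting repeated n-grams and weighting paragraphs by them) can be sketched compactly. This sketch uses ordinary dictionaries rather than the five pointer-based structures the abstract describes, so it illustrates the idea, not the data layout; parameters such as the n-gram length and frequency threshold are invented.

```python
# Compact sketch of n-gram based paragraph weighting (roughly steps
# 1-5 of the abstract, without the specialised pointer structures).
# Paragraphs are scored by how many corpus-frequent n-grams they
# contain; high-scoring paragraphs sharing n-grams would then be
# clustered (step 6).
from collections import Counter

def ngrams(words, n):
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def weight_paragraphs(paragraphs, n=2, min_count=2):
    tokenised = [p.lower().split() for p in paragraphs]
    counts = Counter(g for words in tokenised for g in ngrams(words, n))
    useful = {g for g, c in counts.items() if c >= min_count}  # frequent n-grams
    # paragraph weight: number of useful n-gram occurrences it contains
    return [sum(g in useful for g in ngrams(words, n)) for words in tokenised]
```

On three short paragraphs, two of which share the bigram "red pine", the shared-phrase paragraphs receive positive weight and the unrelated one scores zero.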
SodaBot: A Software Agent Environment and Construction System

Michael H. Coen*
MIT Artificial Intelligence Laboratory
545 Technology Sq. NE43-823
Cambridge, MA 02139
mhcoen@ai.mit.edu

Much of the work done in the area of software agents can be placed into one of two categories: (1) highly theoretical treatment of agents' intentions and capabilities; and (2) applied construction of specific agents. However, determining for what (and if) software agents are actually useful requires building many of them, and the agent construction process poses difficult technical challenges.

Building agents generally involves a multi-layered approach ranging from low-level "system-hacking" (e.g., of mailers, networks, etc.) to high-level application development (e.g., a meeting scheduler) and everything in between. Each of these layers can require a substantial amount of independent implementation and debugging time. Additionally, it can be difficult to distribute new agents; they tend to be site-specific in intricate ways, and disconnecting them from their local dependencies can be technically involved; for similar reasons, they can be difficult to install.

This abstract describes SodaBot, a general-purpose software agent user-environment and construction system. In SodaBot, each user is given a personal basic software agent (BSA) which typically runs in the background on her home workstation. The BSA is an agent operating system. By this, we mean that it is a generic (in the sense of universal) computational framework for implementing and running specific agent applications. The BSA is programmed in the SodaBot agent programming language (SodaBotL).¹ As a quick sanity check, see if the following (rough) analogy makes sense: SodaBotL is to SodaBot the way C Language is to Unix.

A BSA simultaneously (via a time-sharing algorithm) runs multiple SodaBotL programs provided both by its owner and by other people.
The BSA architecture disconnects agent programs from the specific computational environment in which they run. They no longer need to be "hard-coded" with specific parameters for particular activities.

Figure 1: The SodaBot Basic Software Agent architecture

The SodaBot agent programming language (SodaBotL) offers high-level primitives and control structures designed around human-level descriptions of agent activity. In SodaBotL, users can easily implement a wide range of typical software agent applications, e.g., personal on-line assistants and meeting scheduling agents. SodaBotL abstracts out the low-level details of agent implementation. In a typical Unix environment, for example, agent creators are freed from the bother of dealing with system calls, mail servers, sockets, and X-windows. It is therefore much easier, for example, to have an agent:

- Interact with the user:
  Get Response {prompt "And what's your opinion?"; timeout in 10 minutes} $pollster.query;

- Handle time:
  Wait until Tuesday before $date: { Display "Reminder, you have an appointment with $person on $date"; }

Additional features of SodaBot include automatic distribution of user-created agents and a graphical user-interface. SodaBot has been built and tested, and it is in current development and use at the MIT AI Lab.

*The research described here was conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-85-K-0124.

¹Pronounced "Soda-Bottle."
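SodaBotL itself is not specified in the abstract, so the following Python sketch only imagines analogues of two of its primitives (a prompt with a timeout, and a time-triggered reminder display); every function name and parameter here is invented for illustration, and the clock and input source are injected so the sketch stays deterministic.

```python
# Hypothetical Python analogue of two SodaBotL-style primitives.
# Not SodaBotL: just an illustration of "high-level primitives
# designed around human-level descriptions of agent activity".

def get_response(prompt, timeout_s, read_input, clock):
    """Return the user's reply, or None if the timeout elapsed first."""
    deadline = clock() + timeout_s
    reply = read_input(prompt)          # blocking read, like "Get Response"
    return reply if clock() <= deadline else None

def due_reminders(reminders, now):
    """Return display strings for every (time, person) reminder now due."""
    return [f"Reminder: appointment with {who} on {when}"
            for when, who in reminders if when <= now]
```

Injecting the clock mirrors the point of the BSA architecture: the agent logic is disconnected from the specific environment (here, real wall-clock time and a real terminal).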
Abstract of the Forest Management Advisory Systems

Yousong Chang and Donald Nute
Artificial Intelligence Programs
University of Georgia
Athens, GA 30602
E-mail: ychang@ai.uga.edu

Expert system technology is a powerful tool for enhancing the decision making capabilities of nonexperts with reasonable knowledge of a domain to expert level in that domain. The U.S.D.A. Forest Service has been working on forest management expert systems for several years. However, building different expert systems for each kind of forest is a demanding task.

To develop a complete expert system in a high-level language, we think the best approach to take is the toolkit approach. The idea is to develop separate modules for different kinds of inferencing, different kinds of user interaction, and different kinds of explanatory facilities. So we developed a toolkit, mostly in Prolog, for building expert systems for forest management. The first components of the toolkit were developed in Visual Basic, Hypertxt for Windows, Windows Notepad, and LPA Prolog for Windows to support development of a management system for red pine forests. This first system is called the Red Pine Forest Management Advisory System (RPFMAS). The same tools used in RPFMAS were then used to develop a system for aspen forests.

Our toolkit architecture includes three logical levels: a domain level, a tactical level, and a strategic level. The domain level should support as many different knowledge representation schemes as possible. We now support three structures:
(1) facts and rules with or without MYCIN-like certainty factors
(2) Prolog databases
(3) procedures

The tactical level includes the inference engines and the user interface. We now have:
(1) backward chaining
(2) forward chaining
(3) mixed backward and forward chaining

Backward and forward chaining will support reasoning with incomplete or uncertain information using either MYCIN-like certainty factors or defeasible rules.
The RPFMAS supports incomplete but certain information. The user interface provides a variety of methods for collecting task-specific information from the user and for communicating conclusions to the user. The user interface of RPFMAS allows reasonable opportunity for the user to review and to change responses without the need to restart the consultation. The explanatory facility, controlled by Visual Basic through DDE to Hypertxt for Windows, provides explanations for questions asked and for conclusions offered.

The strategic level includes tools combining different components of the tactical level to produce a consultation driver suitable for a particular application. It is at this level that the control structure for an entire system is developed. This level includes a variety of tools to help the developer test and tune systems at the domain, the tactical, and the strategic levels.

The basic architecture for our toolkit is a blackboard system implemented in Prolog. Each module reads the blackboard and becomes active when appropriate. Non-Prolog modules are activated by Prolog demons which read the blackboard for them.

The major modules in the RPFMAS are shown below. All the modules are written in Prolog except "Growth simulator" in Visual Basic, "Explanatory facilities" in Visual Basic and Hypertxt, and "Trace" in Windows Notepad.

Figure 1: RPFMAS architecture

References

Rauscher, H. M., and Benzie, J. W. (1990) A Red Pine Management Advisory System: Knowledge Model and Implementation. AI Applications 4(3):27-43.
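The abstract mentions backward chaining with MYCIN-like certainty factors; the sketch below shows one common way such engines combine evidence (a rule's certainty is its CF times the minimum CF of its premises, and positive CFs supporting the same goal combine as a + b(1 - a)). It is a generic illustration under my assumptions, not the RPFMAS toolkit code, and the forestry rule base is invented.

```python
# Generic sketch of backward chaining with MYCIN-style certainty
# factors (positive CFs only, for brevity). A rule is a triple
# (head, premises, rule_cf). Not the RPFMAS implementation.

def backchain(goal, rules, facts, depth=10):
    if goal in facts:
        return facts[goal]                # known fact with its CF
    if depth == 0:
        return 0.0                        # guard against rule cycles
    combined = 0.0
    for head, premises, rule_cf in rules:
        if head != goal:
            continue
        # conjunction of premises: take the weakest premise
        premise_cf = min((backchain(p, rules, facts, depth - 1)
                          for p in premises), default=1.0)
        cf = rule_cf * premise_cf
        if cf > 0:
            combined = combined + cf * (1 - combined)  # parallel combination
    return combined

RULES = [
    ("thin_stand", ["overstocked", "healthy"], 0.8),
    ("overstocked", ["high_basal_area"], 1.0),
]
FACTS = {"high_basal_area": 1.0, "healthy": 0.9}
```

With these invented figures, the recommendation to thin the stand inherits the weaker premise: 0.8 × min(1.0, 0.9) = 0.72.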
Simplifying Bayesian Belief Nets while Preserving MPE or MPGE Ordering

YaLing Chang
Computer Science Dept., Graduate Center, City Univ. of New York, 33 W. 42 St., NYC 10036
Email: lay@cunyvmsl.gc.cuny.edu

The abstraction of probability inference is a process of searching for a representation in which only the desirable properties of the solutions are preserved. Simplification is one such abstraction, which reduces the size of large databases and speeds transmission and processing of probabilistic information [Sy & Sher, 94]. Given a set of evidence Se, human beings are often interested in finding the few Most Probable Explanations (MPE) or Most Probable General Explanations (MPGE) in a Bayesian belief network, i.e., to identify and order a set of Hi's or Li's (for MPGE), P(H1|Se) ≥ ... ≥ P(Hn|Se), or P(L1|Se) ≥ ... ≥ P(Ln|Se), where Hi is an instantiation of all non-evidence variables and Li is an instantiation of a subset of all non-evidence variables. Furthermore, the ordering is often more important than the quantitative probability values in certain domains such as diagnosis and prediction.

The complexity of deriving MPEs or MPGEs is exponential if straightforward computation is employed. Various algorithms have been developed to reduce the computational complexity. Yet, so far no attempt has been made to explore the idea of simplification based on searching for an alternative representation, i.e., a different Bayesian belief network which preserves the MPE or MPGE ordering relevant to queries of particular interest. This approach will try to reduce the connectivity of the network so that it is more sparse but has a probability distribution which preserves the orders of MPEs or MPGEs. Our idea is to relax the probability constraints so that only the orders of MPEs or MPGEs are preserved.
Since many probability distributions may preserve the desired orders, different belief networks may be realized, because a belief network is uniquely defined by its topological structure and probability distribution. Ideally, we hope to find a network whose structure has only a few connections (i.e., a network which manifests many independency relations). If such a network can be found, one can discard irrelevant information and reduce the size of the Bayesian belief network. Consequently, the complexity of deriving MPEs and MPGEs is reduced. One can conceptualize the abstraction process as a search which attempts to find appropriate network structures and probability distributions. In finding the structure of a simplified network, the MDL (minimum description length) principle can be used. The idea is to define a cost function whose value is proportional to the sparsity of a network [Lam and Bacchus, 93]. In finding the probability distributions, one may use a measure which can quantify the independency relations represented by the network. Cross entropy is one such measure, proposed by Chow and Liu. So far all the research on automated learning or construction of Bayesian belief networks has taken the approach of attempting to recover the original probability distribution as closely as possible. Our approach is different in the sense that deviation is allowed and encouraged as long as the probability distribution preserves the orders of MPEs or MPGEs and a simple network structure can be obtained. A heuristic method, which is reported elsewhere [Sy 94a], has been developed to achieve the construction of a network with a simple structure. An experimental study will be conducted on a set of multiply connected Bayesian belief networks. Each of these networks consists of eight variables and has high interconnectivity among the variables.
After the new network with a simpler structure is generated, comparisons will be made between the orders generated by using the original belief network and the simplified network generated by the heuristic algorithm. The particular questions to be addressed in this experimental study are listed below: (1) Although it is theoretically possible to have networks which preserve the orderings of all possible MPEs and MPGEs, the level of complexity involved in finding these networks is still unknown. We would like to know the possible ways of characterizing the complexity. (2) If the networks mentioned in (1) are found, we would like to know whether there are any networks whose structure is simple, such as a singly connected configuration. If a singly connected configuration does not exist, we would like to know which one, among these multiply connected networks, is the best in terms of the factors affecting the computation, and also which one is more sparse and preserves more independency assumptions. (3) If we can get a belief network which preserves all possible MPE and MPGE orderings, then this new Bayesian belief network is a new representation of the underlying probability inference system. This network is considered a summary of the original network within the context of the abstraction theory proposed by Sy and Sher [94b]. Otherwise this network is considered a simplification. When a network is only a simplification, we would like to know the percentage of the orderings being preserved. Acknowledgments This work is part of ongoing Ph.D. research under the supervision of Prof. Bon K. Sy. References [Lam and Bacchus, 1993] "Using Causal Information and Local Measure to Learn Bayesian Networks," Proc. of the 9th Conference of Uncertainty in AI, 1993. [Sy B.K. 1994a] "Abstract Belief Networks with Preserved Probabilistic Ordering," submitted to the 10th Conf. of Uncertainty in Artificial Intelligence, 1994. [Sy B.K. and Sher D.B.
1994b] "An Abstraction Theory for Probabilistic Inference," submitted to the Journal of Artificial Intelligence Research.
Decidability of Contextual Reasoning Vanja Buvač HB 455, Dartmouth College, Hanover, New Hampshire 03755. vanja@dartmouth.edu Contexts were first suggested in McCarthy's Turing Award paper (McCarthy 1987) as a possible solution to the problem of generality in AI. McCarthy's concern with the existing AI systems has been that they can reason only about some particular, predetermined task. When faced with slightly different circumstances they need to be completely rewritten. In other words, AI systems lack generality. Cyc (Guha & Lenat 1990), a large common-sense knowledge base currently being developed at MCC, is one example of where contexts have already been put to use in an attempt to solve the problem of generality. Because of the complexity of the problem of generality, it has been speculated that any reasoning system which would be able to solve this problem would itself be computationally unacceptable. The purpose of this paper is to show that propositional contextual reasoning is decidable. Propositional logic of context extends classical propositional logic with a new modality, ist(c, φ), used to express that the sentence φ is true in the context c. We first give a short sketch of the syntax and the semantics of the language of context, as proposed in (McCarthy 1993) and formalized in (Buvač & Mason 1993). To define the syntax, we begin with two distinct countable sets: K, the set of all contexts, and P, the set of propositional atoms. The set, W, of well-formed formulas is built up from the propositional atoms, P, using the usual propositional connectives (negation and implication) together with the ist modality: W = P ∪ (¬W) ∪ (W → W) ∪ ist(K, W). To define the semantics we first need to introduce some mathematical notation. If X is a set then P(X) is the set of subsets of X. X* is the set of all finite sequences over X, and we let x̄ = [x1, . . . , xn] range over X*. The infix operator * is used for appending sequences.
Drawing on the intuition that a context describes a state of affairs, and that the nature of the context may itself be context dependent, i.e. that a context may appear different when viewed from different contexts, a model, 𝔐, is defined to be a function which maps a context sequence to a set of truth assignments. Formally, 𝔐 : K* → P(P → 2). Satisfaction is a relation on ⟨𝔐, ν, c̄, φ⟩, written as 𝔐, ν ⊨c̄ φ, and defined inductively by: 𝔐, ν ⊨c̄ p iff ν(p) = 1; the clauses for ¬ and → are defined in the usual way; and 𝔐, ν ⊨c̄ ist(c, φ) iff 𝔐, ν′ ⊨c̄∗[c] φ for every ν′ ∈ 𝔐(c̄ ∗ [c]). We write 𝔐 ⊨c̄ φ iff for all ν ∈ 𝔐(c̄), 𝔐, ν ⊨c̄ φ; we say that φ is valid in c̄ iff 𝔐 ⊨c̄ φ for all 𝔐. We proceed to define some notation, needed for the decidability results. The vocabulary of a sentence φ given in c̄, Vocab(c̄, φ), is a relation on a context sequence and the atoms which occur in the scope of that context sequence: Vocab(c̄, φ) = {⟨c̄, φ⟩} if φ is atomic; Vocab(c̄, φ0) if φ is ¬φ0; Vocab(c̄, φ0) ∪ Vocab(c̄, φ1) if φ is φ0 → φ1; and Vocab(c̄ ∗ [c], φ0) if φ is ist(c, φ0). The restriction of a truth assignment, ν, with respect to Vocab(c̄0, φ) is defined to be the unique truth assignment ν′ such that ν′(p) = ν(p) if ⟨c̄, p⟩ ∈ Vocab(c̄0, φ), and ν′(p) = 0 if ⟨c̄, p⟩ ∉ Vocab(c̄0, φ). The definition extends in the natural way to 𝔐|Vocab(c̄0, φ), the restriction of the model 𝔐 with respect to the vocabulary Vocab(c̄0, φ). Theorem (Finite Model Property): 𝔐 ⊨c̄ φ iff 𝔐|Vocab(c̄, φ) ⊨c̄ φ. The theorem is proved by induction on the structure of the formula φ. Corollary (Decidability): There is an effective procedure which will determine whether or not a formula given in some context is valid. References Buvač, S., and Mason, I. 1993. Propositional logic of context. In AAAI-93. Guha, R. V., and Lenat, D. B. 1990. Cyc: A midterm report. AI Magazine 11(3):32-59. McCarthy, J. 1987. Generality in artificial intelligence. Comm. of ACM 30(12):1030-1035. McCarthy, J. 1993. Notes on formalizing context. In IJCAI-93.
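The satisfaction relation above can be rendered as a short evaluator. This is a sketch under one standard reading of the semantics (ist(c, φ) holds at a point iff φ is valid in the extended context sequence); the model, formulas, and encoding below are illustrative assumptions, not taken from the paper.

```python
def sat(M, v, cbar, phi):
    """M, v |=_cbar phi.  M maps context sequences (tuples) to sets of
    truth assignments, each a frozenset of the atoms that are true."""
    op = phi[0]
    if op == 'atom':
        return phi[1] in v
    if op == 'not':
        return not sat(M, v, cbar, phi[1])
    if op == 'imp':
        return (not sat(M, v, cbar, phi[1])) or sat(M, v, cbar, phi[2])
    if op == 'ist':  # ist(c, phi0): phi0 must be valid in cbar * [c]
        c, phi0 = phi[1], phi[2]
        ext = cbar + (c,)
        return all(sat(M, w, ext, phi0) for w in M[ext])
    raise ValueError(op)

def valid(M, cbar, phi):
    """phi is valid in cbar iff it holds under every assignment M gives cbar."""
    return all(sat(M, v, cbar, phi) for v in M[cbar])

# Hypothetical model: in the outer context p holds; in the context
# sequence ('c',) every assignment makes q true.
M = {(): {frozenset({'p'})},
     ('c',): {frozenset({'q'}), frozenset({'p', 'q'})}}

print(valid(M, (), ('ist', 'c', ('atom', 'q'))))  # True
```

Note that ist is evaluated against all assignments of the extended sequence, which is what makes its truth value independent of the particular assignment ν at the outer context.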
Decision-Theoretic Plan Failure Debugging and Repair Lisa J. Burnell¹ Department of Computer Science and Engineering, The University of Texas at Arlington, Box 19015, Arlington, Texas 76019 burnell@cse.uta.edu A number of strategies exist for recovery from execution-time plan failures. One manner in which these strategies differ is the degree of dependence on the reliability and availability of the planner's knowledge. The best strategy, however, may be dependent on a number of considerations, including the type of plan failure, the criticality of the failure, the availability of resources, and the reliability and availability of the knowledge involved in a given plan failure instance. We are examining a decision-theoretic approach to diagnose plan failures and to dynamically select from multiple failure recovery strategies when an execution-time plan failure occurs. Existing failure recovery strategies generally classify, with assumed or proven certainty, the type of error that occurred during plan execution and then select a fixed strategy to recover from that error. Assumptions regarding the accuracy and completeness of the planner's domain model vary. On one end are approaches that use deterministic heuristics or purely syntactic analyses to debug and repair. These approaches are efficient and require limited knowledge, but generally are limited in the level of diagnosis and repair they can perform. On the other end are logic-based approaches, which are robust but knowledge and resource intensive. Such approaches are not always feasible. Incomplete or uncertain knowledge of previous planning actions, as in multi-agent planners, precludes a complete logical analysis. Even when possible, the costs of collecting and reasoning with complete information may be intractable, especially given time pressures and other resource constraints.
The goal of our work is the development of an approach that can intelligently select and apply failure recovery strategies that are appropriate to the situation and that can cope with uncertainty. There are three primary components of our research: diagnosis of plan failures, plan repair, and planner modification. ¹This work is supported in part by the Advanced Research Projects Agency, State of Texas Advanced Technology Projects, and the National Science Foundation. When a failure is detected, we use a probabilistic method for determining the error class and the source(s) of the error. This method has already proven effective in debugging programs (Burnell & Horvitz 1993), and is being adapted for debugging generated plans. Plan repair strategies are selected using a decision-theoretic approach similar to (Howe & Cohen 1991), with the added feature of dealing with potential uncertainties in the error classification and resource constraints. Finally, machine learning methods are employed, when warranted, to correct and refine the planner itself. Our approach uses probabilistic models, represented as belief networks, to construct an ordering over classes of errors, to identify the likely source(s) of the error, and to recommend an appropriate repair strategy. The belief networks model the uncertain relationships about the nature and structure of planning actions and the likelihood of types of errors. Value-of-information calculations recommend which computationally complex logical analyses are worth undertaking to collect evidence. Also modeled is the decision problem of selecting a preferred repair strategy, which may include planner modification, based on the likely error class, failure criticality, and resource availability, as in (Horvitz 1988). For example, in a multi-agent planner, repair strategies may include local selection of a reactive failure-recovery action or requesting replanning from a more sophisticated planner. References Burnell, L. J. and Horvitz, E. J.
1993. A Synthesis of Logical and Probabilistic Reasoning for Program Understanding and Debugging. Proc. of the 9th Conf. on Uncertainty in AI, 285-291. San Mateo, CA: Morgan Kaufmann. Horvitz, E. J. 1988. Reasoning under varying and uncertain resource constraints. Proc. of the 7th National Conf. on Artificial Intelligence, 111-116. San Mateo, CA: Morgan Kaufmann. Howe, A. and Cohen, P. 1991. Failure Recovery: A Model and Experiments. Proc. of the 9th National Conf. on Artificial Intelligence, 801-808. Menlo Park, CA: AAAI.
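The decision problem described in this abstract, choosing a repair strategy under an uncertain error classification, can be sketched as an expected-utility maximization. The error classes, strategies, probabilities, and utilities below are purely hypothetical illustrations, not values from Burnell's work.

```python
def best_repair(p_error, utility):
    """Pick the repair strategy with maximal expected utility under an
    uncertain error classification.
    p_error: {error_class: probability}
    utility: {strategy: {error_class: utility}}   (illustrative only)"""
    def eu(strategy):
        return sum(p * utility[strategy][e] for e, p in p_error.items())
    return max(utility, key=eu)

# Hypothetical numbers: a cheap local fix vs. full replanning.
p_error = {'missing_precondition': 0.7, 'faulty_domain_model': 0.3}
utility = {
    'local_patch': {'missing_precondition': 8, 'faulty_domain_model': 1},
    'replan':      {'missing_precondition': 5, 'faulty_domain_model': 6},
}
print(best_repair(p_error, utility))  # local_patch
```

With these numbers the expected utilities are 5.9 for the local patch and 5.3 for replanning; shifting probability mass toward the domain-model error flips the recommendation, which is the behavior the abstract argues a fixed recovery strategy cannot capture.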
Integrating Case-Based and Rule-Based Reasoning in Law Stefanie Brüninghaus Lehrstuhl für Praktische Informatik I, Universität Mannheim, D-68131 Mannheim, GERMANY steffi@pi1.informatik.uni-mannheim.de Motivation This paper introduces DANIEL,¹ an architecture for the integration of case-based reasoning and rule-based reasoning for legal interpretation. Rather than interleaving the reasoners and assuming their complementarity, as in previous approaches, they are applied concurrently. Conflicting interpretations are handled explicitly, based on domain knowledge and on the notion of redundancy. The principal problems of legal interpretation are the lack of deep models for legal reasoning, the existence of inherently ill-defined predicates, and the frequent use of open-textured concepts, as pointed out in (Rissland & Skalak 1991). A hybrid approach to representing the legal sources and the use of meta-knowledge seems appropriate to solve these problems. The scope of DANIEL is not limited to this particular domain, since the noted difficulties do not occur exclusively, but prototypically, in the law. Architecture and Function of DANIEL The main knowledge sources in the legal domain are legislation and case law. It has been shown comprehensively in (Rissland & Skalak 1991) that case law and legislation are best mapped onto a case base and a rule base, respectively. In the proposed architecture, the reasoners are integrated via a blackboard, which allows an easy exchange of data and a hierarchical integration of multiple statutes. Since in a given case each knowledge source is likely to contribute to the solution and can be assumed to obtain a result on its own, the problem solvers are applied concurrently and autonomously. Their local results are evaluated by a rule-based coordination component, whose meta-knowledge is derived from general legal doctrine and from the capabilities of the problem solvers.
In case of differing local results, the most probable solution is chosen according to the legally determined binding force of the respective legal source, the degree of open-texturedness of the predicates, and the similarity between the given and the retrieved case. ¹Distributed Architecture for the kNowledge-based Integration of Expert systems in Law The function of DANIEL can be illustrated by a simplified example: From the statutes, it can be derived that built-in laundry facilities belong to a building, and that therefore the respective expenses must be manufacturing cost. The German Supreme Tax Court, however, decided that a washing machine fixed to the concrete is an extra asset not included in the manufacturing cost of the building. Since the definition of the manufacturing cost is rather open-textured, while the cited case is an exact match, case law has to be applied. Related Work For space limitations, only the most prominent and obviously very similar system, CABARET (Rissland & Skalak 1991), will be discussed here. It is a domain-independent reasoning shell that incorporates a case-based and a rule-based reasoner via a blackboard. While CABARET uses control heuristics to interleave the two reasoners, the reasoners in DANIEL are applied concurrently. Also, the coordination component does not work heuristically; rather, it contains domain-specific knowledge, in order to overcome the lack of a deep domain model. In other approaches, from various domains, cases are generally considered complementary to rules, and applied accordingly. Even though it can be demonstrated that cases cannot be transformed to rules and vice versa without loss of information, the mutual influence of rules and cases is not explicitly modeled.
Merging the reasoning chains of a case-based and a rule-based reasoner is generally not advisable for the following reasons: • different binding force/validity of the knowledge sources, • incompatible partial results of the problem solvers due to their different internal representations and semantics, • redundant and contradictory knowledge in the problem solvers (only part of it is complementary and disjunct), • interdependency of the legal sources (existing case law influences future legislation and vice versa). Apart from avoiding these problems by separating the reasoners, the concurrent application enables their mutual control and takes advantage of their complementarity. In this way, the solution quality can be increased considerably. References Rissland, E., and Skalak, D. 1991. CABARET: Rule Interpretation in a Hybrid Architecture. International Journal on Man-Machine Studies 34(1):839-887.
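The coordination step described above (binding force, open-texturedness, case similarity) can be caricatured in a few lines. The policy, names, and threshold below are illustrative assumptions in the spirit of the abstract, not DANIEL's actual coordination rules.

```python
def coordinate(rule_result, case_result, open_textured, similarity,
               threshold=0.9):
    """Toy coordination rule: when the two reasoners disagree, follow
    case law only if the statutory concept is open-textured and the
    retrieved case is a close match; otherwise the statute prevails.
    (Policy and threshold are hypothetical.)"""
    if rule_result == case_result:
        return rule_result
    if open_textured and similarity >= threshold:
        return case_result  # precedent decides a vague statutory concept
    return rule_result

# Washing-machine example: the statute yields "manufacturing cost",
# the exactly matching precedent yields "separate asset".
print(coordinate('manufacturing_cost', 'separate_asset',
                 open_textured=True, similarity=1.0))  # separate_asset
```

The point of keeping the reasoners separate is visible here: each produces a complete local result, and the conflict is resolved by explicit meta-knowledge rather than by merging their reasoning chains.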
Probabilistic Knowledge of External Events in Planning Jim Blythe School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213 blythe@cs.cmu.edu My research tries to improve the robustness of plans by using limited knowledge about external events. These are events that are not directly caused by the planning agent. I use a discrete-time model and assume that the probability of occurrence for a particular type of event in a given situation is known, but the specific occurrence of such an event cannot be predicted with certainty. For example, when a bicycle is left outside a building, there is some probability p that it will be stolen at each time point. The probability that the bicycle is still outside the building after n time units is then (1 − p)ⁿ, neglecting the effects of other possible events. "Reactive" planners, which create plans in response to exceptions during execution, are subject to severe time constraints that limit their performance. My aim is to create contingent plans off-line that have a higher probability of success than could be achieved under these constraints and without considering external events. Stochastic techniques such as policy iteration are not yet tractable for realistic planning problems, so I concentrate on classical planning systems. Most other planning systems that reason about uncertainty deal with non-deterministic effects and/or incomplete knowledge of the state of the world, e.g. (Kushmerick, Hanks, & Weld 1993; Pryor & Collins 1993). However, these models cannot easily be used to represent uncertainty about external events. There are two main reasons for this, both related to the frame problem and stemming from the fact that uncertainty about events depends on the actions being performed only through the state.
Firstly, uncertainty about the occurrence of an event while an action is performed would have to be modelled by probabilistic effects for every action that could be performed at the same time as the event. For instance, every action that can be performed while the bicycle is outside would have to include the bicycle being stolen as a possible effect. Thus, the number of distinct possible outcomes for each action is exponential in the number of event types that could simultaneously occur. This is a misleading representation, as well as a very cumbersome one. Secondly, consider a planning agent that leaves a bicycle outside a building, performs some actions inside the building, and then returns to cycle away. The probability that the bicycle is outside the building, as required, upon the agent's return depends on the length of time spent inside the building, but this fact could not be modelled in the situation calculus using non-deterministic actions alone. I have designed a planner based on Prodigy 4.0 that considers external events in order to increase the expected utility of a plan, and applied it to a transportation domain. I represent events in a STRIPS-like fashion, with preconditions specifying when events are possible, add and delete lists specifying their effects, and with a probability attached to each event (Blythe 1994). When the preconditions are satisfied, the event may occur with the given probability. The planner first produces a plan without considering external events, and may then use three different routines to find possible sequences of events that would cause the plan to fail: a Monte Carlo simulation, a Markov chain analysis, and an exhaustive search for single event instances that can defeat the plan. There is no exhaustive search for sequences of events that would defeat the plan, as this would in general take far longer than the planning phase.
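The Monte Carlo projection routine can be illustrated on the bicycle example itself. This is a sketch under a simplifying assumption (theft is the only event modeled, with a fixed per-step probability); the numbers are arbitrary, and the closed form (1 − p)ⁿ from the text serves as the analytic check.

```python
import random

def simulate(p_theft, steps, trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that a bicycle left
    outside survives `steps` discrete time units, given per-step theft
    probability p_theft (the only external event modeled here)."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        # The bicycle survives a trial if the theft event fires at no step.
        if all(rng.random() >= p_theft for _ in range(steps)):
            survived += 1
    return survived / trials

p, n = 0.05, 10
analytic = (1 - p) ** n          # the closed form from the text
estimate = simulate(p, n)
print(round(analytic, 3), round(estimate, 3))  # both close to 0.599
```

The same sampler extends naturally to several event types with state-dependent preconditions, which is exactly the case where the closed form stops being available and simulation earns its keep.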
Once sequences of events are found that can cause the plan to fail, the system considers three different ways to repair each one: (1) Conditional steps can be added that address the "bad" situations that arise, for example hailing a cab if the bicycle is stolen. (2) Steps can be added to make bad events inapplicable or less likely, for example by locking the bicycle. (3) The existing steps can be re-ordered or moved along a timeline to reduce the probability of bad events, for example re-ordering the plan to spend as little time as possible inside the building. Since some events have zero duration, this method can sometimes eliminate problems entirely. In experiments, these techniques improve the probability of plan success greatly. I plan to improve plan projection, using techniques such as Bayes nets to relax independence assumptions, and to study various domains. I also aim to prove convergence of the algorithm to plans of maximal utility. References Blythe, J. 1994. Decision-theoretic subgoaling in goal-directed search. AAAI Spring Symposium on Decision-Theoretic Planning. Kushmerick, N.; Hanks, S.; and Weld, D. 1993. An algorithm for probabilistic planning. Technical Report 93-06-03, Department of Computer Science and Engineering, University of Washington. Pryor, L., and Collins, G. 1993. Cassandra: Planning for contingencies. Technical Report 41, The Institute for the Learning Sciences.
Boosting the correspondence between description logics and propositional dynamic logics Giuseppe De Giacomo and Maurizio Lenzerini Dipartimento di Informatica e Sistemistica, Università di Roma "La Sapienza", Via Salaria 113, 00198 Roma, Italia {degiacom,lenzerini}@assi.dis.uniroma1.it Abstract One of the main themes in the area of Terminological Reasoning has been to identify description logics (DLs) that are both very expressive and decidable. A recent paper by Schild showed that this issue can be profitably addressed by relying on a correspondence between DLs and propositional dynamic logics (PDLs). However, Schild left open three important problems, related to the translation into PDLs of functional restrictions on roles (both direct and inverse), number restrictions, and assertions on individuals. The work reported in this paper presents a solution to these problems. The results have a twofold significance. From the standpoint of DLs, we derive decidability and complexity results for some of the most expressive logics appeared in the literature, and from the standpoint of PDLs, we derive a general methodology for the representation of several forms of program determinism and for the specification of partial computations. Introduction The research in Artificial Intelligence and Computer Science has always paid special attention to formalisms for the structured representation of information. In Artificial Intelligence, the investigation of such formalisms began with semantic networks and frames, which have been influential for many formalisms proposed in the areas of knowledge representation, databases, and programming languages, and developed towards formal logic-based languages, that will be called here description logics¹ (DLs).
Generally speaking, DLs represent knowledge in terms of objects (individuals) grouped into classes (concepts), and offer structuring mechanisms for both characterizing the relevant properties of classes in terms of relations (roles), and establishing several interdependencies among classes (e.g. is-a). Two main advantages in using structured formalisms for knowledge representation were advocated, namely, epistemological adequacy and computational effectiveness. ¹Terminological logics and concept languages are other possible names. In the last decade, many efforts have been devoted to an analysis of these two aspects. In particular, starting with (Brachman & Levesque 1984), the research on the computational complexity of the reasoning tasks associated with DLs has shown that in order to ensure decidability and/or efficiency of reasoning in all cases, one must renounce some of the expressive power (Levesque & Brachman 1987; Nebel 1988; Nebel 1990a; Donini et al. 1991a; Donini et al. 1991b; Donini et al. 1992). These results have led to a debate on the trade-off between expressive power of representation formalisms and worst-case efficiency of the associated reasoning tasks. This issue has been one of the main themes in the area of DLs, and has led to at least four different approaches to the design of knowledge representation systems. • In the first approach, the main goal of a DL is to offer powerful mechanisms for structuring knowledge, as well as sound and complete reasoning procedures, while little attention has to be paid to the (worst-case) computational complexity of the reasoning procedures. Systems like OMEGA (Attardi & Simi 1981), LOOM (MacGregor 1991) and KL-ONE (Brachman & Schmolze 1985) can be considered as following this approach.
• The second approach advocates a careful design of the DLs so as to offer as much expressive power as possible while retaining the possibility of sound, complete, and efficient (often polynomial in the worst case) inference procedures. Much of the research on CLASSIC (Brachman et al. 1991) follows this approach. • The third approach, similarly to the first one, advocates very expressive languages, but, in order to achieve efficiency, accepts incomplete reasoning procedures. No general consensus exists on what kind of incompleteness is acceptable. Perhaps the most interesting attempts are those resorting to a non-standard semantics for characterizing the form of incompleteness (Patel-Schneider 1987; Borgida & Patel-Schneider 1993; Donini et al. 1992). • Finally, the fourth approach is based on what we can call "the expressiveness and decidability thesis", and aims at defining DLs that are both very expressive and decidable, i.e. designed in such a way that sound, complete, and terminating procedures exist for the associated reasoning tasks. Great attention is given in this approach to the complexity analysis for the various sublogics, so as to devise suitable optimization techniques and to single out tractable subcases. This approach is the one followed in the design of KRIS (Baader & Hollunder 1991). The work presented in this paper adheres to the fourth approach, and aims at both identifying the most expressive DLs with decidable associated decision problems, and characterizing the computational complexity of reasoning in powerful DLs. In order to clearly describe this approach, let us point out that by "very expressive DL" we mean: 1. The logic offers powerful constructs in order to form concept and role descriptions.
Besides the constructs corresponding to the usual boolean connectives (union, intersection, complement), and existential and universal quantification on roles, three important types of construct must be mentioned, namely, those for building complex role descriptions, those for expressing functional restrictions (i.e. that a role is functional for a given concept), and those for expressing number restrictions (a generalization of functional restrictions stating the minimum and the maximum number of links between instances of classes and instances of roles). 2. Besides the possibility of building sophisticated class descriptions, the logic provides suitable mechanisms for stating necessary and/or sufficient conditions for the objects to belong to the extensions of the classes. The basic mechanism for this feature is the so-called inclusion assertion, stating that every instance of a class is also an instance of another class. Much of the work done in DLs assumes that all the knowledge on classes is expressed through the use of class descriptions, and rules out the possibility of using this kind of assertion (note that the power of assertions vanishes with the usual assumption of acyclicity of class definitions). 3. The logic allows one to assert properties of single individuals, in terms of the so-called membership assertions. Two membership assertions are taken into account, one for stating that an object is an instance of a given class, and another one for stating that two objects are related by means of a given role. Note that, among the constructs for role description, the one for inverse of roles has a special importance, in particular because it makes DLs powerful enough to subsume most frame-based representation systems, semantic data models and object-oriented database models proposed in the literature.
Also, functional restrictions on atomic roles and their inverse are essential for real-world modeling, especially because the combined use of functional restrictions and inverse of atomic roles allows n-ary relations to be correctly represented. Two main approaches have been developed following the "expressiveness and decidability thesis". The first approach relies on the tableau-based technique proposed in (Schmidt-Schauß & Smolka 1991; Donini et al. 1991a), and led to the identification of a decision procedure for a logic which fully covers points (2) and (3) above, and only partially point (1), in that it does not include the construct for inverse roles (Buchheit, Donini, & Schaerf 1993). The second approach is based on the work by Schild, which singled out an interesting correspondence between DLs and several propositional dynamic logics (PDLs), which are modal logics specifically designed for reasoning about program schemes. The correspondence is based on the similarity between the interpretation structures of the two logics: at the extensional level, objects in DLs correspond to states in PDLs, whereas connections between two objects correspond to state transitions. At the intensional level, classes correspond to propositions, and roles correspond to programs. The correspondence is extremely useful for at least two reasons. On one hand, it makes clear that reasoning about assertions on classes is equivalent to reasoning about dynamic logic formulae. On the other hand, the large body of research on decision procedures in PDLs (see, for example, Kozen & Tiuryn 1990) can be exploited in the setting of DLs, and, conversely, the various works on tractability/intractability of DLs (see, for example, Donini et al. 1991b) can be used in the setting of PDLs.
However, in order to fully exploit this correspondence, we need to solve at least three problems left open in (Schild 1991), concerning how to fit functional restrictions (on both atomic roles and their inverse), number restrictions, and assertions on individuals, respectively, into the correspondence. Note that these problems refer to points (1) and (3) above. In this paper we present a solution to each of the three problems, for several very expressive DLs. The solution is based on a particular methodology, which we believe has its own value: the inference in DLs is formulated in the setting of PDL, and in order to represent functional restrictions, number restrictions and assertions on individuals, special "constraints" are added to the PDL formulae. The results have a twofold significance. From the standpoint of DLs, we derive decidability and complexity results for some of the most expressive languages appeared in the literature (the only language which is not subsumed by ours is the one studied in (Buchheit, Donini, & Schaerf 1993), whose expressive power is incomparable with respect to the DLs studied here), and from the standpoint of PDLs, we derive a general methodology for the representation of several forms of program determinism corresponding to functional² and number restrictions, and for the specification of partial computations (assertions on individuals). ²Note that no decidability results were known for a PDL where both atomic programs and their converse can be made (locally) deterministic. The paper is organized as follows. In Section 2, we recall the basic notions of both DLs and PDLs. In Section 3, we present the result on functional restrictions, showing that Converse PDL is powerful enough to allow the representation of functional restrictions on both atomic roles and their inverse. In Section 4, we outline the generalization to the case of number restrictions, and in Section 5 we deal with the problem of representing assertions on individuals.
In particular, we analyze two languages and show that reasoning in knowledge bases consisting of both assertions on classes and assertions on individuals in these two languages can again be reduced to satisfiability checking of particular PDL formulae. Finally, in Section 6, we present examples of modeling with the powerful and decidable DLs introduced in the paper, and outline possible extensions of our work. For the sake of brevity all proofs are omitted.

Preliminaries

We base our work on two logics, namely the DL C and the PDL D, whose basic characteristics are recalled in this section. The formation rules of C are specified by the following abstract syntax:

C → ⊤ | ⊥ | A | ¬C | C₁ ⊓ C₂ | C₁ ⊔ C₂ | C₁ ⇒ C₂ | ∃R.C | ∀R.C
R → P | R₁ ⊔ R₂ | R₁ ∘ R₂ | R* | id(C)

where A denotes an atomic concept, C (possibly with subscript) denotes a concept, P denotes an atomic role, and R (possibly with subscript) denotes a role. The semantics of concepts is the usual one: an interpretation I with domain Δ^I interprets concepts as subsets of Δ^I and roles as binary relations over Δ^I, in such a way that the meaning of the constructs is preserved (for example, (C₁ ⇒ C₂)^I = {d ∈ Δ^I | d ∉ C₁^I or d ∈ C₂^I}, where C^I denotes the set of elements of Δ^I assigned to C by I). Note that C is a very expressive language, comprising the constructs for union of roles R₁ ⊔ R₂, chaining of roles R₁ ∘ R₂, transitive closure of roles R*, and the identity role id(C) projected on C. A C-intensional knowledge base (C-TBox) is defined as a finite set K of inclusion assertions of the form C₁ ⊑ C₂, where C₁, C₂ are C-concepts. The assertion C₁ ⊑ C₂ is satisfied by an interpretation I if C₁^I ⊆ C₂^I, and I is a model of K if every assertion of K is satisfied by I. A TBox K logically implies an assertion C₁ ⊑ C₂, written K ⊨ C₁ ⊑ C₂, if C₁ ⊑ C₂ is satisfied by every model of K.
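To make the semantics above concrete, the following toy evaluator computes the extension of a C-concept in a given finite interpretation. The tuple encoding of concepts and roles, and all function names, are illustrative choices of ours, not notation from the paper.

```python
# Toy evaluator for the concept language C over a finite interpretation I.
# Concepts/roles are nested tuples, e.g. ("and", C1, C2), ("some", R, C).
# I is a dict with a domain, atomic-concept extensions, and role extensions.

def eval_concept(c, I):
    dom, conc = I["domain"], I["concepts"]
    if c == "top":
        return set(dom)
    if c == "bot":
        return set()
    if isinstance(c, str):                       # atomic concept A
        return set(conc.get(c, set()))
    op = c[0]
    if op == "not":                              # complement
        return set(dom) - eval_concept(c[1], I)
    if op == "and":
        return eval_concept(c[1], I) & eval_concept(c[2], I)
    if op == "or":
        return eval_concept(c[1], I) | eval_concept(c[2], I)
    if op == "implies":                          # C1 => C2
        return (set(dom) - eval_concept(c[1], I)) | eval_concept(c[2], I)
    if op == "some":                             # exists R.C
        rc, cc = eval_role(c[1], I), eval_concept(c[2], I)
        return {d for d in dom if any((d, e) in rc for e in cc)}
    if op == "all":                              # forall R.C
        rc, cc = eval_role(c[1], I), eval_concept(c[2], I)
        return {d for d in dom if all(e in cc for (d2, e) in rc if d2 == d)}
    raise ValueError(c)

def eval_role(r, I):
    if isinstance(r, str):                       # atomic role P
        return set(I["roles"].get(r, set()))
    op = r[0]
    if op == "union":
        return eval_role(r[1], I) | eval_role(r[2], I)
    if op == "chain":                            # R1 o R2: relational composition
        r1, r2 = eval_role(r[1], I), eval_role(r[2], I)
        return {(a, c) for (a, b) in r1 for (b2, c) in r2 if b == b2}
    if op == "star":                             # reflexive-transitive closure
        closure = {(d, d) for d in I["domain"]} | eval_role(r[1], I)
        changed = True
        while changed:
            new = {(a, c) for (a, b) in closure for (b2, c) in closure if b == b2}
            changed = not new <= closure
            closure |= new
        return closure
    if op == "id":                               # id(C): identity projected on C
        return {(d, d) for d in eval_concept(r[1], I)}
    raise ValueError(r)
```

For instance, over a three-element chain 1 → 2 → 3 under role P with A = {1}, the concept ∃P*.A evaluates to {1}, since only 1 has a P*-successor in A.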
As pointed out in (Schild 1991), there is a direct correspondence between C and a PDL, here called D, whose syntax is as follows:

φ → true | false | A | ¬φ | φ₁ ∧ φ₂ | φ₁ ∨ φ₂ | φ₁ ⇒ φ₂ | ⟨r⟩φ | [r]φ
r → P | r₁ ∪ r₂ | r₁ ; r₂ | r* | φ?

where A denotes a propositional letter, φ (possibly with subscript) denotes a formula, P denotes an atomic program, and r (possibly with subscript) denotes a program. The semantics of D is based on the notion of structure, which is defined as a triple M = (S, {R_P}, Π), where S denotes a set of states, {R_P} is a family of binary relations over S, such that each atomic program P is given a meaning through R_P, and Π is a mapping from S to propositional letters such that Π(s) determines the letters that are true in the state s. Given M, the family {R_P} can be extended in the obvious way so as to include, for every program r, the corresponding relation R_r (for example, R_{r₁;r₂} is the composition of R_{r₁} and R_{r₂}). For this reason, we often denote a structure by (S, {R_r}, Π), where {R_r} includes a binary relation for every program (atomic or non-atomic). A structure M is called a model of a formula φ if there exists a state s in M such that M, s ⊨ φ. A formula φ is satisfiable if there exists a model of φ, unsatisfiable otherwise. The correspondence between C and D is realized through a mapping δ from C-concepts to D-formulae, and from C-roles to D-programs. The mapping δ maps the constructs of C in the obvious way. For example:

δ(A) = A            δ(∃R.C) = ⟨δ(R)⟩δ(C)
δ(P) = P            δ(R₁ ⊔ R₂) = δ(R₁) ∪ δ(R₂)
δ(¬C) = ¬δ(C)       δ(R₁ ∘ R₂) = δ(R₁) ; δ(R₂)
δ(R*) = δ(R)*       δ(id(C)) = δ(C)?

In the rest of this section, we introduce several notions and notations that will be used in the sequel. Some of them are concerned with extensions of D that include the construct r⁻, denoting the converse of a program r (see Section 3).
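The mapping δ is purely structural, so it can be sketched as a one-pass recursive translation. The tuple encoding below is an illustrative choice of ours (it is not notation from the paper); each branch mirrors one of the clauses recalled above.

```python
# A sketch of the mapping delta from C-concepts/roles to D-formulae/programs.
# Concepts: ("not", C), ("and"/"or"/"implies", C1, C2), ("some"/"all", R, C).
# Roles: atomic str, ("union", R1, R2), ("chain", R1, R2), ("star", R), ("id", C).

def delta(x):
    if isinstance(x, str):                 # atomic concept A or atomic role P
        return x
    op = x[0]
    if op == "not":                        # delta(~C) = ~delta(C)
        return ("not", delta(x[1]))
    if op in ("and", "or", "implies"):     # Boolean constructs map homomorphically
        return (op, delta(x[1]), delta(x[2]))
    if op == "some":                       # delta(exists R.C) = <delta(R)>delta(C)
        return ("diamond", delta(x[1]), delta(x[2]))
    if op == "all":                        # delta(forall R.C) = [delta(R)]delta(C)
        return ("box", delta(x[1]), delta(x[2]))
    if op == "union":                      # delta(R1 u R2) = delta(R1) U delta(R2)
        return ("union", delta(x[1]), delta(x[2]))
    if op == "chain":                      # delta(R1 o R2) = delta(R1); delta(R2)
        return ("seq", delta(x[1]), delta(x[2]))
    if op == "star":                       # delta(R*) = delta(R)*
        return ("star", delta(x[1]))
    if op == "id":                         # delta(id(C)) = delta(C)?
        return ("test", delta(x[1]))
    raise ValueError(x)
```

For example, the concept ∃P*.A translates to the D-formula ⟨P*⟩A, i.e. ("diamond", ("star", "P"), "A") in this encoding.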
The Fisher-Ladner closure of a D-formula Φ, denoted CL(Φ), is the least set F such that Φ ∈ F and such that (we assume ∨, ⇒, [·] to be expressed by means of ¬, ∧, ⟨·⟩ as usual):

φ₁ ∧ φ₂ ∈ F  ⇒  φ₁, φ₂ ∈ F,
¬φ ∈ F  ⇒  φ ∈ F,
⟨r⟩φ ∈ F  ⇒  φ ∈ F,
⟨r₁ ; r₂⟩φ ∈ F  ⇒  ⟨r₁⟩⟨r₂⟩φ ∈ F,
⟨r₁ ∪ r₂⟩φ ∈ F  ⇒  ⟨r₁⟩φ, ⟨r₂⟩φ ∈ F,
⟨r*⟩φ ∈ F  ⇒  ⟨r⟩⟨r*⟩φ ∈ F,
⟨φ'?⟩φ ∈ F  ⇒  φ' ∈ F.

Note that the size of CL(Φ) is linear with respect to the size of Φ. The notion of Fisher-Ladner closure can be easily extended to formulae of other PDLs. We introduce the notion of path in a structure M, which extends the one of trajectory defined in (Ben-Ari, Halpern, & Pnueli 1982) in order to deal with the converse of atomic programs. A path in a structure M is a sequence (s₀, …, s_q) of states of M, such that (s_{i−1}, s_i) ∈ R_a for some a = P | P⁻, where i = 1, …, q. The length of (s₀, …, s_q) is q. We inductively define the set of paths Paths(r) of a program r in a structure M, as follows (we assume, without loss of generality, that in r all occurrences of the converse operator are moved all the way in):

Paths(a) = R_a  (a = P | P⁻),
Paths(r₁ ∪ r₂) = Paths(r₁) ∪ Paths(r₂),
Paths(r₁ ; r₂) = {(s₀, …, s_u, …, s_q) | (s₀, …, s_u) ∈ Paths(r₁) and (s_u, …, s_q) ∈ Paths(r₂)},
Paths(r*) = {(s) | s ∈ S} ∪ (⋃_{i>0} Paths(r^i)),
Paths(φ'?) = {(s) | M, s ⊨ φ'}.

We say that a path (s₀) in M satisfies a formula φ which is not of the form ⟨r⟩φ', if M, s₀ ⊨ φ. We say that a path (s₀, …, s_q) in M satisfies a formula φ of the form ⟨r₁⟩⋯⟨r_l⟩φ', where φ' is not of the form ⟨r'⟩φ'', if M, s_q ⊨ φ' and (s₀, …, s_q) ∈ Paths(r₁ ; … ; r_l). Finally, if a denotes the atomic program P (resp. the converse of an atomic program, P⁻), then we write a⁻ to denote P⁻ (resp. P).

Functional restrictions

In this section, we study an extension of C, called CIF, which is obtained from C by adding both the role construct R⁻ and the concept construct (≤ 1 a), where a = P | P⁻.
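The seven closure clauses given above for CL(Φ) translate directly into a small fixpoint computation. The sketch below uses an illustrative tuple encoding of D-formulae and programs (not the paper's notation); it implements exactly the clauses, with ∨, ⇒, [·] assumed to be already expressed through ¬, ∧, ⟨·⟩.

```python
# A sketch of the Fisher-Ladner closure CL(phi) for the PDL D.
# Formulae: propositional letters as str, ("not", f), ("and", f1, f2),
# ("dia", r, f).  Programs: atomic str, ("union", r1, r2), ("seq", r1, r2),
# ("star", r), ("test", f).

def fl_closure(phi):
    closure = set()

    def add(f):
        if f in closure:
            return
        closure.add(f)
        op = f[0] if isinstance(f, tuple) else None
        if op == "and":                       # f1 /\ f2 in F => f1, f2 in F
            add(f[1]); add(f[2])
        elif op == "not":                     # ~f in F => f in F
            add(f[1])
        elif op == "dia":
            r, g = f[1], f[2]
            add(g)                            # <r>g in F => g in F
            if isinstance(r, tuple):
                if r[0] == "seq":             # <r1;r2>g => <r1><r2>g
                    add(("dia", r[1], ("dia", r[2], g)))
                elif r[0] == "union":         # <r1 U r2>g => <r1>g, <r2>g
                    add(("dia", r[1], g)); add(("dia", r[2], g))
                elif r[0] == "star":          # <r*>g => <r><r*>g
                    add(("dia", r[1], f))
                elif r[0] == "test":          # <f'?>g => f' in F
                    add(r[1])

    add(phi)
    return closure
```

On Φ = ⟨P*⟩A the closure is {⟨P*⟩A, A, ⟨P⟩⟨P*⟩A}, of size linear in Φ as noted above.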
The meaning of the two constructs in an interpretation I is as follows:

(R⁻)^I = {(d₁, d₂) | (d₂, d₁) ∈ R^I},
(≤ 1 a)^I = {d ∈ Δ^I | there exists at most one d' such that (d, d') ∈ a^I}.

The corresponding PDL will be called DIF, and is obtained from D by adding the programs of the form r⁻ and the formulae of the form (≤ 1 a), where, again, a = P | P⁻. The meaning of the two constructs in DIF can be easily derived from the semantics of CIF. Observe that the r⁻ construct allows one to denote the converse of a program, and the (≤ 1 a) construct allows the notion of local determinism for both atomic programs and their converse to be represented in PDL. With the latter construct, we can denote states from which the running of an atomic program (symmetrically, the converse of an atomic program) is deterministic, i.e., it leads to at most one state. It is easy to see that this possibility allows one to impose the so-called global determinism too, i.e., that certain atomic programs and converses of atomic programs are globally deterministic. Therefore, DIF subsumes the logic studied in (Vardi & Wolper 1986), called Converse Deterministic PDL, in which atomic programs (but not their converse) are globally deterministic. From the point of view of DLs, as mentioned in the Introduction, the presence of inverse roles and of functional restrictions on both atomic roles and their inverses makes CIF one of the most expressive DLs among those studied in the literature. The correspondence between CIF and DIF is realized through the mapping δ described in Section 2, suitably extended in order to deal with inverse roles and functional restrictions. From δ we easily obtain the mapping δ⁺ from CIF-TBoxes to DIF-formulae. In particular, if K = {K₁, …, K_n} is a TBox in CIF, and P₁, …
, P_m are all atomic roles appearing in K, then (we abbreviate (P₁ ∪ ⋯ ∪ P_m ∪ P₁⁻ ∪ ⋯ ∪ P_m⁻)* by u, for notational convenience):

δ⁺(K) = [u](δ⁺({K₁}) ∧ ⋯ ∧ δ⁺({K_n})),
δ⁺({C₁ ⊑ C₂}) = (δ(C₁) ⇒ δ(C₂)).

Observe that δ⁺(K) exploits the power of program constructs (union, converse, and transitive closure) and the "connected model property" of PDLs in order to represent inclusion assertions of DLs. Based on this correspondence, we can state the following: if K is a TBox, then K ⊨ C₁ ⊑ C₂ (where atomic concepts and roles in C₁, C₂ are also in K) iff the DIF-formula δ⁺(K) ∧ δ(C₁) ∧ δ(¬C₂) is unsatisfiable. Note that the size of the above formula is polynomial with respect to the size of K, C₁, and C₂. Let DI be the PDL obtained from D by adding the r⁻ construct only. We are going to show that, for any DIF-formula Φ, there is a DI-formula, denoted γ(Φ), whose size is polynomial with respect to the size of Φ, and such that Φ is satisfiable iff γ(Φ) is satisfiable. Since satisfiability in DI is EXPTIME-complete, this ensures us that satisfiability in DIF, and therefore logical implication for CIF-TBoxes, are EXPTIME-complete too.³ In what follows, we assume without loss of generality that Φ is in negation normal form (i.e., negation is pushed inside as much as possible). We define the DI-counterpart γ(Φ) of a DIF-formula Φ as the conjunction of two formulae, γ(Φ) = γ₁(Φ) ∧ γ₂(Φ):

γ₁(Φ) is obtained from the original formula Φ by replacing each (≤ 1 a) with a new propositional letter A(≤1 a), and each ¬(≤ 1 a) with (⟨a⟩H(≤1 a)) ∧ (⟨a⟩¬H(≤1 a)), where H(≤1 a) is, again, a new propositional letter.

γ₂(Φ) = γ₂¹ ∧ ⋯ ∧ γ₂^k, with one conjunct γ₂^i of the form (we use the abbreviation u for (P₁ ∪ ⋯ ∪ P_m ∪ P₁⁻ ∪ ⋯ ∪ P_m⁻)*, where P₁, …
, P_m are all the atomic roles appearing in Φ):

[u]((A(≤1 a) ∧ ⟨a⟩φ) ⇒ [a]φ)

for every A(≤1 a) occurring in γ₁(Φ) and every φ ∈ CL(γ₁(Φ)).
³Indeed, ¬(δ⁺(K) ∧ δ(C₁) ∧ δ(¬C₂)) is the DIF-formula corresponding to the implication problem K ⊨ C₁ ⊑ C₂ for CIF-TBoxes.
Intuitively, γ₂(Φ) constrains the models M of γ(Φ) so that: for every state s of M, if A(≤1 a) holds in s, and there is an a-transition from s to t₁ and an a-transition from s to t₂, then t₁ and t₂ are equivalent with respect to the formulae in CL(γ₁(Φ)). We show that this allows us to actually collapse t₁ and t₂ into a single state. Note that the size of γ(Φ) is polynomial with respect to the size of Φ. To prove that a DIF-formula is satisfiable iff its DI-counterpart is, we proceed as follows. Given a model M = (S, {R_r}, Π) of γ(Φ), we build a tree-like structure M_t = (S_t, {R_r^t}, Π^t) such that M_t, root ⊨ γ(Φ) (root ∈ S_t is the root of the tree structure), and the local determinism requirements are satisfied. From such an M_t, one can easily derive a model M_Φ of Φ. In order to construct M_t we make use of the following notion. For each state s in M, we denote by ES(s) the smallest set of states in M such that:
• s ∈ ES(s), and
• if s' ∈ ES(s), then for every s'' such that (s', s'') ∈ R_{a ; A(≤1 a⁻)? ; a⁻}, ES(s'') ⊆ ES(s).
The set ES(s) is the set of states of M that are to be collapsed into a single state of M_t. Note that, by γ₂(Φ), all the states in ES(s) satisfy the same formulae in CL(γ₁(Φ)). The construction of M_t is done in three stages.
Stage 1. Let ⟨a₁⟩ψ₁, …, ⟨a_h⟩ψ_h be all the formulas of the form ⟨a⟩φ' included in CL(Φ).⁴ We consider an infinite h-ary tree T whose root is root and such that every node x has h children child_i(x), one for each formula ⟨a_i⟩ψ_i (we write father(x) to denote the father of a node x).
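The conjuncts of γ₂ can be generated mechanically from the determinism letters and the closure set. The sketch below does this under an illustrative tuple encoding of D-formulae (names and encoding are ours, not the paper's): one conjunct [u]((A(≤1 a) ∧ ⟨a⟩φ) ⇒ [a]φ) per letter and per closure formula.

```python
# A sketch of the functionality axioms gamma_2.
# det_letters maps each propositional letter A_{<=1 a} to its program a;
# closure is (an iterable over) CL(gamma_1(Phi)); u is the universal program.

def gamma2(u, det_letters, closure):
    conjuncts = []
    for letter, a in det_letters.items():
        for phi in closure:
            # [u]((A_{<=1 a} /\ <a>phi) -> [a]phi)
            ante = ("and", letter, ("dia", a, phi))
            conjuncts.append(("box", u, ("implies", ante, ("box", a, phi))))
    return conjuncts
```

Since there is one conjunct per (letter, closure-formula) pair and CL(γ₁(Φ)) is linear in Φ, the resulting γ(Φ) stays polynomial in the size of Φ, as claimed above.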
We define two partial mappings m and l: m maps nodes of T to states of M, and l is used to label the arcs of T by atomic programs, converses of atomic programs, or a special symbol 'undefined'. For the definition of m and l, we proceed level by level. Let s ∈ S be any state such that M, s ⊨ γ(Φ). We put m(root) = s, and for all arcs corresponding to a formula ⟨a_i⟩ψ_i such that M, s ⊨ ⟨a_i⟩ψ_i we put l((root, child_i(root))) = a_i. Suppose we have defined m and l up to level k, let x be a node at level k + 1, and let l((father(x), x)) = a_j. Then M, m(father(x)) ⊨ ⟨a_j⟩ψ_j, and therefore there exists a path (s₀, s₁, …, s_q), with s₀ = m(father(x)), satisfying ⟨a_j⟩ψ_j. Among the states in ES(s₁) we choose a state t such that there exists a minimal path (i.e., a path with minimal length) from t satisfying ψ_j. We put m(x) = t and for every ⟨a_i⟩ψ_i ∈ CL(Φ) such that M, t ⊨ ⟨a_i⟩ψ_i we put l((x, child_i(x))) = a_i.
Stage 2. We change the labelling l, proceeding again level by level. If M, m(root) ⊨ A(≤1 a), then for each arc (root, child_i(root)) labelled a, except for one randomly chosen, we put l((root, child_i(root))) = 'undefined'.
⁴Notice that the formulas ψ_i may be of the form ⟨r⟩φ, and that ¬ψ_i ∈ CL(Φ).
Assume we have modified l up to level k, and let x be a node at level k + 1. Suppose M, m(x) ⊨ A(≤1 a). Then if l((father(x), x)) = a⁻, for each arc (x, child_i(x)) labelled a, we put l((x, child_i(x))) = 'undefined'; otherwise (i.e., l((father(x), x)) ≠ a⁻) we put l((x, child_i(x))) = 'undefined' for every arc (x, child_i(x)) labelled a, except for one randomly chosen.
Stage 3. For each P, let R'_P = {(x, y) | l((x, y)) = P or l((y, x)) = P⁻}. We define the structure M_t = (S_t, {R_P^t}, Π^t) as follows: S_t = {x ∈ T | (root, x) ∈ (⋃_P (R'_P ∪ R'_P⁻¹))*}, R_P^t = R'_P ∩ (S_t × S_t), and Π^t(x) = Π(m(x)) for all x ∈ S_t. From {R_P^t} we get all {R_r^t} as usual.
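The set ES(s) used throughout the construction above is a closure under a single composed step: from s', take an a-transition to a state t where A(≤1 a⁻) holds, then an a⁻-transition to s''. A minimal Python sketch of this closure (data-structure names are illustrative, not from the paper):

```python
# A sketch of ES(s): the smallest set containing s and closed under the
# relation R_{a ; A(<=1 a-)? ; a-}.
# rel: dict mapping each atomic program or converse a to its set of state
#      pairs; conv: dict pairing each a with its converse a-;
# det_holds(t, a): does the determinism letter A(<=1 a) hold at state t?

def es(s, rel, conv, det_holds):
    result, frontier = {s}, [s]
    while frontier:
        x = frontier.pop()
        for a, pairs in rel.items():
            for (x1, t) in pairs:                 # a-step x -> t
                if x1 != x or not det_holds(t, conv[a]):
                    continue
                for (t1, y) in rel[conv[a]]:      # a- -step t -> y
                    if t1 == t and y not in result:
                        result.add(y)             # ES(y) is folded into ES(s)
                        frontier.append(y)
    return result
```

For instance, if states 1 and 2 both reach a state 10 via P, and A(≤1 P⁻) holds at 10, then 1 and 2 fall into the same ES-set and are collapsed into one node of M_t, which is exactly what the functionality axioms license.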
The basic property of M_t is stated in the following lemma.
Lemma 1. Let Φ be a DIF-formula, M a model of γ(Φ), and M_t a structure derived from M as specified above. Then, for every formula φ ∈ CL(γ₁(Φ)) and every x ∈ S_t, M_t, x ⊨ φ iff M, m(x) ⊨ φ.
Once we have obtained M_t, we can define a new structure M_Φ = (S_Φ, {R_r^Φ}, Π_Φ) where S_Φ = S_t, {R_r^Φ} = {R_r^t}, and Π_Φ(x) = Π^t(x) − {A(≤1 a), H(≤1 a)} for each x ∈ S_Φ. The structure M_Φ has the following property.
Lemma 2. Let Φ be a DIF-formula, and let M_t, M_Φ be derived from a model M of γ(Φ) as specified above. Then M_t, root ⊨ γ₁(Φ) implies M_Φ, root ⊨ Φ.
Considering that every model of Φ can be easily transformed into a model of γ(Φ), we can state the main result of this section.
Theorem 3. A DIF-formula Φ is satisfiable iff its DI-counterpart γ(Φ) is satisfiable.
Corollary 4. Satisfiability in DIF and logical implication for CIF-TBoxes are EXPTIME-complete problems.

Number restrictions

In this section, we briefly outline a method that allows us to polynomially encode number restrictions into CIF. Let us call CIN the language obtained from CIF by adding the constructs (≥ n a) and (≤ n a) for number restrictions, where n is a non-negative integer, and a = P | P⁻. The meaning of (≥ n a) (resp. (≤ n a)) in an interpretation I is given by the set of individuals that are related to at least (resp. at most) n instances of a. Let K be a CIN-TBox. We first introduce for each atomic role P in K a new primitive concept A_P and two atomic roles F_P and G_P, imposing that each individual in the class A_P is related to exactly one instance of F_P and G_P. In this way the original P can be represented by means of the role F_P ∘ id(A_P) ∘ G_P. Then we replace F_P by f_P ∘ id(A_P) ∘ (f'_P ∘ id(A_P))* and G_P by g_P ∘ id(A_P) ∘ (g'_P ∘ id(A_P))*, making the atomic roles f_P, f'_P, g_P, g'_P and their inverses globally functional, and requiring that no individual is linked to others by means of both f_P and f'_P, or g_P and g'_P.
In this way the concept (≤ n P) can be obtained simply by imposing that there are at most n states in the chain f_P ∘ id(A_P) ∘ (f'_P ∘ id(A_P))*, and the concept (≤ n P⁻) can be obtained by imposing that there are at most n states in the chain g_P ∘ id(A_P) ∘ (g'_P ∘ id(A_P))*. These constraints are easily expressible in CIF. Analogous considerations hold both for (≥ n a) and for qualified number restrictions, where a qualified number restriction is a concept of the form (≤ n a.C) (resp. (≥ n a.C)), which is interpreted as the set of individuals that are related to at most (resp. at least) n instances of C by means of a.

Membership assertions

In this section, we study reasoning involving knowledge on single individuals expressed in terms of membership assertions. Given an alphabet O of symbols for individuals, a membership assertion is of one of the following forms:

C(α),    R(α₁, α₂)

where C is a concept, R is a role, and α, α₁, α₂ belong to O. The semantics of such assertions is stated as follows. An interpretation I is extended so as to assign to each α ∈ O an element α^I ∈ Δ^I in such a way that different elements are assigned to different symbols in O. Then, I satisfies C(α) if α^I ∈ C^I, and I satisfies R(α₁, α₂) if (α₁^I, α₂^I) ∈ R^I. An extensional knowledge base (ABox) M is a finite set of membership assertions, and an interpretation I is called a model of M if I satisfies every assertion in M. A knowledge base is a pair B = (K, M), where K is a TBox and M is an ABox. An interpretation I is called a model of B if it is a model of both K and M. B is satisfiable if it has a model, and B logically implies an assertion β (B ⊨ β), where β is either an inclusion or a membership assertion, if every model of B satisfies β. Since logical implication can be reformulated in terms of unsatisfiability (e.g., if β = C(α), then B ⊨ β iff B ∪ {¬C(α)} is unsatisfiable), we only need a procedure for checking satisfiability of a knowledge base.
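The satisfaction conditions for membership assertions just stated can be checked directly once a candidate interpretation is written out. The toy checker below (all names and the set-based representation are illustrative, not from the paper) also enforces the unique-name condition that different symbols denote different elements.

```python
# A toy check of membership-assertion satisfaction in a finite interpretation:
# I satisfies C(a) if a^I is in C^I, and R(a1, a2) if (a1^I, a2^I) is in R^I.
# ind_map:     symbol -> domain element (the extension of alpha^I)
# concept_ext: concept name -> set of elements
# role_ext:    role name -> set of element pairs

def satisfies(assertion, ind_map, concept_ext, role_ext):
    if len(assertion) == 2:                  # C(a)
        c, a = assertion
        return ind_map[a] in concept_ext[c]
    r, a1, a2 = assertion                    # R(a1, a2)
    return (ind_map[a1], ind_map[a2]) in role_ext[r]

def is_model_of_abox(abox, ind_map, concept_ext, role_ext):
    # different symbols must be assigned different elements
    if len(set(ind_map.values())) != len(ind_map):
        return False
    return all(satisfies(b, ind_map, concept_ext, role_ext) for b in abox)
```

In this setting the reduction mentioned above is immediate: B ⊨ C(α) fails exactly when some model of B makes α an instance of ¬C, i.e. when B ∪ {¬C(α)} is satisfiable.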
It is worth noting that, from the point of view of PDLs, an ABox is a sort of specification of partial computations, and that no technique is known for integrating such a form of specification with PDL formulae. We study the satisfiability problem for knowledge bases expressed in two extensions of the basic language C. The first extension regards the language CF, obtained from C by adding the construct (≤ 1 P). We show that satisfiability of a CF-knowledge base B can be polynomially reduced to satisfiability of a DF-formula φ(B), where DF is the PDL obtained from D by including the construct (≤ 1 P). We start by defining φ₀(B) to be the DF-formula resulting from the conjunction of the following formulae (there is a new letter A_i in φ₀(B) for each individual α_i in B):
- for every individual α_i: A_i ⇒ ⋀_{j≠i} ¬A_j;
- for every membership assertion of the form C(α_i): A_i ⇒ δ(C) (δ is the mapping introduced in Section 2);
- for every membership assertion of the form R(α_i, α_j): A_i ⇒ ⟨δ(R)⟩A_j;
- for every inclusion assertion C₁ ⊑ C₂ in K: δ(C₁) ⇒ δ(C₂).
Let create be a new atomic program, and u an abbreviation for (P₁ ∪ ⋯ ∪ P_m)*, where P₁, …, P_m are all the atomic roles in B. We define the DF-counterpart of B as φ(B) = φ₁(B) ∧ φ₂(B), where:
• φ₁(B) = φ₁¹(B) ∧ ⋯ ∧ φ₁ⁿ(B) ∧ [create]([u]φ₀(B)), with one φ₁ⁱ(B) = ⟨create⟩A_i for each individual α_i in B.
• φ₂(B) is the conjunction of the following formulae:
- For all A_i, for all φ ∈ CL([u]φ₀(B)):
[create](⟨u⟩(A_i ∧ φ) ⇒ [u](A_i ⇒ φ)).  (1)
- For all A_i, for all φ ∈ CL([u]φ₀(B)), for all programs r ∈ CL([u]φ₀(B)):
[create](⟨u⟩(A_i ∧ ⟨r_¬ind⟩φ) ⇒ [u](A_i ⇒ ⟨r_¬ind⟩φ)),  (2)
where r_¬ind denotes the program obtained from the program r by chaining the test (⋀_j ¬A_j)? after each atomic program in r.
- For all A_i, A_j, for all programs r' ∈ Pre(r), r ∈ CL([u]φ₀(B)):
[create](⟨u⟩(A_i ∧ ⟨r'_¬ind⟩A_j) ⇒ [u](A_i ⇒ ⟨r'_¬ind⟩A_j)),  (3)
where Pre(r), for a program r, is defined inductively as follows (ε is the empty sequence of programs):
Pre(P) = {ε};
Pre(r₁ ; r₂) = {r₁ ; r₂' | r₂' ∈ Pre(r₂)};
Pre(r₁ ∪ r₂) = Pre(r₁) ∪ Pre(r₂);
Pre(r*) = {r* ; r' | r' ∈ Pre(r)};
Pre(φ?) = {ε}.⁵
The role of (1), (2) and (3) is to allow us to collapse all the states where a certain A_i holds, so as to be able to transform them into a single state corresponding to the individual α_i. In the following we call states t of a model M of φ(B) individual-aliases of an individual α_i iff M, t ⊨ A_i. The formulae (2) and (3) allow us to prove the technical lemma below.
Lemma 5. Let M be a model of φ(B), let t be an individual-alias of α_i, and let ⟨r⟩φ ∈ CL([u]φ₀(B)). If there is a path from t that satisfies ⟨r⟩φ, containing N individual-aliases t₁, …, t_N of α₁, …, α_N respectively, then from every individual-alias t' of α_i in M, there is a path that satisfies ⟨r⟩φ, containing N individual-aliases t₁', …, t_N' of α₁, …, α_N (in the same order as t₁, …, t_N).
⁵Notice that ⟨ε⟩φ ≡ φ and [ε]φ ≡ φ.
Given a model M = (S, {R_r}, Π) of φ(B), we can obtain a new model M' = (S', {R_r'}, Π') of φ(B) in which there is exactly one individual-alias for each individual in B. Let s ∈ S be such that M, s ⊨ φ(B). For every individual α_i, we randomly choose, among its individual-aliases x such that (s, x) ∈ R_create, a distinguished one denoted by s_{α_i}. We define a set of relations {R_P'} ∪ {R_create'} as follows:
R_create' = {(s, s_{α_i}) ∈ R_create | α_i is an individual},
R_P' = (R_P − {(x, y) ∈ R_P | M, y ⊨ A_j for some A_j}) ∪ {(x, s_{α_j}) | (x, y) ∈ R_P and M, y ⊨ A_j for some A_j}.
The structure M' is defined as: S' = {x ∈ S | (s, x) ∈ ((⋃_P R_P') ∪ R_create')*}, R_P'' = R_P' ∩ (S' × S'), R_create'' = R_create' ∩ (S' × S'), and Π'(x) = Π(x) for each state x ∈ S' (from {R_P'} and R_create' we get {R_r'} as usual). Observe that the transformation from M to M' does not change the number of "out-going edges" for those states of M which are also states of M'. The following two lemmas concern M'.
Lemma 6. Let M be a model of φ(B), and M' a structure derived from M as specified above. Then for every formula φ ∈ CL(φ₁(B)), for every state x of M': M, x ⊨ φ iff M', x ⊨ φ.
Lemma 7. Let M be a model of φ(B) such that M, s ⊨ φ(B), and let M' be a structure derived from M as specified above. Then M', s ⊨ φ(B).
We can now state the main theorem on reasoning in CF-knowledge bases.
Theorem 8. A CF-knowledge base B is satisfiable iff its DF-counterpart φ(B) is satisfiable.
Corollary 9. Satisfiability and logical implication for CF-knowledge bases (TBox and ABox) are EXPTIME-complete problems.
The second extension regards the language CI, obtained from C by adding the construct for inverse of roles. Analogously to the case of CF, satisfiability of a CI-knowledge base B can be polynomially reduced to satisfiability of a DI-formula η(B), where DI is the PDL obtained from D by allowing converse programs. Let η₀(B) be a DI-formula defined similarly to φ₀(B) in the case of CF, create a new atomic program, and u an abbreviation for (P₁ ∪ ⋯ ∪ P_m ∪ P₁⁻ ∪ ⋯ ∪ P_m⁻)*, where P₁, …, P_m are all the atomic roles in B. We define the DI-counterpart of B as η(B) = η₁(B) ∧ η₂(B), where:
• η₁(B) = η₁¹(B) ∧ ⋯ ∧ η₁ⁿ(B) ∧ [create]([u]η₀(B)), with each η₁ⁱ(B) = ⟨create⟩A_i for each individual α_i in B.
• η₂(B) = η₂¹(B) ∧ ⋯ ∧ η₂^k(B), where we have one η₂ⁱ(B) of the form [create](⟨u⟩(A_i ∧ φ) ⇒ [u](A_i ⇒ φ)), for each A_i, and for each φ ∈ CL([u]η₀(B)).
(4)
Again, the role of (4) is to make all the states where a certain A_i holds equivalent, so as to be able to collapse them into a single state corresponding to the individual α_i. By reasoning similarly to the case of CF, we derive the result below.⁶
Theorem 10. A CI-knowledge base B is satisfiable iff its DI-counterpart η(B) is satisfiable.
Corollary 11. Satisfiability and logical implication for CI-knowledge bases (TBox and ABox) are EXPTIME-complete problems.
We remark that, in establishing the satisfiability of CF-knowledge bases, the satisfiability of CI-knowledge bases, and the satisfiability of CIF concepts, we resorted to a transformation of their models. Unfortunately the kind of transformation used in the first two cases cannot be composed with the one used in the latter. This results in the impossibility of extending the constructions carried out in this section to CIF-knowledge bases.

Discussion and conclusion

The work by Schild on the correspondence between DLs and PDLs provides an invaluable tool for devising decision procedures for very expressive DLs. In this paper we included into this correspondence notions such as functional restrictions on both atomic roles and their converse, number restrictions, and assertions on individuals, that typically arise in modeling structured knowledge. We made use of the correspondence to determine decision procedures and establish the decidability and the complexity of some of the most expressive DLs that have appeared in the literature. It is worth noticing that the PDLs defined in this paper are novel and of interest in their own right. Space limitations have prevented us from demonstrating the full power of the results presented. We mention here that they form the basis to derive suitable decision procedures both for extensions of CIF that include n-ary relations and qualified number restrictions, and for knowledge bases (TBox and ABox) based on CF extended with qualified number restrictions.
Moreover, some of these results can also be formulated in the setting of the μ-calculus, which has been used to model in a single framework terminological cycles interpreted according to least and greatest fixpoint semantics (Nebel 1991; Schild 1994; De Giacomo & Lenzerini 1994). In concluding the paper, we would like to show two salient examples of use of the powerful DLs introduced here. They concern the definition of concepts for the representation of lists and n-ary trees. Consider the following inductive definition of list: nil is a list; a node that has exactly one successor that is a list, is a list; nothing else is a list. This is equivalent to defining a list as a chain (of any finite length) of nodes that terminates with nil. Assuming node and nil to be concepts of our language, we can denote the concept list as (we use C₁ ≡ C₂ as a shorthand for C₁ ⊑ C₂, C₂ ⊑ C₁):

list ≡ ∃(id(node ⊓ (≤ 1 succ)) ∘ succ)*.nil

Similarly we can denote the class of (possibly infinite) n-ary trees as:

n-tree ≡ ∀child⁻.⊥ ⊓ ∀child*.(node ⊓ (≤ 1 child⁻) ⊓ (≤ n child))

which defines an n-tree as a node having no father and at most n children, and such that all descendants are nodes having one father and at most n children. Observe that, in order to fully capture the above concepts, we make use of inverse roles, functional restrictions on both atomic and inverse roles, and number restrictions.
⁶The proof is much simpler in this case, witness the absence of constraints analogous to (2) and (3) above.

References

Attardi, G., and Simi, M. 1981. Consistency and completeness of Omega, a logic for knowledge representation. In Proceedings of the International Joint Conference on Artificial Intelligence, 504-510.
Baader, F., and Hollunder, B. 1991. A terminological knowledge representation system with complete inference algorithm.
In Proceedings of the Workshop on Processing Declarative Knowledge, Lecture Notes in Artificial Intelligence, 67-86: Springer-Verlag.
Ben-Ari, M.; Halpern, J. Y.; and Pnueli, A. 1982. Deterministic propositional dynamic logic: Finite models, complexity, and completeness. Journal of Computer and System Sciences, 25:402-437.
Borgida, A., and Patel-Schneider, P. F. 1993. A semantics and complete algorithm for subsumption in the CLASSIC description logic. Forthcoming.
Brachman, R. J., and Levesque, H. J. 1984. The tractability of subsumption in frame-based description languages. In Proceedings of the Fourth National Conference on Artificial Intelligence, 34-37.
Brachman, R. J.; McGuinness, D. L.; Patel-Schneider, P. F.; Alperin Resnick, L.; and Borgida, A. 1991. Living with CLASSIC: when and how to use a KL-ONE-like language. In John F. Sowa, editor, Principles of Semantic Networks, 401-456: Morgan Kaufmann.
Brachman, R. J., and Schmolze, J. G. 1985. An overview of the KL-ONE knowledge representation system. Cognitive Science, 9(2):171-216.
Buchheit, M.; Donini, F. M.; and Schaerf, A. 1993. Decidable reasoning in terminological knowledge representation systems. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, 704-709.
De Giacomo, G., and Lenzerini, M. 1994. Concept language with number restrictions and fixpoints, and its relationship with mu-calculus. In Proceedings of the Eleventh European Conference on Artificial Intelligence.
Donini, F. M.; Hollunder, B.; Lenzerini, M.; Marchetti Spaccamela, A.; Nardi, D.; and Nutt, W. 1992. The complexity of existential quantification in concept languages. Artificial Intelligence, 2-3:309-327.
Donini, F. M.; Lenzerini, M.; Nardi, D.; and Nutt, W. 1991a. The complexity of concept languages. In Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning, 151-162.
Donini, F. M.; Lenzerini, M.; Nardi, D.; and Nutt, W. 1991b.
Tractable concept languages. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, 458-463.
Donini, F. M.; Lenzerini, M.; Nardi, D.; Nutt, W.; and Schaerf, A. 1992. Adding epistemic operators to concept languages. In Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, 342-353.
Kozen, D., and Tiuryn, J. 1990. Logics of programs. In Handbook of Theoretical Computer Science - Formal Models and Semantics, 789-840: Elsevier.
Levesque, H. J., and Brachman, R. J. 1987. Expressiveness and tractability in knowledge representation and reasoning. Computational Intelligence, 3:78-93.
MacGregor, R. 1991. Inside the LOOM description classifier. SIGART Bulletin, 2(3):88-92.
Nebel, B. 1988. Computational complexity of terminological reasoning in BACK. Artificial Intelligence, 34(3):371-383.
Nebel, B. 1990. Terminological reasoning is inherently intractable. Artificial Intelligence, 43:235-249.
Nebel, B. 1991. Terminological cycles: Semantics and computational properties. In John F. Sowa, editor, Principles of Semantic Networks, 331-361: Morgan Kaufmann.
Patel-Schneider, P. F. 1987. A hybrid, decidable, logic-based knowledge representation system. Computational Intelligence, 3(2):64-77.
Schild, K. 1991. A correspondence theory for terminological logics: Preliminary report. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, 466-471.
Schild, K. 1994. Terminological cycles and the propositional mu-calculus. In Proceedings of the Fourth International Conference on Knowledge Representation and Reasoning.
Schmidt-Schauß, M., and Smolka, G. 1991. Attributive concept descriptions with complements. Artificial Intelligence, 48(1):1-26.
Vardi, M. Y., and Wolper, P. 1986. Automata-theoretic techniques for modal logics of programs. Journal of Computer and System Sciences, 32:183-221.
Regression Based Causal Induction With Latent Variable Models
Lisa A. Ballesteros
Experimental Knowledge Systems Laboratory
University of Massachusetts/Amherst
Box 34610, Amherst, MA 01003-4610
balleste@cs.umass.edu
(413) 545-3616

Scientists largely explain observations by inferring causal relationships among measured variables. Many algorithms with various theoretical foundations have been developed for causal induction, e.g., (Spirtes, Glymour, & Scheines 1993; Pearl & Verma 1991), but it is widely believed that regression is ill-suited to the task of causal induction. Multiple regression techniques attempt to estimate the influence that regressors have on a dependent variable using the standardized regression coefficient, β. Assuming the relationship among variables is linear, β_YX measures the expected change in Y produced by a unit change in X with all other predictor variables held constant. Arguments against using regression methods for causal induction rest on the fact that the error in estimating β_YX can be large, particularly when unmeasured or latent variables account for the relationship between X and Y, or when X is a common cause of Y and another predictor (Mosteller & Tukey 1977; Spirtes, Glymour, & Scheines 1993). In fact, β may suggest X has a strong influence on Y when it has little or none. We have developed a regression-based causal induction algorithm called FBD (Cohen et al. 1994) which performs well in these situations. The heuristic that is primarily responsible for making FBD less sensitive to the above problems is the ω score. Let r_YX be the correlation between X and Y, and ω = (r_YX − β_YX)/r_YX. ω measures the proportion of r_YX not due to the direct effect of X on Y. If ω_YX exceeds a threshold, X is pruned from the set of candidate predictors. This threshold is set arbitrarily by the user, but we are exploring the use of clustering algorithms to set it by partitioning the ω values of the predictor variables. Spirtes et al.
describe four causal models (1993, p. 240) for which their studies showed regression methods performed poorly by always choosing predictors whose relationship to the dependent variable is mediated by latent variables or common causes. One model is reproduced in Figure 1. The difficulty with this model is that the error in the estimate of β for X2 may be large due to X2's relationship to X3 via the latent variable T1. To determine the susceptibility of FBD to latent variable effects, we tested the performance of FBD on latent variable models, and ran stepwise regressions as a control.(1) Twelve sets of coefficients for the structural equations for each of Spirtes's models were generated, as were data sets for each, having 100 to 1000 variates. Each sample was given to FBD and to MINITAB's stepwise procedure. Performance was measured by the number of times the algorithm incorrectly chose predictors related via latent variables and the number of times it chose correctly. FBD chose 88% of the correct predictors, while MINITAB chose 93% of them. On the other hand, FBD rejected variables whose relationships to the dependent variable were due to latent or common causes 82% of the time, while MINITAB rejected them only 25% of the time. Thirty-nine percent of FBD's rejections were due to w. Although MINITAB got a slightly higher hit rate for correct predictors than did FBD, FBD got fewer false positives. These results suggest that w makes FBD less susceptible to latent variable effects than standard regression techniques. FBD's ability to avoid the problems described above makes it a promising causal induction algorithm.

[Figure 1: Latent Variable Model]

(1) In comparison studies among FBD, Pearl's IC, and Spirtes's PC, FBD performed better on all of our measures of performance (Cohen et al. 1994; Gregory & Cohen 1994).

References
Cohen, P. R.; Ballesteros, L.; Gregory, D.; and St. Amant, R. 1994. Regression can build predictive causal models.
Submitted to the Tenth Annual Conference on Uncertainty in AI. Technical Report 94-15, Dept. of Computer Science, University of Massachusetts/Amherst.
Gregory, D., and Cohen, P. R. 1994. Two algorithms for inducing causal models from data. Submitted to Knowledge Discovery in Databases Workshop, Twelfth National Conference on Artificial Intelligence.
Mosteller, F., and Tukey, J. W. 1977. Data Analysis and Regression, A Second Course in Statistics. Addison-Wesley Publishing Company.
Pearl, J., and Verma, T. 1991. A statistical semantics for causation. Statistics and Computing 2:91-95.
Spirtes, P.; Glymour, C.; and Scheines, R. 1993. Causation, Prediction, and Search. Springer-Verlag.
From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.
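As a rough illustration of the w heuristic described above (this is not the actual FBD code; the function names and the synthetic data are invented for the example), the following sketch computes standardized regression coefficients for the two-predictor case in closed form and prunes any predictor whose w score exceeds a threshold:

```python
import random

def pearson_r(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def w_scores(x1, x2, y):
    """Standardized betas for two predictors, and the w score for each:
    w = (r_YX - beta_YX) / r_YX, i.e. the share of the correlation NOT
    accounted for by the direct effect of X on Y."""
    r1, r2 = pearson_r(y, x1), pearson_r(y, x2)
    r12 = pearson_r(x1, x2)
    # Closed-form standardized coefficients for the two-predictor case.
    b1 = (r1 - r2 * r12) / (1 - r12 ** 2)
    b2 = (r2 - r1 * r12) / (1 - r12 ** 2)
    return {"x1": (r1 - b1) / r1, "x2": (r2 - b2) / r2}

def prune(scores, threshold=0.5):
    """FBD-style pruning: drop predictors whose w exceeds the threshold."""
    return sorted(name for name, w in scores.items() if w > threshold)

# Synthetic example: x2 correlates with y only through the shared cause x1,
# so its direct (beta) contribution is near zero and its w score is near 1.
random.seed(0)
x1 = [random.gauss(0, 1) for _ in range(2000)]
x2 = [v + random.gauss(0, 0.5) for v in x1]
y = [v + random.gauss(0, 0.5) for v in x1]

scores = w_scores(x1, x2, y)
print(scores, prune(scores))
```

On this data the spuriously correlated predictor x2 gets w close to 1 and is pruned, while the genuine cause x1 gets w close to 0 and is kept.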
Guardian: A Prototype Intelligent Agent for Intensive-Care Monitoring
Barbara Hayes-Roth(1), Serdar Uckun(1), Jan Eric Larsson(1), David Gaba(2), Juliana Barr(2), Jane Chien(2)
(1) Knowledge Systems Laboratory, Stanford University, 701 Welch Road Bldg., Palo Alto, CA 94304
(2) Stanford University School of Medicine and Department of Veterans Affairs, 3801 Miranda Ave., Palo Alto, CA 94304
{hayes-roth, uckun, larsson, gaba}@KSL.Stanford.Edu

A surgical intensive care unit (ICU) is a challenging monitoring environment. The multitude of monitored variables, the high frequency of alarms, and the severity of likely complications and emergencies can overload the cognitive skills of even experienced clinicians. ICU monitoring is also complicated by changes in clinical context. Over the course of a few days, a patient may evolve from a high-vigilance immediate post-operative state to a convalescent state that involves entirely different sets of monitoring principles, problems, and treatments. Guardian is an experimental intelligent agent for monitoring patients in a surgical ICU (Hayes-Roth et al. 1992). Guardian is based on the BB1 blackboard control architecture and is under development in a laboratory environment using simulated and recorded patient data. Guardian has a number of advantages over existing real-time intelligent monitoring architectures. These include multiple reasoning skills, configuration of available knowledge and skills based on context, data reduction based on the availability of computational resources, and dynamic selection of reasoning skills under time pressure. Guardian is composed of a variety of software modules organized in two levels. At the lower level, Guardian has modules that perform data reduction and abstraction tasks. At the higher level, various reasoning skills exist and cooperate under the guidance of BB1. Domain knowledge bases for Guardian are coordinated through a shared ontology for intelligent monitoring and control.
These knowledge bases are available to any of Guardian's problem solving components that wish to use them. Incremental knowledge acquisition continues in parallel with the development of Guardian. This videotape focuses on the dynamic and context-sensitive aspects of reasoning in Guardian. In the demonstration, Guardian starts monitoring a simulated patient who has had open heart surgery and has just been taken from the operating room to the surgical ICU. This situation is defined as the "early postoperative" situation. Later in the demonstration, the patient improves to a more stable situation. Since the domain knowledge bases are extensive, Guardian selects different subsets of problems for which to prepare short-latency reactions as appropriate for these different contexts. Its choices reflect consideration of several features of known contingencies, such as criticality, side effects, and likelihood in the given context (Dabija 1994). During normal operation, over a hundred channels of data flow into Guardian at regular intervals or on demand. A data reduction and dynamic filtering component reduces the incoming data rate based on dynamic attention focusing decisions and also on the dynamic global rate at which Guardian can process incoming data. A temporal fuzzy pattern recognition component classifies incoming data in terms of clinically-relevant signs and symptoms. Two component reasoning skills are demonstrated. In the immediate postoperative situation, a reactive diagnosis skill (Ash et al. 1993) utilizes action-based hierarchies to provide short-latency diagnosis and therapeutic response to one of the selected subset of context-relevant clinical problems. Later, a probabilistic causal reasoning skill (Peng & Reggia 1990) performs associative diagnosis based on clinical signs and symptoms for a non-time-stressed problem.

(The Guardian project is sponsored by ARPA/NASA grant NAG2-581 under ARPA order 8607.)
These examples illustrate Guardian's ability to make runtime choices among alternative reasoning methods based on problem characteristics and the availability of data, knowledge, and real-time computational resources.

References
Hayes-Roth, B., et al. 1992. Guardian: a prototype intelligent agent for intensive-care monitoring. Artificial Intelligence in Medicine 4(2):165-185.
Dabija, V. 1994. Deciding whether to plan to react. Ph.D. diss., Dept. of Computer Science, Stanford University.
Ash, D.; Gold, G.; Seiver, A.; and Hayes-Roth, B. 1993. Guaranteeing real-time performance with limited resources. Artificial Intelligence in Medicine 5(1):49-66.
Peng, Y., and Reggia, J. 1990. Abductive Inference Models for Diagnostic Problem-Solving. New York, NY: Springer-Verlag.
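The attention-driven data reduction described above can be caricatured in a few lines. This is a hypothetical sketch, not Guardian's implementation: the channel names, the fixed priority order, and the per-tick budget semantics are all invented for illustration.

```python
def filter_channels(readings, focus, budget):
    """Toy attention-driven data reduction: given one tick of channel
    readings, keep every channel currently in the attention focus, then
    fill any remaining processing budget with the other channels in their
    listed order. `budget` is the global number of readings the agent can
    process this tick."""
    focused = [(ch, v) for ch, v in readings if ch in focus]
    others = [(ch, v) for ch, v in readings if ch not in focus]
    kept = focused[:budget]
    kept += others[: max(0, budget - len(kept))]
    return kept

# One tick of monitoring data (channel name, value); focus is on circulation.
tick = [("heart_rate", 118), ("spo2", 93), ("art_bp", 82),
        ("temp", 37.2), ("resp_rate", 14)]
kept = filter_channels(tick, focus={"heart_rate", "art_bp"}, budget=3)
print(kept)  # focused channels survive; lower-priority ones are shed
```

Lowering `budget` models a more heavily loaded agent: fewer unfocused channels survive each tick, which is the "dynamic global rate" idea in miniature.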
Dynamic Generation of Complex Behavior
Randolph M. Jones
Artificial Intelligence Laboratory, University of Michigan, 1101 Beal Avenue, Ann Arbor, Michigan 48109-2110
rjones@eecs.umich.edu

Simulation can be an effective training method if the simulation environment is as realistic as possible. An important part of the training for Navy pilots involves flying against computer-controlled agents in simulated tactical scenarios. In order for such a situation to be realistic, the computer-controlled agents must be indistinguishable from human-piloted agents within the simulated environment. The primary goal of the Soar-IFOR project (Jones et al. 1993; Rosenbloom et al. 1994) is to provide such believable agents for flight training simulations. To achieve this goal, we have constructed the TacAir-Soar system.(1) Developing this system requires us to address a number of core research issues within artificial intelligence, including reasoning about interacting goals, situation interpretation, communication, explanation, planning, learning, natural language understanding and generation, temporal reasoning, and plan recognition. This report focuses on two particular issues required to function reasonably within the tactical air domain: a system must be able to generate behavior in response to complex goals and situations, and it must be able to do so dynamically, in response to extremely rapid changes in the agent's situation. On the surface, these two capabilities seem to be at odds with each other. Approaches to real-time or reactive behavior have generally not been used within complex domains, and systems that focus on complex goals do not usually do so in real time. Our solution has been to encode knowledge within the Soar production architecture (Rosenbloom et al. 1991) in order to take advantage of state-of-the-art matching algorithms to provide real-time, reactive behavior. In addition, the knowledge is represented at a fine grain size, capturing a deep representation of the first principles involved in the tactical flight domain. This representation allows reactive rules to combine in a fashion that leads to appropriate responses to complex goal and situation combinations. The current version of TacAir-Soar has been flown in simulated exercises against other computer-controlled agents, as well as human-controlled flight simulators. The agent exhibits a wide variety of complex behaviors, and it meets the real-time requirements of the task. In addition, the agent provides realistic, human-like behavior in a number of tactical scenarios.

(1) This research involves the efforts of John E. Laird, Randolph M. Jones, Paul E. Nielsen, and Frank Koss at the University of Michigan; Paul S. Rosenbloom, Milind Tambe, W. Lewis Johnson, and Karl B. Schwamb at the University of Southern California, Information Sciences Institute; and Jill E. Lehman and Robert Rubinoff at Carnegie Mellon University. The members of BMH, Inc. have also provided invaluable assistance as subject-matter experts. The research is supported by contract N00014-02-K-2015 from the Advanced Systems Technology Office of the Advanced Research Projects Agency and the Naval Research Laboratory.

References
Jones, R. M., Tambe, M., Laird, J. E., & Rosenbloom, P. S. 1993. Intelligent automated agents for flight training simulators. In Proceedings of the Third Conference on Computer Generated Forces and Behavioral Representation, 33-42. Orlando, FL.
Rosenbloom, P. S., Johnson, W. L., Jones, R. M., Koss, F., Laird, J. E., Lehman, J. F., Rubinoff, R., Schwamb, K. B., & Tambe, M. 1994. Intelligent automated agents for tactical air simulation: A progress report. In Proceedings of the Fourth Conference on Computer Generated Forces and Behavioral Representation. Orlando, FL.
Rosenbloom, P. S., Laird, J. E., Newell, A., & McCarl, R. 1991. A preliminary analysis of the Soar architecture as a basis for general intelligence.
Artificial Intelligence 47:289-325.
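The combination of reactive rules described above can be sketched as a minimal recognize-act loop. This is not Soar itself (Soar's matcher, operator selection, and chunking are far richer); the state fluents and the two "tactical" rules are invented for illustration. Each cycle, every rule whose condition matches fires, so several fine-grained rules can react to one situation at once.

```python
def run_agent(state, rules, max_cycles=10):
    """Minimal recognize-act loop: each cycle, every rule whose condition
    matches the current state fires, proposing state changes; the loop
    stops at quiescence (no rule proposes anything new)."""
    for _ in range(max_cycles):
        updates = {}
        for condition, action in rules:
            if condition(state):
                updates.update(action(state))
        if not updates or all(state.get(k) == v for k, v in updates.items()):
            break  # quiescence: nothing new to do
        state = {**state, **updates}
    return state

# Two toy rules: turn toward a detected bogey, and arm when it is close.
rules = [
    (lambda s: s["bogey_bearing"] != 0,
     lambda s: {"heading": s["heading"] + s["bogey_bearing"],
                "bogey_bearing": 0}),
    (lambda s: s["bogey_range"] < 10 and not s["armed"],
     lambda s: {"armed": True}),
]
final = run_agent({"heading": 90, "bogey_bearing": -30,
                   "bogey_range": 8, "armed": False}, rules)
print(final)
```

Both rules fire in the same cycle here, which is the point: complex behavior emerges from independent fine-grained rules rather than one monolithic plan.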
HIPAIR: Interactive Mechanism Design Using Configuration Spaces
Leo Joskowicz, IBM T. J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY 10598, E-mail: josko@watson.ibm.com
Elisha Sacks, Computer Science Department, Princeton University, Princeton, NJ 08544, E-mail: eps@cs.princeton.edu

We present an interactive problem solving environment for reasoning about shape and motion in mechanism design. Reasoning about shape and motion plays a central role in mechanism design because mechanisms perform functions by transforming motions via part interactions. The input motion, the part shapes, and the part contacts determine the output motion. Designers must reason about the interplay between shape and motion at every step of the design cycle. Reasoning about shape and motion is difficult and time consuming even for experienced designers. The designer must determine which features of which parts interact at each stage of the mechanism work cycle, must compute the effects of the interactions, must identify contact transitions, and must infer the overall behavior from this information. The designer must then infer shape modifications that eliminate design flaws, such as part interference and jamming, and that optimize performance. The difficulty in these tasks lies in the large number of potential contacts, in the complexity of the contact relations, and in the discontinuities induced by contact transitions. Current computer-aided design programs support only a few aspects of reasoning about shape and motion. Drafting programs provide interactive environments for the design of part shapes, but do not support reasoning about motion. Simulation programs, which compute and animate the motions of the parts of mechanisms, reveal only one of many possible behaviors. Commercial simulators only handle linkages: mechanisms whose parts interact through permanent surface contacts, such as hinges and screws. Other packages handle specialized mechanisms, such as cams and gears. They cannot handle mechanisms whose parts interact intermittently or via point or curve contacts. Yet these higher pairs play a central role in mechanism design. Our survey of 2500 mechanisms in an engineering encyclopedia shows that 66% contain higher pairs and that 18% involve intermittent contacts. We have developed a problem solving environment, called HIPAIR, for reasoning about shape and motion in mechanisms. The core of the environment is a module that automates the kinematic analysis of mechanisms composed of linkages and higher pairs. This module provides the computational engine for a range of tasks, including simulation, behavior description, and parametric design. It is comprehensive, robust, and fast. HIPAIR handles higher pairs with two degrees of freedom, including ones with intermittent and simultaneous contacts. This class contains 90% of 2.5D pairs and 80% of all higher pairs according to our survey. HIPAIR computes and manipulates configuration spaces. The configuration space of a mechanism is a geometric representation of the configurations (positions and orientations) of its parts. Configuration spaces encode the relations among part shapes, part motions, and overall behavior in a concise, complete and explicit format. They simplify and systematize reasoning about shape and motion by mapping it into a uniform geometrical framework. The videotape explains configuration spaces and illustrates how HIPAIR supports mechanism design and analysis. HIPAIR has been tested on over 100 parametric variations of 25 kinematic pairs and on a dozen multipart mechanisms, including a Fuji disposable camera with ten moving parts.

References
1. "Mechanism Comparison and Classification for Design", L. Joskowicz, in Research in Engineering Design, Springer-Verlag, Vol. 1, No. 2, 1990.
2. "Computational Kinematics", L.
Joskowicz and E. Sacks, Artificial Intelligence, Vol. 51, Nos. 1-3, North-Holland, 1991.
3. "Automated Modeling and Kinematic Simulation of Mechanisms", E. Sacks and L. Joskowicz, Computer-Aided Design, Vol. 25, No. 2, 1993.
4. "Configuration Space Computation for Mechanism Design", E. Sacks and L. Joskowicz, Proc. of the IEEE Int. Conference on Robotics and Automation, IEEE Computer Society Press, 1994.
5. "Mechanism Analysis and Design Using Configuration Spaces", E. Sacks and L. Joskowicz, submitted, Communications of the ACM, 1994.
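The configuration-space idea can be made concrete with a toy example. This is not HIPAIR's algorithm; it is a hypothetical sketch that discretizes the joint configuration space of two rotating "cams" (each modeled by an invented angle-dependent boundary radius) and marks each cell free or blocked by a simple penetration test.

```python
import math

def configuration_space(r1, r2, center_dist, steps=90):
    """Discretized configuration space for two rotating parts. Each part
    is a cam whose boundary radius depends on its angle; a configuration
    (theta1, theta2) is blocked when the parts would overlap, i.e. when
    the facing radii sum to more than the distance between centers."""
    grid = []
    for i in range(steps):
        t1 = 2 * math.pi * i / steps
        row = []
        for j in range(steps):
            t2 = 2 * math.pi * j / steps
            # True = free (no contact), False = blocked (penetration)
            row.append(r1(t1) + r2(t2) <= center_dist)
        grid.append(row)
    return grid

# Two identical eccentric cams: radius varies between 0.7 and 1.3.
cam = lambda t: 1.0 + 0.3 * math.cos(t)
cs = configuration_space(cam, cam, center_dist=2.4)

free = sum(cell for row in cs for cell in row)
print(f"free cells: {free} / {len(cs) ** 2}")
print("(0,0) free?", cs[0][0], " (180,180) free?", cs[45][45])
```

The boundary between the free and blocked regions of this grid is where the parts are in contact, and it is exactly that contact boundary that encodes the pair's kinematics.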
ALIVE: Artificial Life Interactive Video Environment
Pattie Maes, Trevor Darrell, Bruce Blumberg, Sandy Pentland
MIT Media Laboratory, 20 Ames St., Cambridge, MA 02139
pattie/trevor/bruce/sandy@media.mit.edu

Abstract
In this video we demonstrate a novel system which allows wireless full-body interaction between a human participant and a graphical world inhabited by autonomous agents. The system is called "ALIVE", an acronym for Artificial Life Interactive Video Environment. The goal of ALIVE is to present a virtual environment in which a user can interact, in natural and believable ways, with autonomous semi-intelligent agents whose behavior is equally natural and believable. In ALIVE, a single CCD camera is used to obtain a color image of a person which is composited into a 3D graphical world. The composite world is projected onto a large video wall in a world-centered reference frame, which faces the user and acts as a type of "magic mirror". No goggles, gloves, or wires are needed for interaction with the world: agents and objects in the graphical world can be acted upon by the human participant through the use of domain-specific computer vision techniques that analyze the silhouette and gestures of the person. The agents inhabiting the world are modeled as self-contained autonomous systems with internal needs and motivations which are embodied in a dynamic world: they sense the world via sensors, and move in, and act on, the world in real time in response to the user's gestures and actions. As a result of the presence of these semi-intelligent entities, the system does not just allow for the obvious direct-manipulation style of interaction, but also a more powerful, indirect style of interaction in which gestures can have more complex meanings, which may vary according to the situation in which the agents and user find themselves.
The video presents a specific implementation of the ALIVE system which was demonstrated as part of SIGGRAPH-93's Tomorrow's Realities show. Approximately 500 attendees interacted with the ALIVE system over the course of 5 days. The video footage was taken during that time. More information on the ALIVE system in general may be found in [Maes93] and [Darrell94]. Information on the details of the behavior and agent model used in ALIVE may be found in [Blumberg94]. More information on details of the visual routines may be found in [Darrell94].

References
Blumberg, B. 1994. Action-Selection in Hamsterdam: Lessons from Ethology. In: The Proceedings of the Third International Conference on the Simulation of Adaptive Behavior, Brighton. Forthcoming.
Darrell, T., Maes, P., Blumberg, B., and Pentland, S. 1994. Situated Vision and Behavior for Interactive Environments. Technical Note No. 261, M.I.T. Media Laboratory Perceptual Computing, M.I.T.
Maes, P. 1993. ALIVE: An Artificial Life Interactive Video Environment. In: Visual Proceedings, The Art and Interdisciplinary Programs of Siggraph 93. ACM, NY.
A Reading Coach that Listens: (Edited) Video Transcript
Jack Mostow, Alexander G. Hauptmann, Steven F. Roth, Matt Kane, Adam Swift, Lin Chase
Project LISTEN, Carnegie Mellon Robotics Institute, 215 Cyert Hall, 4910 Forbes Avenue, Pittsburgh, PA 15213-3890

At Carnegie Mellon University, Project LISTEN(1) is taking a novel approach to the problem of illiteracy. We have developed a prototype automated reading coach that listens to a child read aloud, and helps when needed. The coach provides a combination of reading and listening, in which the child reads wherever possible, and the coach helps wherever necessary -- a bit like training wheels on a bicycle. Let's see how the automated coach responds to various things a child might do. The output of the automatic speech recognizer is displayed at the bottom of the screen.

Help when needed: The coach recues a misread word by rereading the words that lead up to it, just like the expert reading teachers whom the coach is modelled after. This context often helps the reader correct the word on the second try:
Text: The cow lives on the farm.
Reader: "The cow lives on the farm."
Text: She eats grass all day long.
Reader: "She eats good all day long."
Coach: SHE EATS
Reader: "grass"
Coach: GRASS. PLEASE CONTINUE.

Support comprehension: The coach is designed to emphasize comprehension (and ignore minor mistakes, such as repeated words). However, the word "very" is important to the meaning of the sentence, so the coach asks the reader to reread it. The coach rereads the sentence to help the reader comprehend it:
Text: At night she is very tired.
Reader: "At night . . . at night she is tired."
Coach: READ THIS WORD AGAIN. [flashes "very"]
Reader: "&d?"
Coach: VERY. AT NIGHT SHE IS VERY TIRED.

Maintain flow: When the reader gets stuck, the coach jumps in, enabling the reader to complete the sentence:
Text: Then she slowly comes home.
Reader: "Then she then she s . . . s . . .
"
Coach: SLOWLY
Reader: "slowly comes home."

(1) Note: For acknowledgements, further references, and technical details, please see (Mostow et al. 1994). This research was supported primarily by the National Science Foundation under Grant Number MDR-9154059 and by the Advanced Research Projects Agency, DOD, through DARPA Order 5167, monitored by the Air Force Avionics Laboratory under contract N00039-85-C-0163. Lin Chase is supported by a Howard Hughes Doctoral Fellowship from the Hughes Research Laboratory. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the sponsors or of the United States Government.

Minimize disruption: Since short function words like "to" and "be" do not usually affect comprehension, the coach refrains from interrupting the reader to correct this omission:
Text: I want to be milked, she says.
Reader: "I want milked she says"

We're having children try out this prototype coach to help us improve it. [Show children using coach.]

Clicking for help: To get help with a word, the child can click on it. [Coach speaks word.] This feature is very useful, but children often don't realize when they need help. [Child misreads "democratic" as "dramatic," even after coach recues it.]

Tolerate recognizer inaccuracy: We're working to make the coach recognize children's speech more accurately, and behave reasonably even when the speech recognizer is inaccurate. [Coach misrecognizes "slowly," causing it to reread the sentence.]

In an earlier study we compared how well second graders read with and without similar assistance. Without assistance, they missed one word in eight, which is considered overly frustrating. With assistance, they missed fewer than one word in forty, enabling them to read and comprehend material more than six months beyond their independent reading level.
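The intervention policy implied by the transcript can be sketched in code. This is a hypothetical reconstruction, not Project LISTEN's implementation: the real coach works from noisy speech-recognizer output, and the greedy word alignment, function-word list, and response labels here are all invented for the example.

```python
FUNCTION_WORDS = {"a", "an", "the", "to", "be", "of", "in", "is"}

def coach_response(text, heard):
    """Decide how to respond to one sentence read aloud (both arguments
    are lowercase word lists). Greedy alignment: ignore immediately
    repeated words and omissions of short function words; on the first
    misread content word, recue the reader by rereading the words leading
    up to it; if the reader stalls before the end, supply the next word."""
    ti = 0
    prev = None
    for word in heard:
        if word == prev:                         # repetition: ignore
            continue
        prev = word
        while (ti < len(text) and word != text[ti]
               and text[ti] in FUNCTION_WORDS):  # tolerated omission
            ti += 1
        if ti < len(text) and word == text[ti]:
            ti += 1
        else:                                    # misread content word
            return "recue: " + " ".join(text[:ti])
    if ti < len(text) and any(w not in FUNCTION_WORDS for w in text[ti:]):
        return "supply: " + text[ti]             # reader stuck mid-sentence
    return "continue"

print(coach_response("she eats grass all day long".split(),
                     "she eats good all day long".split()))
print(coach_response("i want to be milked she says".split(),
                     "i want milked she says".split()))
```

On the transcript's examples this policy recues "she eats" after the "grass"/"good" misreading, lets the omission of "to be" pass silently, and supplies "slowly" when the reader stalls.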
Children can't read to learn until they learn to read -- whether it's a science passage or anything else. We need to find out how the coach can help children learn over time, and explore how the coach can help in ways that human teachers cannot -- for example, by modifying the text dynamically, and by tapping into the motivational power of computers. [Child comments on coach.]

Project LISTEN builds on years of previous government-funded research in basic speech technology. It has the potential to pay back many times over for the cost of that research, since illiteracy costs the United States over 225 billion dollars every year (Herrick, 1990). Moreover, this work applies to several important areas in addition to children's reading instruction, including adult literacy, English as a second language, and foreign language learning. It opens the door to a new generation of intelligent tutoring systems that can listen to their students.

References
E. Herrick. (1990). Literacy Questions and Answers. Pamphlet. P.O. Box 81826, Lincoln, NE 68501: Contact Center, Inc.
J. Mostow, S. Roth, A. G. Hauptmann, and M. Kane. (August 1994). A Prototype Reading Coach that Listens. Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI-94). Seattle, WA: American Association for Artificial Intelligence.
Machine Rhythm
David Rosenthal
International Media Research Foundation, TOHMA Building B1, 2-14-1 Nishi-Waseda, Shinjuku-ku, Tokyo 169, JAPAN
dfr@media.mit.edu

The video discusses Machine Rhythm, a program which emulates human rhythm perception. Given a musical performance represented as a MIDI stream, the program determines the performance's rhythm - that is, it decides the meter of the performance, the rhythmic value of each note, and the location of barlines. The basic orientation of the video is to demonstrate applications of the program; more theoretical aspects are treated in (Rosenthal 1992). Machine rhythm-finding is a difficult problem for several reasons. First, the timing of downbeats, which are heard by human listeners as a regular pulse, actually varies quite wildly; in certain cases the period can vary by a factor of two in adjacent beats. This can happen even when tempo variation is not being used as an expressive device. Second, not all downbeats correspond to actual musical events - in other words, one sometimes taps to a beat that doesn't correspond to a note in the music. Syncopations are particularly difficult instances of this, but the problem occurs even in unsyncopated music. Third, there is no straightforward method by which the location of downbeats can be extracted from a performance; downbeats are not reliably louder than other notes, for example. Finally, the rhythm of a piece of music is not a single pulse, but rather an interlocking hierarchy of pulses with periods of different sizes; hence the nomenclature of "measure," "half-note," "quarter-note," and so on. Despite these apparent difficulties, human listeners have little trouble arriving at an unambiguous interpretation; in this sense rhythm finding is as well-defined a problem as, say, speech understanding.
Humans apparently compensate for the problems mentioned in the last paragraph by integrating a variety of acoustic and musical cues, such as texture, melodic pattern, relative time between onsets, length of notes, and others. Humans also have the ability to fluently change their interpretations, either because an initial interpretation is incorrect or because of a change in the music. Humans apparently track the various-size periods simultaneously, and use them to confirm each other. In an effort to build a machine rhythm tracker with performance approaching that of a human, we integrated many of these methods into our system: The system integrates evidence from a variety of cues, some of which involve subtle analyses of the music from the MIDI data. The system achieves the flexibility to handle difficult situations by maintaining a hierarchy of hypotheses. The system tracks several "levels," or different-sized periods, simultaneously, using them as checks on each other. The first section of the video uses an animation to explain the problem of rhythm parsing, and briefly outlines our approach. The next section shows the program parsing various live performances, and demonstrates an application - automatic transcription. This section also demonstrates the program's ability to handle somewhat more challenging situations, such as varying tempo and triplets. The final section of the video shows another kind of application, which we call automatic synchronization. In this section we take a piece written for four hands and record each pianist's part separately. We then demonstrate that the two parts need to be synchronized in order to be played together. Furthermore, they cannot be synchronized by globally altering their tempos; local adjustment of the tempo is necessary. To do this, the program must make use of the rhythmic parsing that the Machine Rhythm program produces. When the two parts are synchronized in this way, the piece sounds as it should.
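A tiny version of the "competing hypotheses" idea can be sketched for a single pulse level. This is not Machine Rhythm's algorithm; a real tracker also tracks phase, adapts the period to tempo drift, and checks multiple metric levels against each other. The onset times and candidate periods below are invented for the example.

```python
def score_period(onsets, period, tol=0.1):
    """Fraction of onsets that fall within `tol` (in beats) of a beat,
    assuming the first onset is on a beat of a pulse with this period."""
    t0 = onsets[0]
    hits = 0
    for t in onsets:
        phase = ((t - t0) / period) % 1.0
        if phase < tol or phase > 1.0 - tol:
            hits += 1
    return hits / len(onsets)

def best_period(onsets, candidates):
    """Keep a set of competing period hypotheses and return the one that
    best explains the observed onsets."""
    return max(candidates, key=lambda p: score_period(onsets, p))

# A quarter-note pulse of 0.5 s, with some eighth notes and timing slop.
onsets = [0.0, 0.5, 0.75, 1.0, 1.52, 2.0, 2.26, 2.5, 3.01, 3.5]
candidates = [0.3, 0.4, 0.5, 0.6, 0.75]
print("best period:", best_period(onsets, candidates))
```

Note that not every onset needs to fall on a beat for the right hypothesis to win; the off-beat eighth notes simply contribute no evidence, which loosely mirrors the observation that not all downbeats correspond to notes and vice versa.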
References
Rosenthal, D. 1992. Machine Rhythm: Computer Emulation of Human Rhythm Perception. Ph.D. diss., The Media Laboratory, Massachusetts Institute of Technology.

Acknowledgments
This video was produced while I was a doctoral student at the MIT Media Lab. I'd like to thank Stuart Cody and Greg Tucker, both of the Media Lab, for their help.
Forming beliefs about a changing world*
Fahiem Bacchus, Department of Computer Science, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1, fbacchus@logos.uwaterloo.ca
Adam J. Grove, NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, grove@research.nj.nec.com

Abstract
The situation calculus is a popular technique for reasoning about action and change. However, its restriction to a first-order syntax and pure deductive reasoning makes it unsuitable in many contexts. In particular, we often face uncertainty, due either to lack of knowledge or to some probabilistic aspects of the world. While attempts have been made to address aspects of this problem, most notably using nonmonotonic reasoning formalisms, the general problem of uncertainty in reasoning about action has not been fully dealt with in a logical framework. In this paper we present a theory of action that extends the situation calculus to deal with uncertainty. Our framework is based on applying the random-worlds approach of [BGHK94] to a situation calculus ontology, enriched to allow the expression of probabilistic action effects. Our approach is able to solve many of the problems imposed by incomplete and probabilistic knowledge within a unified framework. In particular, we obtain a default Markov property for chains of actions, a derivation of conditional independence from irrelevance, and a simple solution to the frame problem.

Introduction
The situation calculus is a well-known logical technique for reasoning about action and change [MH69]. Calculi of this sort provide a useful mechanism for dealing with simple temporal phenomena, and serve as a foundation for work in planning. Nevertheless, the many restrictions inherent in the situation calculus have inspired continuing work on extending its scope. An important source of these restrictions is that the situation calculus is simply a first-order theory.
Hence, it is only able to represent "known facts" and can make only valid deductions from those facts. It is unable to represent probabilistic knowledge; it is also ill-suited for reasoning with incomplete information. These restrictions make it impractical in a world where little is definite, yet where intelligent, reasoned decisions must nevertheless be made. There has been much work extending the basic situation calculus using various nonmonotonic theories. Although interesting, these theories address only a certain limited type of uncertainty; in particular, they do not allow us to represent actions whose effects are probabilistic. This latter issue seems to be addressed almost entirely in a non-logical fashion. In particular, we are not aware of any work extending the situation calculus to deal with probabilistic information. This is perhaps understandable: until recently, it was quite common to regard approaches to reasoning based on logic as being irreconcilably distinct from those using probability. Recent work has shown that such pessimism is unjustified. In this paper we use a new theory of probabilistic reasoning called the random-worlds method [BGHK94] which naturally extends first-order logic. We show that this method can be successfully applied to temporal reasoning, yielding a natural and powerful extension of the situation calculus.

*Some of this research was performed while Daphne Koller was at Stanford University and at the IBM Almaden Research Center. Work supported in part by the Canadian Government through their NSERC and IRIS programs, by the Air Force Office of Scientific Research (AFSC) under Contract F49620-91-C-0080, and by a University of California President's Postdoctoral Fellowship.

Joseph Y. Halpern, IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120-6099, halpern@almaden.ibm.com
Daphne Koller, Computer Science Division, University of California, Berkeley, Berkeley, CA 94720, daphne@cs.berkeley.edu
The outline of this paper is as follows. First, we briefly describe the situation calculus, discussing in more detail some of its problems, and some of the related work addressing these problems. We then describe our own approach. We begin by summarizing the random-worlds method. Although the application of this method to temporal reasoning is not complicated, an appropriate representation of temporal events turns out to be crucial. The solution, based on counterfactuals, seems to be central to many disciplines in which time and uncertainty are linked. After these preliminaries, we turn to some of the results obtained from our approach. As we said, our goal is to go beyond deductive conclusions. Hence, our reasoning procedure assigns degrees of belief (probabilities) to the various possible scenarios. We show that the probabilities derived using our approach satisfy certain important desiderata. In particular, we reason correctly with both probabilistic and nondeterministic actions (the distinction between the two being clearly and naturally expressed in our language). Furthermore, we obtain a default Markov property for reasoning about sequences of actions. That is, unless we know otherwise, the outcome of an action at a state is independent of previous states. We note that the Markov property is not an externally imposed assumption, but rather is a naturally derived consequence of the semantics of our approach. Moreover, it can be overridden by information in the knowledge base. The Markov property facilitates a natural mechanism of temporal projection, and is a natural generalization of an intuitive mechanism of projection in deterministic domains in which we consider action effects sequentially.

222 Causal Reasoning. From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.
In general, when actions have deterministic effects (whether in fact, or only by default, which is another easily made distinction) then our approach achieves most standard desiderata. Finally, we turn to examining one of the most famous issues that arise when reasoning about action: the Frame Problem and the associated Yale Shooting Problem (YSP) [MH69, HM87]. We show that our approach can solve the former problem, without suffering from the latter, almost automatically. Writing down a very natural expression of a frame axiom almost immediately gives the desired behavior. We state a theorem, based on the criterion of Kartha [Kar93], showing the general correctness of our approach's solution to the frame problem. We also compare our solution to one given by Baker [Bak91].

Preliminaries

The situation calculus

We assume some familiarity with the situation calculus and associated issues. In brief, by situation calculus we refer to a method of reasoning about temporal phenomena using first-order logic and a sorted ontology consisting of actions and situations. A situation is a "snapshot" of the world; its properties are given by predicates called fluents. For example, consider a simple version of the well-known Yale Shooting Problem. To represent an initial situation S0 where Fred is alive and there is an unloaded gun we can use the formula Alive(S0) ∧ ¬Loaded(S0). The effects actions have on situations can be encoded using a Result function. For instance, we can write ∀s (Loaded(s) ⇒ ¬Alive(Result(Shoot, s))) to assert that if a loaded gun is fired it will kill Fred.¹ We can then ask what would happen if, starting in S0, we load the gun, wait for a moment, and then shoot: Alive(Result(Shoot, Result(Wait, Result(Load, S0))))? The most obvious approach for deciding whether this is true is to use first-order deduction. However, for this to work, we must provide many other facts in addition to the two above.
In fact, to answer questions using deduction we would in general have to provide a complete theory of the domain, including a full specification of the initial situation and explicit formulas describing which fluents do and do not change whenever any action is taken. For instance, we would need to say that after a Wait action, a loaded gun continues to be loaded, if Fred was alive before he will be alive afterwards, and so on. The issue of stating the non-effects of actions is known as the frame problem [MH69]: how do we avoid having to represent the numerous axioms required to describe non-effects? We would like to omit or abbreviate these axioms somehow. The frame problem is only one aspect of the problem of completeness; generally our knowledge will be deficient in other ways as well. For example, we may not know the truth value of every fluent in the initial situation. We may know the situation after some sequence of actions has been performed, but not know precisely which actions were taken. (This leads to one type of explanation problem.) We may not know precisely what effects an action has. This may be due to a simple lack of information, or to the fact that the action's effects are probabilistic (e.g., we might believe that there is a small chance that Fred could survive being shot). Note that even if we know the probabilities of the various action outcomes, the situation calculus's first-order language is too weak to express them. In all such cases, it is unlikely that deductive reasoning will reach any interesting conclusions. For instance, if we leave open the logical possibility that the gun becomes unloaded while we wait, then there is nothing we can say with certainty about whether Fred lives.

¹In general, we use upper case for constants and lower case for variables.
Our strategy for investigating these issues is to examine a generalized notion of inference that not only reports certain conclusions (in those rare cases where our knowledge supports them), but also assigns degrees of belief (i.e., probabilities) to other conclusions. For instance, suppose KB is some knowledge base stating what we know about actions' effects, the initial situation, and so on, and we are interested in a query such as φ = Alive(Result(Shoot, Result(Wait, Result(Load, S0)))). The next section shows how we define Pr(φ|KB), the degree of belief in φ (which is a number between 0 and 1), given our knowledge KB. It is entirely possible for KB to be such that Pr(φ|KB) = 0.1, which would mean we should have high but not complete confidence that Fred would be dead after this sequence of actions. To a large extent, it is the freedom to assign intermediate probabilities (other than 0 or 1) that relieves us of the traditional situation calculus' demand for complete knowledge. A related important feature is our ability to make use of statistical knowledge (for instance, an assertion that shooting only succeeds 90% of the time). Of course, the real success of our approach depends crucially on the details and behavior of the particular method we have for computing probabilities. Examining this method, and justifying its successes, is the goal of the rest of this paper. Before continuing, we remark that the importance of the issues we have raised is well known. There have been numerous attempts to augment deductive reasoning with the ability to "jump to conclusions", i.e., nonmonotonic reasoning (e.g., [HM87, Kau86, Lif87]), often in an attempt to solve the frame problem. The idea of reasoning to "plausible" conclusions, rather than only the deductively certain ones, clearly shares some motivation with our decision to evaluate numeric probabilities. The connection is in fact quite deep; see [BGHK94].
However, the application of pure nonmonotonic logics to reasoning about actions has proven to be surprisingly difficult and, in any event, these approaches are not capable of dealing with probabilistic actions or with the quantitative assessment of probabilities. There has also been work addressing the issue of probabilities in the context of actions. The propositional approaches to the problem (e.g., [Han90, DK89]) do not incorporate the full expressive power of the situation calculus. Furthermore, even those that are able to deal with abductive queries typically cannot handle explanation problems (since they do not place a prior probability distribution over the space of actions). [Ten91] achieves a first-order ontology by applying the reference-class approach of [Kyb74] to this problem. His approach, however, has a somewhat "procedural" rather than a purely logical (semantic) character. Hence, although it specifies how to do forward projection (assessing probabilities for outcomes given knowledge of an initial situation), it does not support arbitrary queries from arbitrary knowledge bases. This flexibility is important, particularly for explanation and diagnosis. Finally, none of these works subsume all the issues addressed by advocates of nonmonotonic reasoning. Our approach provides a framework for dealing with these issues in a uniform fashion.

Random-worlds

We now turn to a summary of the random-worlds method; see [BGHK94] and the references therein for full details. We emphasize that this is a general technique for computing probabilities, given arbitrary knowledge expressed in a very rich language; it was not developed specifically for the problem of reasoning about action and change. As a general reasoning method, random-worlds has been shown to possess many attractive features [BGHK94], including a preference for more specific information and the ability to ignore irrelevant information.
In a precise sense, it generalizes both the powerful theory of default reasoning of [GMP90] and (as shown in [GHK92]) the principle of maximum entropy [Jay78]; it can also be used to do reference-class reasoning from statistics in the spirit of [Kyb74]. The two basic ideas underlying the random-worlds method are the provision of a general language for expressing statistical information, and a mechanism for probabilistic reasoning from such information. The language we use extends full first-order logic with statistical information, as in [Bac90], by allowing proportion expressions of the form ||φ(x)|ψ(x)||_x. This is interpreted as denoting the proportion of domain elements satisfying φ, among those satisfying ψ.² (Actually, an arbitrary set of variables is allowed in the subscript.) A simple proportion formula has the form ||φ(x)|ψ(x)||_x ≈ 0.6, where "≈" stands for "approximately equal." Approximate equality is required since, if we make a statement like "90% of birds can fly", we almost certainly do not intend this to mean that exactly 90% of birds fly! Among other things, this would imply that the number of birds is a multiple of ten! Approximate equality is also important because it allows us to capture defaults. For example, we can express "Birds typically fly" as ||Fly(x)|Bird(x)||_x ≈ 1. We omit a description of the formal semantics, noting that the main subtlety concerns the interpretation of approximate comparisons, and that the special case of ≈ 1 is related to the well-known ε-semantics [Pea89]. The second aspect of the method is, of course, the specific way in which degrees of belief are computed. Before reviewing this, we remark that for the purposes of most of this paper, the random-worlds method can be regarded as a black box which, given any knowledge base KB and a query φ, assesses a degree of belief (i.e., a probability) Pr_∞(φ|KB).

²If ψ(x) is identically true, we generally omit it.
Very briefly, and ignoring the subtlety of approximate equality, the method is as follows. For any domain size N, we consider all the worlds (first-order structures) of size N consistent with KB. Let #worlds_N(KB) be the number of size-N worlds that satisfy KB. Appealing to the principle of indifference, we regard all such worlds as being equally plausible. It then follows that, given a domain size N, we should define Pr_N(φ|KB) = #worlds_N(φ ∧ KB) / #worlds_N(KB). Typically, all that is known about N is that it is "large". Thus, the degree of belief in φ given KB is taken to be lim_{N→∞} Pr_N(φ|KB). Applying random-worlds in a temporal context is mostly a problem of choosing an appropriate representation scheme. Here we are guided mostly by the standard ontology of situation calculus, and reason about situations and actions. Indeed, since our language includes that of first-order logic, it would be possible to use the language of standard situation calculus without change. However, we want to do more than this. In particular, we want to allow probabilistic actions and statistical knowledge. To do this, we need to allow for actions that can have several effects (even relative to the same preconditions). For this purpose, it is useful to conceptually divide a situation into two components: the state and the environment. The state is the visible part of the situation; it corresponds to the truth values of the fluents. The environment is intended to stand for all aspects of the situation not determined by the fluents (such as the time, or other properties of the situation that we might not wish to express explicitly within our language). So what is a world in this context? Our worlds have a three-sorted domain, consisting of states, environments, and actions. Situations are simply state-environment pairs. Each world provides an interpretation of the symbols in our language over this domain, in the standard manner.
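The counting definition above can be illustrated concretely. The following sketch is ours, not the paper's formalism: it restricts attention to a fixed propositional vocabulary (rather than first-order structures of growing domain size, so no limit is needed), and the function and variable names are invented for the example.

```python
from itertools import product
from fractions import Fraction

def degree_of_belief(variables, kb, query):
    """Random-worlds degree of belief over a finite propositional
    vocabulary: among the worlds satisfying kb, return the fraction
    that also satisfy query (the principle of indifference)."""
    worlds = [dict(zip(variables, vals))
              for vals in product([False, True], repeat=len(variables))]
    kb_worlds = [w for w in worlds if kb(w)]
    return Fraction(sum(1 for w in kb_worlds if query(w)), len(kb_worlds))

# Toy KB: "if the gun is loaded, Fred is not alive afterwards".
vars_ = ["loaded", "alive_after"]
kb = lambda w: (not w["loaded"]) or (not w["alive_after"])

print(degree_of_belief(vars_, kb, lambda w: w["loaded"]))       # 1/3
print(degree_of_belief(vars_, kb, lambda w: w["alive_after"]))  # 1/3
```

Of the four truth assignments, the KB rules out only the one where the gun is loaded and Fred survives; the three remaining worlds are counted as equally plausible.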
For the purposes of this paper, fluents are taken to be unary predicates over the set of states.³ Actions map situations to new situations via a Result function; hence, each world also provides, via the denotation of Result, a complete specification of the effect of an action on every situation. Each state in the world's domain can be viewed as a truth assignment to the fluents. If we have k fluents in the language, say P1, ..., Pk, we require that there be at most one state for each of the 2^k possible truth assignments to the fluents.⁴ We do this by adding the following formula to the KB: ∀w, w′ ((P1(w) ≡ P1(w′) ∧ ... ∧ Pk(w) ≡ Pk(w′)) ⇒ w = w′). Because the set of states is bounded, when we take the domain size to infinity (as is required by random worlds), it is the set of actions and the set of possible environments that grow unboundedly. As stated above, action effects are represented using a Result function that maps an action and a situation to a situation. In order to formally define, within first-order logic, a function whose range consists of pairs of domain elements, we actually define two functions, Result1 and Result2, that map actions and situations to states and environments respectively. We occasionally abuse notation and use Result directly in our formulas. Note that the mapping from an action and a situation to a situation is still a deterministic one. However, Result is not necessarily deterministic when we only look at states. Two situations can agree completely in terms of what we say about them (their state), and nevertheless an action may have different outcomes.

³We observe that we can easily extend our ontology to allow complex fluents (e.g., On(A,B) in the blocks world), and/or reified fluents.
⁴This restriction was also used by Baker [Bak91] in his solution to the frame problem. It does not postulate the existence of a state for all possible assignments of truth values, and hence allows a
correct treatment of ramifications. Baker then uses circumscription to ensure that there is exactly one state for each assignment of truth values consistent with the KB.

As promised, this new ontology allows us to express nondeterministic and probabilistic actions, as well as the deterministic actions of the standard situation calculus. For example, consider a simple variant of the Yale Shooting Problem (YSP), where we have only two fluents, Loaded and Alive, and three actions, Wait, Load, and Shoot. Each world will therefore have (at most) four states, corresponding to the four possible truth assignments to Loaded and Alive. We assume, for simplicity, that we have constants denoting these states: V_AL, V_A¬L, V_¬AL, V_¬A¬L. Each world will also have domain elements corresponding to the three named actions, and possibly to other (unnamed) actions. The remaining domain elements correspond to different possible environments. The fluents are unary predicates over the states, and Result1 is a function that takes a triple (an action, a state, and an environment) and returns a new state.⁵ In the KB we can specify different constraints on Result1. For example,

∀v (Loaded(v) ⇒ ||¬Alive(Result1(Shoot, v, e))||_e ≈ 0.9), (1)

asserts that the Shoot action has probabilistic effects; it says that 90% of shootings (in a state where the gun is loaded) result in a state in which Fred is dead. On the other hand,

∀v, e (Loaded(v) ⇒ ¬Alive(Result1(Shoot, v, e))), (2)

asserts that Shoot has the deterministic effect of killing Fred when executed in any state where the gun is loaded. We might not know what happens if the gun is not loaded: Fred might still die of the shock. In such cases, we can simply leave this unspecified. Later in the paper, we discuss the different ways in which our language allows us to specify the effects of actions, and the conclusions these entail.
In our framework, the combinatorial properties of random-worlds guarantee that this latter fact will hold in almost all worlds.
⁵Similarly, the Result2 function returns a new environment, but there is usually no need for the user to provide information about this function.

Counterfactuals

While our basic ontology seems natural, there are other possible representations. However, it turns out that the use of a Result function is crucial. Although the use of Result is quite standard in situation calculus, it is important to realize that its denotation in each world tells us the outcome of each action in all situations, including those situations that never actually occur. That is, in each world Result provides counterfactual information. This can best be understood using an example. Consider the YSP example, where for simplicity we ignore environments and consider only a single action, Shoot, which is always taken at the initial state. We know that Fred is alive at the initial state, but nothing about the state of the gun: it could be loaded or not. Assume that, rather than having a Result function, we choose to have each world simply denote a single run (history) for this experiment. In this new ontology, we could use a constant V0 denoting the initial state and another constant V1 denoting the second state; each of these will necessarily be equal to one of the four states described above. In order to assert that shooting a loaded gun kills Fred, we would state that Loaded(V0) ⇒ ¬Alive(V1). Furthermore, assume that after being shot the gun is no longer loaded. It is easy to see that there are essentially three possible worlds (up to renaming of states): if Loaded(V0) (so that V0 = V_AL), then necessarily V1 = V_¬A¬L, and if ¬Loaded(V0) then either V1 = V_A¬L or V1 = V_¬A¬L. The random-worlds method, used with this new ontology, would give a degree of belief of 1/3 to the gun being loaded at V0, simply because Shoot has more possible outcomes if the gun is unloaded.
Yet intuitively, since we know nothing about the initial status of the gun, the correct degree of belief for Loaded is 1/2. This is the answer we get by using the ontology of situation calculus with the Result function. In this case, the different worlds correspond to the different denotations of Result and V0. Assuming that no action can revive Fred once he dies, there are only two possible denotations for Result: Result(Shoot, V_A¬L) is either V_A¬L or V_¬A¬L, while Result(Shoot, V) = V_¬A¬L if V ≠ V_A¬L. Furthermore, V0 is either V_AL or V_A¬L. Hence, there are four possible worlds. In exactly two of these, we have that Loaded(V0) holds. The key idea here is that, because our language includes Result, each world must specify not only the outcome of shooting a loaded gun, but also the outcome of shooting had the gun been unloaded. Once this counterfactual information is taken into account, we get the answers we expect. We stress that the KB does not need to include any special information because of our use of counterfactuals. As is standard in the situation calculus, we put into the KB exactly what we know about the Result function (for example, that shooting a loaded gun necessarily kills Fred). The KB admits a set of satisfying worlds, and in each of these worlds Result will have some counterfactual behavior. The random-worlds method takes care of the rest by counting among these alternate behaviors. The example above and the results below show that random worlds works well with an ontology that has implicit counterfactual information (like the situation calculus and its Result function). On the other hand, with other ontologies (such as the language used above that simply records what actually happens and nothing more) the combinatorics lead to unintuitive answers. Hence, it might seem that counterfactual ontologies are simply a technical requirement of random worlds.
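The contrast between the two ontologies can be checked by brute-force enumeration. The sketch below is our own reconstruction of the example, under the assumptions stated in the text (shooting a loaded gun kills Fred and empties the gun; shooting an unloaded gun leaves it unloaded but may or may not kill Fred; no action revives him); the variable names are invented.

```python
from fractions import Fraction

# States as (alive, loaded) pairs; Fred starts alive, gun status unknown.
ALIVE_LOADED   = (True, True)
ALIVE_UNLOADED = (True, False)
DEAD_UNLOADED  = (False, False)

# Run-based ontology: a world is just the pair (V0, V1).
runs = []
for v0 in (ALIVE_LOADED, ALIVE_UNLOADED):   # Alive(V0) is known
    if v0[1]:                               # loaded: Fred dies, gun empties
        runs.append((v0, DEAD_UNLOADED))
    else:                                   # unloaded: gun stays empty,
        runs.append((v0, ALIVE_UNLOADED))   # and Fred may or may not
        runs.append((v0, DEAD_UNLOADED))    # survive the shock
p_run = Fraction(sum(1 for v0, _ in runs if v0[1]), len(runs))
print(len(runs), p_run)    # 3 worlds; belief in Loaded(V0) is 1/3

# Counterfactual ontology: a world fixes V0 *and* Result(Shoot, .) on
# every state.  Shooting any state other than (alive, unloaded) yields
# (dead, unloaded); shooting (alive, unloaded) may or may not kill Fred,
# so that is the single free choice in the Result function.
worlds = [(res_on_unloaded, v0)
          for res_on_unloaded in (ALIVE_UNLOADED, DEAD_UNLOADED)
          for v0 in (ALIVE_LOADED, ALIVE_UNLOADED)]
p_cf = Fraction(sum(1 for _, v0 in worlds if v0[1]), len(worlds))
print(len(worlds), p_cf)   # 4 worlds; belief in Loaded(V0) is 1/2
```

The run-based count is skewed because the unloaded case happens to admit more histories; fixing the whole (counterfactual) Result function makes the two initial states symmetric again.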
However, the issue of counterfactuals seems to arise over and over again in attempts to understand temporal and causal information. They have been used in both philosophy and statistics to give semantics to causal rules [Rub74]. In game theory [Sta94] the importance of counterfactuals (or strategies) has long been recognized. Baker's approach [Bak91] to the frame problem is, in fact, also based on the use of counterfactuals. We have already mentioned that random-worlds subsumes the principle of maximum entropy. It has been argued [Pea88] that maximum entropy (and hence random-worlds) cannot deal appropriately with causal information. In fact, our example above is closely related, in a technical sense, to the problematic examples described by Pearl. But once again, an appropriate representation of causal rules using counterfactuals solves the problem [Hun89]. In fact, counterfactuals have been used recently to provide a formulation of Bayesian networks based on deterministic functions [Pea93]. All these applications of counterfactuals turn out to be closely linked to our own, even though none consider the random-worlds method. The ontology of this paper is, in some sense, the convergence of these technically diverse, but philosophically linked, frameworks. As our results suggest, the generality of the random-worlds approach may allow us to draw these lines of research together, and so expose the common core.

Results

As a minimal requirement, we would like our approach to be compatible with standard deductive reasoning, whenever the latter is appropriate. As shown in [BGHK94], this desideratum is automatically satisfied by random worlds:

Proposition 1: If φ is a logical consequence of a knowledge base KB, then Pr_∞(φ|KB) = 1.

Hence, our approach supports all the conclusions that can be derived using ordinary situation calculus. However, as we now show, it can deal with much more. An important concept in reasoning about change is the idea of a state transition.
In our context, a state transition takes us from one situation to the next via the Result function. Since we can only observe the state component of a situation, we are particularly interested in the probability that an action takes us from a situation (V, ·) to another (V′, ·) (where the specific identity of the environment is irrelevant). We are in fact interested in the transition probability Pr_∞(Result1(A, V, E) = V′ | KB). As we show later on in this section, these transition probabilities can often be used to compute the cumulative effects of sequences of actions. We can use the properties of random worlds to derive transition probabilities from our action descriptions. Consider a particular state V and action A. There are many ways in which we can express knowledge relevant to the associated transition probabilities. One general scheme uses assertions of the form

∀e (φ(Result1(A, V, e))), (3)

where φ is a Boolean combination of fluents. Assertion (3) says that φ is true of all states that can result from taking A at state V. In general, when KB entails such a statement, then Proposition 1 can be used to show that our degree of belief in φ(Result1(A, V, E)) is 1. For example, if KB consists of (2) only, then Pr_∞(Alive(Result1(Shoot, V_AL, E)) | KB) = 0, as expected (here, φ is ¬Alive). Assertion (2) describes a deterministic effect. However, even for nonprobabilistic statements such as (3), our approach can go far beyond deductive reasoning. For instance, we might not always know the full outcome of every action in every state. A Load action might result in, say, between one and six bullets being placed in the gun. If we have no other information, our approach would assign a degree of belief of 1/6 to each of the possibilities.
In general, we can formalize and prove the following result (where, as in our remaining results, E is a constant over environments not appearing anywhere in KB):

Proposition 2: Suppose KB contains (3), but no additional information about the effects of A in V. Then Pr_∞(Result1(A, V, E) = V′ | KB) = 1/m, where m is the number of states satisfying φ, and V′ is one of these states.

We note that we can prove a similar result in the case where our ignorance is due to incomplete information about the initial state (as illustrated in the previous section). As we discussed, our language can also express information about probabilistic actions (where we have statistical knowledge about the action's outcomes). Our theory also derives many of the conclusions we would expect. For example, if KB contains (1), then we would conclude Pr_∞(¬Alive(Result1(Shoot, V, E)) | KB ∧ Loaded(V)) = 0.9. In general, the direct inference property exhibited by random worlds allows us to prove the following:

Proposition 3: If KB entails ||φ(Result1(A, V, e))||_e ≈ α, then Pr_∞(φ(Result1(A, V, E)) | KB) = α.

Nondeterminism due to ignorance on the one hand, and probabilistic actions on the other, are similar in that they both lead to intermediate degrees of belief between 0 and 1. Nevertheless, there is an important conceptual difference between the two cases, and we consider it a significant feature of our approach that it can capture and reason about both. Given our statistical interpretation of defaults, the ability to make statistical statements about the outcomes of actions also allows us to express a default assumption of determinism. For instance, ∀v (Loaded(v) ⇒ ||¬Alive(Result1(Shoot, v, e))||_e ≈ 1) states that shooting a loaded gun almost surely kills Fred. Even though a default resembles a deterministic rule in many ways, the distinction can be important.
We would prefer to explain an unusual occurrence by finding a violated default, rather than by postulating the invalidity of a law of nature (which would result in inconsistent beliefs). For example, if, after the shooting, we observe Fred walking away, then our approach would conclude that Fred survived the shooting, rather than that he is a zombie. This distinction between certain outcomes and default outcomes is also easily made in our framework. In general, we may have many pieces of information describing the behavior of a given action at a given state. For example, consider the YSP with an additional fluent Noisy, where our KB contains (1) and ∀v (Loaded(v) ⇒ ||Noisy(Result1(Shoot, v, e))||_e ≈ 0.8). Given all this information, we would like to compute the probability that shooting the gun in a state V where Alive(V) ∧ Loaded(V) results in the state V_ALN (where N stands for Noisy). Unless we know otherwise, it seems intuitive to assume that Fred's health in the resulting state should be independent of the noise produced; that is, the answer should be 0.1 × 0.8 = 0.08. This is, in fact, the answer produced by our approach. This is an instance of a general result, asserting that transition probabilities can often be computed using maximum entropy. While we do not have the space to fully describe the general result, we note that it entails a default assumption of independence. That is, unless we have reason to believe that Alive and Noisy are correlated, our approach will assume that they are not. We stress that this is only a default. We might know that Alive and Noisy are negatively correlated (perhaps because lack of noise is sometimes caused by a misfiring gun). In this case we can easily add to the KB, for example, that ∀v (Loaded(v) ⇒ ||Noisy(Result1(Shoot, v, e)) ∧ Alive(Result1(Shoot, v, e))||_e ≈ 0.05). The resulting KB is not inconsistent; the default assumption of independence is dropped automatically.
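The arithmetic behind the default independence assumption can be made explicit. This is a minimal sketch of ours, not the paper's general maximum-entropy result: with only the two marginal constraints and nothing linking the fluents, the maximum-entropy joint over the four outcomes is simply the product of the marginals.

```python
from fractions import Fraction

# Marginal transition statistics from the KB, for Shoot in a loaded state:
p_dead  = Fraction(9, 10)   # ||~Alive(Result1(Shoot, v, e))||_e ~ 0.9
p_noisy = Fraction(8, 10)   # ||Noisy(Result1(Shoot, v, e))||_e  ~ 0.8

# Default independence: absent any constraint correlating the fluents,
# the maximum-entropy joint over the four (dead, noisy) outcomes is the
# product of the marginals.
joint = {(dead, noisy): (p_dead if dead else 1 - p_dead) *
                        (p_noisy if noisy else 1 - p_noisy)
         for dead in (True, False) for noisy in (True, False)}

print(joint[(False, True)])   # alive and noisy: 1/10 * 8/10 = 2/25 = 0.08
```

Adding a joint constraint such as the 0.05 assertion above would replace this product with whatever distribution the KB itself dictates; the point of the text is that the product is only a default.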
We now turn to the problem of reasoning about the effects of a sequence of actions. The Markov assumption, which is built into most systems that reason about probabilistic actions [Han90, DK89], asserts that the effects of an action depend only on the state in which it is taken. As the following result demonstrates, our approach derives this principle from the basic semantics. We note that the Markov assumption is only a default assumption in our framework; it fails if the KB contains assertions implying otherwise. Formally, it requires that our information about Result be expressed solely in terms of transition proportions, i.e., proportion expressions of the form ||φ(Result1(A, V, e))||_e, where φ is a Boolean combination of fluents. Hence, if our KB contains information about ||φ(Result1(A1, Result(A2, V, e)))||_e, the Markov property might no longer hold.

Proposition 4: Suppose that the only occurrence of Result in KB is in the context of transition proportions, and that E and E′ do not appear in KB. Then Pr_∞(Result(A1, V, E) = (V′, E′) ∧ Result1(A2, V′, E′) = V″ | KB) = Pr_∞(Result1(A1, V, E) = V′ | KB) × Pr_∞(Result1(A2, V′, E′) = V″ | KB).

Perhaps the best single illustration of the power of our approach in the context of the situation calculus is its ability to deal simply and naturally with the frame problem. Many people have an intuition about the frame problem which is, roughly speaking, that "fluents tend not to change value very often". This suggests that if we could formalize this general principle (that change is unusual), it could serve as a substitute for the many explicit frame axioms that would otherwise be needed. However, as shown in [HM87], the most obvious formulations of this idea in standard nonmonotonic logics often fail. Suppose we use a formalism that, in some way, tries to minimize the number of changes in the world. In the YSP, after waiting and then shooting we expect there to be some change: we expect Fred to die.
But there is another model which seems to have the "same amount" of change: the gun miraculously becomes unloaded as we wait, and thus Fred does not die. This seems to be the wrong model, but it turns out to be difficult to capture this intuition formally. Subsequent to Hanks and McDermott's paper, there was much research in this area before adequate solutions were found. How does our approach fare? It turns out that we can use our statistical language to directly translate the intuition we have about frame axioms, and the result gives us exactly the answers we expect in such cases as the YSP. We formalize the statement of minimal change for a fluent P by asserting that it changes in very few circumstances; that is, any action applied in any situation is unlikely to change P: ||P(Result1(a, v, e)) ≢ P(v)||_{a,v,e} ≈ 0. Of course, the statistical chance of such frame violations cannot be exactly zero, because some actions do cause change in the world. However, the "approximately equals" connective allows for this. Roughly speaking, the above axiom, an instance of which can be added for each fluent P for which we think the frame assumption applies, will cause us to have degree of belief 0 in a fluent changing value unless we have explicit knowledge to the contrary.⁶ There is one minor subtlety. Recall that in the random-worlds approach, we consider the limit as the domain tends to infinite size. As we observed, since the number of states is bounded, this means that the number of environments and actions together must grow without bound. This does not necessarily mean that the number of actions grows without bound. However, in the presence of the frame axioms (as given above), we need this stronger assumption. This need is quite easy to explain. If the only action is Shoot, then half the triples (a, v, e) (those where Loaded is true in v) would lead to a change in the fluent Alive.
In this framework, it would be inconsistent to simultaneously suppose that there is only one way of changing the world (i.e., Shoot) and also that every fluent (and in particular, Alive) hardly ever changes. Making the quite reasonable assumption that there are many other ways of effecting change in the world (i.e., many other actions in the domain), even though we may say nothing about them, removes the contradiction. Given this, if we add frame axioms as given above we get precisely the results we want. If we try to predict forward from one state to the next, we conclude (with degree of belief 1) that nothing changes except those fluents that the action is known to affect. If we consider a sequence of actions, we can predict the outcome by applying this rule for the first action with respect to the initial state, then applying the second action to the state just obtained, and so on. This is essentially a consequence of Proposition 4, combined with the properties of our frame axiom. (Of course, it follows from the proposition that to compute Pr_∞(Result(A2, Result(A1, V, E)) = V'' | KB), we just sum over all intermediate states; this result generalizes to arbitrary sequences of actions in the obvious way.) In the YSP, for example, the Load action will cause the gun to be loaded, but will change nothing else. Wait will then leave the state completely unchanged. Finally, because the gun will still be loaded, performing Shoot will kill Fred as expected. The idea of a formal theory being faithful to this intuitive semantics (essentially, that in which we consider actions one at a time, assuming minimal change at each step) has recently been formalized by Kartha [Kar93].

6 Note that having degree of belief 0 does not mean that we believe something to be impossible, but only extremely unlikely. Hence, this representation does allow for unexpected change, a useful feature in explanation problems.

Causal Reasoning 227
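The projection scheme just described (apply each action's known effects, let every other fluent persist, and sum over intermediate states when chaining actions, as licensed by Proposition 4) can be sketched in a few lines. This is our own toy rendering, not the paper's random-worlds formalism: the effect tables and the residual frame-violation probability EPS are illustrative assumptions.

```python
# Toy forward projection with a frame default: each action changes only
# the fluents it is known to affect; every other fluent persists. A tiny
# probability EPS is reserved for a "frame violation" in which the state
# persists despite the action, echoing the ≈ 0 frame axiom above.
EPS = 1e-6

def effects(action, state):
    """Known effects of an action in a state (assumed effect tables)."""
    if action == "Load":
        return {"Loaded": True}
    if action == "Shoot" and state["Loaded"]:
        return {"Alive": False, "Loaded": False}
    return {}  # e.g., Wait has no known effects

def freeze(state):
    return tuple(sorted(state.items()))

def apply_action(state, action):
    """One-step transition distribution {next_state: probability}."""
    nxt = {**state, **effects(action, state)}  # frame default: persist
    if nxt == state:
        return {freeze(state): 1.0}
    return {freeze(nxt): 1.0 - EPS, freeze(state): EPS}

def project(dist, actions):
    """Chain actions, summing over intermediate states (Markov property)."""
    for a in actions:
        out = {}
        for s, p in dist.items():
            for s2, q in apply_action(dict(s), a).items():
                out[s2] = out.get(s2, 0.0) + p * q
        dist = out
    return dist

start = freeze({"Alive": True, "Loaded": False})
final = project({start: 1.0}, ["Load", "Wait", "Shoot"])
p_dead = sum(p for s, p in final.items() if not dict(s)["Alive"])
```

As expected, Wait leaves the distribution unchanged and the gun stays loaded, so p_dead = (1 - EPS)^2; that is, the degree of belief that Fred dies after Load; Wait; Shoot is close to 1.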
Roughly speaking, he showed that a simple procedural language A [GL92] can be embedded into three approaches for dealing with the frame problem [Bak91, Ped89, Rei91], so that the answers prescribed by A's semantics (which are the intuitively "right" answers) are also obtained by these formalisms. The following result shows that we also pass Kartha's test. Specifically:

Proposition 5: There is a sound and complete embedding of A into our language in which the frame axioms appear in the above form.

Thus, the random-worlds approach succeeds in solving the frame problem as well as the above approaches, at least in this respect. However, as we mentioned above, our approach is significantly more expressive, in that it can deal with quantitative information in a way that none of these other approaches can. Furthermore, our approach does not have difficulty with state constraints (i.e., ramifications), a problem encountered by a number of other solutions to the frame problem (e.g., those of Reiter and Pednault).

Why does the random-worlds method work so easily? There are two reasons. First, the ability to say that proportions are very small lets us express, in a natural way within our language, the belief that frame violations are rare. Alternative approaches to the problem tend to use powerful minimization techniques, such as circumscription, to encode this. But much more important is our use of an ontology that includes counterfactuals. This turns out to be crucial in avoiding the YSP. Even if the gun does in fact become unloaded somehow, we do not escape the fact that shooting with a loaded gun would have killed Fred. Baker and Ginsberg's [BG89] solution to the frame problem (based on circumscription) relies on a similar notion of counterfactual situations.
But while the solutions are related, they are not identical: for instance, we do not suffer from the problem concerning extraneous fluents that Baker [Bak89] mentions.7

7 We also note that Baker and Ginsberg's solution was constructed especially to deal with the problem of minimizing frame violations. Our solution to the frame problem and the YSP arises naturally and almost directly from our general approach.

Some solutions to the YSP work by augmenting a principle of minimal change with a requirement that we should prefer models in which change occurs as late as possible (e.g., [Kau86, Sho88]). This solves the original YSP because the model in which Fred dies violates the frame axiom (that Fred should remain alive) later than the model in which the gun miraculously becomes unloaded. However, it has been observed that such theories fail on certain explanation problems, such as Kautz's [Kau86] stolen car example. Our approach deals well with explanation problems. In Kautz's example, we park our car in the morning only to find when we return in the evening that it has been stolen. Theories that delay change lead to the conclusion that the car was stolen just prior to our return. A more reasonable answer is to be indifferent about exactly when the car was stolen. Our approach assigns equal probability to the car being stolen over each time period of our absence. That is, if KB axiomatizes the domain in the natural way, and the only action that makes a car disappear from the parking lot is the StealCar action, then we would conclude that:

Pr_∞(Ai = StealCar | KB ∧ ¬Parked(Result1(An, Result1(. . . Result1(A1, Result1(ParkCar, V, E)) . . .)))) = 1/n.

Conclusion

As shown in [BGHK94], the random-worlds approach provides a general framework for probabilistic and default first-order reasoning. The key to adapting random worlds to the domain of causal and temporal reasoning lies in the use of counterfactual ontologies to represent causal information.
Our results show that the combination of random worlds and counterfactuals can be used to address many of the important issues in this domain. The ease with which the general random-worlds technique can be applied to yet another important domain, and its success in dealing with the core problems encountered by other approaches, shows its versatility and broad applicability as a general framework for inductive reasoning.

There is, however, one important issue which this approach fails to handle appropriately: the qualification problem. The reasons for this failure are subtle, and cannot be explained within the space limitations. However, as we discuss in the full paper, the problem is closely related to the fact that random worlds does not learn statistics from samples. This aspect of random worlds was discussed in [BGHK92], where we also presented an alternative method for computing degrees of belief, the random-propensities approach, that does support learning. In future work, we hope to apply this alternative approach to the ontology described in this framework. We have reason to hope that this approach will maintain the desirable properties described in this framework, and will also deal with the qualification problem.

References

[Bac90] F. Bacchus. Representing and Reasoning with Probabilistic Knowledge. MIT Press, 1990.

[Bak89] A. Baker. A simple solution to the Yale shooting problem. In Proc. First International Conference on Principles of Knowledge Representation and Reasoning (KR '89), pages 11-20. Morgan Kaufmann, 1989.

[Bak91] A. Baker. Nonmonotonic reasoning in the framework of the situation calculus. Artificial Intelligence, 49:5-23, 1991.

[BG89] A. Baker and M. Ginsberg. Temporal projection and explanation. In Proc. Eleventh International Joint Conference on Artificial Intelligence (IJCAI '89), pages 906-911, 1989.

[BGHK92] F. Bacchus, A. J. Grove, J. Y.
Halpern, and D. Koller. From statistics to belief. In Proc. National Conference on Artificial Intelligence (AAAI '92), pages 602-608, 1992.

[BGHK94] F. Bacchus, A. J. Grove, J. Y. Halpern, and D. Koller. Generating degrees of belief from statistical information. Technical report, 1994. Preliminary version in Proc. Thirteenth International Joint Conference on Artificial Intelligence (IJCAI '93), pages 906-911, 1993.

[DK89] T. Dean and K. Kanazawa. Persistence and probabilistic projection. IEEE Trans. on Systems, Man and Cybernetics, 19(2):574-585, 1989.

[GHK92] A. J. Grove, J. Y. Halpern, and D. Koller. Random worlds and maximum entropy. In Proc. 7th IEEE Symp. on Logic in Computer Science, pages 22-33, 1992.

[GL92] M. Gelfond and V. Lifschitz. Representing actions in extended logic programming. In Logic Programming: Proc. Tenth Conference, pages 559-573, 1992.

[GMP90] M. Goldszmidt, P. Morris, and J. Pearl. A maximum entropy approach to nonmonotonic reasoning. In Proc. National Conference on Artificial Intelligence (AAAI '90), pages 646-652, 1990.

[Han90] S. J. Hanks. Projecting Plans for Uncertain Worlds. PhD thesis, Yale University, 1990.

[HM87] S. Hanks and D. McDermott. Nonmonotonic logic and temporal projection. Artificial Intelligence, 33(3):379-412, 1987.

[Hun89] D. Hunter. Causality and maximum entropy updating. International Journal of Approximate Reasoning, 3(1):379-406, 1989.

[Jay78] E. T. Jaynes. Where do we stand on maximum entropy? In The Maximum Entropy Formalism, pages 15-118. MIT Press, 1978.

[Kar93] G. Kartha. Soundness and completeness theorems for three formalizations of action. In Proc. Thirteenth International Joint Conference on Artificial Intelligence (IJCAI '93), pages 724-729, 1993.

[Kau86] H. Kautz. A logic of persistence. In Proc. National Conference on Artificial Intelligence (AAAI '86), pages 401-405, 1986.

[Kyb74] H. E. Kyburg, Jr. The Logical Foundations of Statistical Inference. Reidel, 1974.

[Lif87] V. Lifschitz. Formal theories of action: Preliminary report. In The Frame Problem in Artificial Intelligence, pages 121-127. Morgan Kaufmann, 1987.

[MH69] J. M. McCarthy and P. J. Hayes. Some philosophical problems from the standpoint of artificial intelligence. In Machine Intelligence 4, pages 463-502. Edinburgh University Press, 1969.

[Pea88] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.

[Pea89] J. Pearl. Probabilistic semantics for nonmonotonic reasoning: A survey. In Proc. First International Conference on Principles of Knowledge Representation and Reasoning (KR '89), pages 505-516, 1989.

[Pea93] J. Pearl. Aspects of graphical models connected with causality. In 49th Session of the International Statistics Institute, 1993.

[Ped89] E. Pednault. ADL: Exploring the middle ground between STRIPS and the situation calculus. In Proc. First International Conference on Principles of Knowledge Representation and Reasoning (KR '89), pages 324-332. Morgan Kaufmann, 1989.

[Rei91] R. Reiter. The frame problem in the situation calculus: A simple solution (sometimes) and a completeness result for goal regression. In Artificial Intelligence and Mathematical Theory of Computation, pages 359-380. Academic Press, 1991.

[Rub74] D. B. Rubin. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66:688-701, 1974.

[Sho88] Y. Shoham. Chronological ignorance: experiments in nonmonotonic temporal reasoning. Artificial Intelligence, 36:271-331, 1988.

[Sta94] R. C. Stalnaker. Knowledge, belief and counterfactual reasoning in games. In Proc. Second Castiglioncello Conference. Cambridge University Press, 1994. To appear.

[Ten91] J. D. Tenenberg. Abandoning the completeness assumptions: A statistical approach to the frame problem. International Journal of Expert Systems, 3(4):383-408, 1991.
Probabilistic evaluation of counterfactual queries

Alexander Balke and Judea Pearl
Cognitive Systems Laboratory
Computer Science Department
University of California, Los Angeles, CA 90024
<balke@cs.ucla.edu> and <judia@cs.ucla.edu>

Abstract

Evaluation of counterfactual queries (e.g., "If A were true, would C have been true?") is important to fault diagnosis, planning, and determination of liability. We present a formalism that uses probabilistic causal networks to evaluate one's belief that the counterfactual consequent, C, would have been true if the antecedent, A, were true. The antecedent of the query is interpreted as an external action that forces the proposition A to be true, which is consistent with Lewis' Miraculous Analysis. This formalism offers a concrete embodiment of the "closest world" approach which (1) properly reflects common understanding of causal influences, (2) deals with the uncertainties inherent in the world, and (3) is amenable to machine representation.

Introduction

A counterfactual sentence has the form

If A were true, then C would have been true

where A, the counterfactual antecedent, specifies an event that is contrary to one's real-world observations, and C, the counterfactual consequent, specifies a result that is expected to hold in the alternative world where the antecedent is true. A typical instance is "If Oswald were not to have shot Kennedy, then Kennedy would still be alive", which presumes the factual knowledge of Oswald's shooting Kennedy, contrary to the antecedent of the sentence.

The majority of the philosophers who have examined the semantics of counterfactual sentences (Goodman 1983; Harper, Stalnaker, & Pearce 1981; Nute 1980; Meyer & van der Hoek 1993) have resorted to some form of logic based on worlds that are "closest" to the real world yet consistent with the counterfactual's antecedent.
Ginsberg (Ginsberg 1986), following a similar strategy, suggested that the logic of counterfactuals could be applied to problems in planning and diagnosis in Artificial Intelligence. The few other papers in AI that have focussed on counterfactual sentences (e.g., (Jackson 1989; Pereira, Aparicio, & Alferes 1991; Boutilier 1992)) have mostly adhered to logics based on the "closest world" approach.

In the real world, we seldom have adequate information for verifying the truth of an indicative sentence, much less the truth of a counterfactual sentence. Except for the small set of relationships between variables which can be modeled by physical laws, most of the relationships in one's knowledge base are nondeterministic. Therefore, it is more practical to ask not for the truth or falsity of a counterfactual, but for one's degree of belief in the counterfactual consequent given the antecedent. To account for such uncertainties, (Lewis 1976) has generalized the notion of "closest world" using the device of "imaging"; namely, the closest worlds are assigned probability scores, and these scores are combined to compute the probability of the consequent.

The drawback of the "closest world" approach is that it leaves the precise specification of the closeness measure almost unconstrained. More specifically, it does not tell us how to encode distances in a way that would (1) conform to our perception of causal influences and (2) lend itself to economical machine representation. This paper can be viewed as a concrete explication of the closest world approach, one that satisfies the two requirements above.

The target of our investigation is counterfactual queries of the form: "If A were true, then what is the probability that C would have been true, given that we know B?" The proposition B stands for the actual observations made in the real world (e.g., that Oswald did shoot Kennedy and that Kennedy is dead), which we make explicit to facilitate the analysis.
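Lewis-style imaging can be illustrated with a small computation. Everything below (the worlds, the distance table, and the probabilities) is our own illustrative construction, not material from the paper: to evaluate "if A then C", each world shifts its probability mass to its closest A-world, and P(C) is read off the shifted distribution.

```python
# Toy "imaging" computation (illustrative worlds, numbers, and distances).
worlds = {            # name: (A holds, C holds, prior probability)
    "w1": (True,  True,  0.2),
    "w2": (True,  False, 0.1),
    "w3": (False, True,  0.3),
    "w4": (False, False, 0.4),
}
# assumed distances from each non-A-world to each A-world
dist = {("w3", "w1"): 1, ("w3", "w2"): 2,
        ("w4", "w1"): 2, ("w4", "w2"): 1}

def closest_A_world(w):
    if worlds[w][0]:
        return w  # an A-world is its own closest A-world
    a_worlds = [v for v in worlds if worlds[v][0]]
    return min(a_worlds, key=lambda v: dist[(w, v)])

def image_prob_C():
    """Shift each world's mass to its closest A-world; sum over C-worlds."""
    shifted = {}
    for w, (_, _, p) in worlds.items():
        target = closest_A_world(w)
        shifted[target] = shifted.get(target, 0.0) + p
    return sum(p for v, p in shifted.items() if worlds[v][1])

p_c = image_prob_C()  # ≈ 0.5 for these illustrative numbers
```

As the paper notes, nothing here constrains the distance table itself; that unconstrained freedom is exactly the drawback the authors set out to remedy.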
Counterfactuals are intertwined with notions of causality: We do not typically express counterfactual sentences without assuming a causal relationship between the counterfactual antecedent and the counterfactual consequent. For example, we can safely state "If the sprinkler were on, the grass would be wet", but the contrapositive form of the same sentence in counterfactual form, "If the grass were dry, then the sprinkler would not be on", strikes us as strange, because we do not think the state of the grass has causal influence on the state of the sprinkler. Likewise, we do not state "All blocks on this table are green; hence, had this white block been on the table, it would have been green". In fact, we could say that people's use of counterfactual statements is aimed precisely at conveying generic causal information, uncontaminated by specific, transitory observations, about the real world. Observed facts often do reflect strange combinations of rare eventualities (e.g., all blocks being green) that have nothing to do with general traits of influence and behavior. The counterfactual sentence, however, emphasizes the law-like, necessary component of the relation considered. It is for this reason, we speculate, that we find such frequent use of counterfactuals in ordinary discourse.

The importance of equipping machines with the capability to answer counterfactual queries lies precisely in this causal reading. By making a counterfactual query, the user intends to extract the generic, necessary connection between the antecedent and consequent, regardless of the contingent factual information available at that moment.

Because of the tight connection between counterfactuals and causal influences, any algorithm for computing counterfactual queries must rely heavily on causal knowledge of the domain.

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.
This leads naturally to the use of probabilistic causal networks, since these networks combine causal and probabilistic knowledge and permit reasoning from causes to effects as well as, conversely, from effects to causes.

To emphasize the causal character of counterfactuals, we will adopt the interpretation in (Pearl 1993a), according to which a counterfactual sentence "If it were A, then B would have been" states that B would prevail if A were forced to be true by some unspecified action that is exogenous to the other relationships considered in the analysis. This action-based interpretation does not permit inferences from the counterfactual antecedent towards events that lie in its past. For example, the action-based interpretation would ratify the counterfactual

If Kennedy were alive today, then the country would have been in a better shape

but not the counterfactual

If Kennedy were alive today, then Oswald would have been alive as well.

The former is admitted because the causal influence of Kennedy on the country is presumed to remain valid even if Kennedy became alive by an act of God. The second sentence is disallowed because Kennedy being alive is not perceived as having causal influence on Oswald being alive. The information intended in the second sentence is better expressed in an indicative mood: If Kennedy was alive today then he could not have been killed in Dallas; hence, Jack Ruby would not have had a reason to kill Oswald, and Oswald would have been alive today.

Our interpretation of counterfactual antecedents, which is similar to Lewis' (Lewis 1979) Miraculous Analysis, contrasts with interpretations that require that the counterfactual antecedent be consistent with the world in which the analysis occurs. The set of closest worlds delineated by the action-based interpretation contains all those which coincide with the factual world except on possible consequences of the action taken.
The probabilities assigned to these worlds will be determined by the relative likelihood of those consequences as encoded by the causal network.

We will show that causal theories specified in functional form (as in (Pearl & Verma 1991; Druzdzel & Simon 1993; Poole 1993)) are sufficient for evaluating counterfactual queries, whereas the causal information embedded in Bayesian networks is not sufficient for the task. Every Bayes network can be represented by several functional specifications, each yielding different evaluations of a counterfactual. The problem is that deciding what factual information deserves undoing (by the antecedent of the query) requires a model of temporal persistence, and, as noted in (Pearl 1993c), such a model is not part of static Bayesian networks. Functional specification, however, implicitly contains the temporal persistence information needed.

The next section introduces some useful notation for concisely expressing counterfactual sentences/queries. We then present an example demonstrating the plausibility of the external-action interpretation adopted in this paper. We then demonstrate that Bayesian networks are insufficient for uniquely evaluating counterfactual queries whereas the functional model is sufficient. A counterfactual query algorithm is then presented, followed by a re-examination of the earlier example with a quantitative analysis using this algorithm. The final section contains concluding remarks.

Notation

Let the set of variables describing the world be designated by X = {X1, X2, ..., Xn}. As part of the complete specification of a counterfactual query, there are real-world observations that make up the background context. These observed values will be represented in the standard form x1, x2, ..., xn. In addition, we must represent the value of the variables in the counterfactual world.
To distinguish between xi and the value of Xi in the counterfactual world, we will denote the latter with an asterisk; thus, the value of Xi in the counterfactual world will be represented by xi*. We will also need a notation to distinguish between events that might be true in the counterfactual world and those referenced explicitly in the counterfactual antecedent. The latter are interpreted as being forced to the counterfactual value by an external action, which will be denoted by a hat (e.g., x̂). Thus, a typical counterfactual query will have the form "What is P(c* | â*, a, b)?", to be read as "Given that we have observed A = a and B = b in the real world, if A were â*, then what is the probability that C would have been c*?"

Party example

To illustrate the external-force interpretation of counterfactuals, consider the following interpersonal behaviors of Ann, Bob, and Carl:

o Ann sometimes goes to parties.
o Bob likes Ann very much but is not into the party scene. Hence, save for rare circumstances, Bob is at the party if and only if Ann is there.
o Carl tries to avoid contact with Ann since they broke up last month, but he really likes parties. Thus, save for rare occasions, Carl is at the party if and only if Ann is not at the party.
o Bob and Carl truly hate each other and almost always scuffle when they meet.

This situation may be represented by the diamond structure in Figure 1 (A: Ann at party; B: Bob at party; C: Carl at party; S: scuffle).

Figure 1: Causal structure reflecting the influence that Ann's attendance has on Bob and Carl's attendance, and the influence that Bob and Carl's attendance has on their scuffling.

The four variables A, B, C, and S have the following domains:

a ∈ {a0 ≡ Ann is not at the party, a1 ≡ Ann is at the party}
b ∈ {b0 ≡ Bob is not at the party, b1 ≡ Bob is at the party}
c ∈ {c0 ≡ Carl is not at the party, c1 ≡ Carl is at the party}
s ∈ {s0 ≡ no scuffle between Bob and Carl, s1 ≡ scuffle between Bob and Carl}

Now consider the following discussion between two friends (Laura and Scott) who did not go to the party but were called by Bob from his home (b = b0):

Laura: Ann must not be at the party, or Bob would be there instead of at home.
Scott: That must mean that Carl is at the party!
Laura: If Bob were at the party, then Bob and Carl would surely scuffle.
Scott: No. If Bob was there, then Carl would not be there, because Ann would have been at the party.
Laura: True. But if Bob were at the party even though Ann was not, then Bob and Carl would be scuffling.
Scott: I agree. It's good that Ann would not have been there to see it.

In the fourth sentence, Scott tries to explain away Laura's conclusion by claiming that Bob's presence would be evidence that Ann was at the party, which would imply that Carl was not at the party. Scott, though, analyzes Laura's counterfactual statement as an indicative sentence by imagining that she had observed Bob's presence at the party; this allows him to use the observation for abductive reasoning. But Laura's subjunctive (counterfactual) statement should be interpreted as leaving everything in the past as it was (including conclusions obtained from abductive reasoning from real observations) while forcing variables to their counterfactual values. This is the gist of her last statement.

This example demonstrates the plausibility of interpreting the counterfactual statement in terms of an external force causing Bob to be at the party, regardless of all other prior circumstances. The only variables that we would expect to be impacted by the counterfactual assumption would be the descendants of the counterfactual variable; in other words, the counterfactual value of Bob's attendance does not change the belief in Ann's attendance from the belief prompted by the real-world observation.

Probabilistic vs.
functional specification

In this section we will demonstrate that functionally modeled causal theories (Pearl & Verma 1991) are necessary for uniquely evaluating counterfactual queries, while the conditional probabilities used in the standard specification of Bayesian networks are insufficient for obtaining unique solutions.

Reconsider the party example limited to the two variables A and B, representing Ann and Bob's attendance, respectively. Assume that previous behavior shows P(b1|a1) = 0.9 and P(b0|a0) = 0.9. We observe that Bob and Ann are absent from the party and we wonder whether Bob would be there if Ann were there, P(b1* | â1*, a0, b0). The answer depends on the mechanism that accounts for the 10% exception in Bob's behavior. If the reason Bob occasionally misses parties (when Ann goes) is that he is unable to attend (e.g., being sick or having to finish a paper for AAAI), then the answer to our query would be 90%. However, if the only reason for Bob's occasional absence (when Ann goes) is that he becomes angry with Ann (in which case he does exactly the opposite of what she does), then the answer to our query is 100%, because Ann and Bob's current absence from the party proves that Bob is not angry. Thus, we see that the information contained in the conditional probabilities on the observed variables is insufficient for answering counterfactual queries uniquely; some information about the mechanisms responsible for these probabilities is needed as well.

The functional specification, which provides this information, models the influence of A on B by a deterministic function

b = Fb(a, εb),

where εb stands for all unknown factors that may influence B and the prior probability distribution P(εb) quantifies the likelihood of such factors. For example, whether Bob has been grounded by his parents and whether Bob is angry at Ann could make up two possible components of εb.
Given a specific value for εb, B becomes a deterministic function of A; hence, each value in εb's domain specifies a response function that maps each value of A to some value in B's domain. In general, the domain for εb could contain many components, but it can always be replaced by an equivalent variable that is minimal, by partitioning the domain into equivalence regions, each corresponding to a single response function (Pearl 1993b). Formally, these equivalence classes can be characterized as a function rb : dom(εb) → N, as follows:

rb(εb) = 0 if Fb(a0, εb) = b0 and Fb(a1, εb) = b0
rb(εb) = 1 if Fb(a0, εb) = b0 and Fb(a1, εb) = b1
rb(εb) = 2 if Fb(a0, εb) = b1 and Fb(a1, εb) = b0
rb(εb) = 3 if Fb(a0, εb) = b1 and Fb(a1, εb) = b1

Obviously, rb can be regarded as a random variable that takes on as many values as there are functions between A and B. We will refer to this domain-minimal variable as a response-function variable. rb is closely related to the potential response variables in Rubin's model of counterfactuals (Rubin 1974), which was introduced to facilitate causal inference in statistical analysis (Balke & Pearl 1993).

For this example, the response-function variable for B has a four-valued domain rb ∈ {0, 1, 2, 3} with the following functional specification:

b = fb(a, rb) = h_{b,rb}(a)    (1)

where

h_{b,0}(a) = b0    (2)
h_{b,1}(a) = b0 if a = a0; b1 if a = a1    (3)
h_{b,2}(a) = b1 if a = a0; b0 if a = a1    (4)
h_{b,3}(a) = b1    (5)

specify the mappings of the individual response functions. The prior probability on these response functions, P(rb), in conjunction with fb(a, rb), fully parameterizes the model.

In practice, specifying a functional model is not as daunting as one might think from the example above. In fact, it could be argued that the subjective judgments needed for specifying Bayesian networks (i.e., judgments about conditional probabilities) are generated mentally on the basis of a stored model of functional relationships.
For example, in the noisy-OR mechanism, which is often used to model causal interactions, the conditional probabilities are derivatives of a functional model involving AND/OR gates, corrupted by independent binary disturbances. This model is used, in fact, to simplify the specification of conditional probabilities in Bayesian networks (Pearl 1988).

Given P(rb), we can uniquely evaluate the counterfactual query "What is P(b1* | â1*, a0, b0)?" (i.e., "Given A = a0 and B = b0, if A were a1, then what is the probability that B would have been b1?"). The action-based interpretation of counterfactual antecedents implies that the disturbance εb, and hence the response function rb, is unaffected by the actions that force the counterfactual values1; therefore, what we learn about the response function from the observed evidence is applicable to the evaluation of belief in the counterfactual consequent. If we observe (a0, b0), then we are certain that rb ∈ {0, 1}, an event having prior probability P(rb=0) + P(rb=1). Hence, this evidence leads to an updated posterior probability for rb (writing the distribution of rb as the tuple (P(rb=0), P(rb=1), P(rb=2), P(rb=3))):

P'(rb) = P(rb | a0, b0) = ( P(rb=0) / [P(rb=0) + P(rb=1)], P(rb=1) / [P(rb=0) + P(rb=1)], 0, 0 ).

According to Eqs. 1-5, if A were forced to a1, then B would have been b1 if and only if rb ∈ {1, 3}, which has probability P'(rb=1) + P'(rb=3) = P'(rb=1). This is exactly the solution to the counterfactual query:

P(b1* | â1*, a0, b0) = P'(rb=1) = P(rb=1) / [P(rb=0) + P(rb=1)].

This analysis is consistent with the prior propensity account of (Skyrms 1980).

What if we are provided only with the conditional probability P(b|a) instead of a functional model (fb(a, rb) and P(rb))? These two specifications are related by:

P(b1|a0) = P(rb=2) + P(rb=3)
P(b1|a1) = P(rb=1) + P(rb=3),

which show that P(rb) is not, in general, uniquely determined by the conditional distribution P(b|a).

1 An observation by D. Heckerman (personal communication).
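The underdetermination just derived can be checked numerically. In the sketch below (the two priors are our own illustrative choices, not numbers from the paper), two different priors over rb induce the same conditionals P(b1|a0) = 0.1 and P(b1|a1) = 0.9, yet give different answers to the counterfactual query, computed as P(rb=1)/(P(rb=0)+P(rb=1)).

```python
from fractions import Fraction as F

# Response functions h_{b,r} of Eqs. 2-5, with a and b coded as 0/1.
H = {0: lambda a: 0,       # always b0
     1: lambda a: a,       # b tracks a
     2: lambda a: 1 - a,   # b opposes a
     3: lambda a: 1}       # always b1

def conditionals(prior):
    """(P(b1|a0), P(b1|a1)) induced by a prior over r_b."""
    return (sum(p for r, p in prior.items() if H[r](0) == 1),
            sum(p for r, p in prior.items() if H[r](1) == 1))

def counterfactual(prior):
    """P(b1* | forced a1; observed a0, b0) = P(r=1)/(P(r=0)+P(r=1))."""
    return prior[1] / (prior[0] + prior[1])

# Two illustrative priors consistent with the SAME conditionals:
angry    = {0: F(0), 1: F(9, 10), 2: F(1, 10), 3: F(0)}      # opposite-doer
inertial = {0: F(1, 10), 1: F(8, 10), 2: F(0), 3: F(1, 10)}  # ignores Ann

same = conditionals(angry) == conditionals(inertial) == (F(1, 10), F(9, 10))
cf_angry, cf_inertial = counterfactual(angry), counterfactual(inertial)
# same is True, yet cf_angry = 1 while cf_inertial = 8/9.
```

Under the "angry" prior, observing (a0, b0) rules out the opposite-doer mechanism, so Bob would certainly have attended; under the "inertial" prior the same evidence leaves residual doubt. The conditionals alone cannot separate the two cases.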
Hence, given a counterfactual query, a functional model always leads to a unique solution, while a Bayesian network seldom leads to a unique solution, depending on whether the conditional distributions of the Bayesian network sufficiently constrain the prior distributions of the response-function variables in the corresponding functional model.

Evaluating counterfactual queries

From the last section, we see that the algorithm for evaluating counterfactual queries should consist of: (1) compute the posterior probabilities for the disturbance variables, given the observed evidence; (2) remove the observed evidence and enforce the value for the counterfactual antecedent; finally, (3) evaluate the probability of the counterfactual consequent, given the conditions set in the first two steps.

An important point to remember is that it is not enough to compute the posterior distribution of each disturbance variable (ε) separately and treat those variables as independent quantities. Although the disturbance variables are initially independent, the evidence observed tends to create dependencies among the parents of the observed variables, and these dependencies need to be represented in the posterior distribution. An efficient way to maintain these dependencies is through the structure of the causal network itself. Thus, we will represent the variables in the counterfactual world as distinct from the corresponding variables in the real world, by using a separate network for each world. Evidence can then be instantiated on the real-world network, and the solution to the counterfactual query can be determined as the probability of the counterfactual consequent, as computed in the counterfactual network where the counterfactual antecedent is enforced. But, the reader may ask, and this is key, how are the networks for the real and counterfactual worlds linked?
Because any exogenous variable εa is not influenced by forcing the value of any endogenous variables in the model, the value of that disturbance will be identical in both the real and counterfactual worlds; therefore, a single variable can represent the disturbance in both worlds. εa thus becomes a common causal influence of the variables representing A in the real and counterfactual networks, respectively, which allows evidence in the real-world network to propagate to the counterfactual network. Assume that we are given a causal theory T = (D, ΘD) as defined in (Pearl & Verma 1991). D is a directed acyclic graph (DAG) that specifies the structure of causal influences over a set of variables X = {X1, X2, . . . , Xn}. ΘD specifies a functional mapping xi = fi(pa(xi), εi) (pa(xi) represents the value of Xi's parents) and a prior probability distribution P(εi) for each disturbance εi (we assume that εi's domain is discrete; if not, we can always transform it to a discrete domain such as a response-function variable). A counterfactual query "What is P(c*|â*, obs)?" is then posed, where c* specifies counterfactual values for a set of variables C ⊂ X, â* specifies forced values for the set of variables in the counterfactual antecedent, and obs specifies observed evidence. The solution can be evaluated by the following algorithm:

1. From the known causal theory T create a Bayesian network <G, P> that explicitly models the disturbances as variables and distinguishes the real-world variables from their counterparts in the counterfactual world. G is a DAG defined over the set of variables V = X ∪ X* ∪ ε, where X = {X1, X2, . . . , Xn} is the original set of variables modeled by T, X* = {X1*, X2*, . . . , Xn*} is their counterfactual-world representation, and ε = {ε1, ε2, . . . , εn} represents the set of disturbance variables that summarize the common external causal influences acting on the members of X and X*.
P is the set of conditional probability distributions P(V|pa(V)) that parameterize the causal structure G. If Xj ∈ pa(Xi) in D, then Xj ∈ pa(Xi) and Xj* ∈ pa(Xi*) in G (pa(Xi) is the set of Xi's parents). In addition, εi ∈ pa(Xi) and εi ∈ pa(Xi*) in G. The conditional probability distributions for the Bayesian network are generated from the causal theory:

P(xi | paX(xi), εi) = 1 if xi = fi(paX(xi), εi); 0 otherwise

where paX(xi) is the set of values of the variables in X ∩ pa(Xi). P(xi* | paX*(xi*), εi) = P(xi | paX(xi), εi) whenever xi = xi* and paX*(xi*) = paX(xi). P(εi) is the same as specified by the functional causal theory T.

2. Observed evidence. The observed evidence obs is instantiated on the real-world variables X corresponding to obs.

3. Counterfactual antecedent. For every forced value x̂i* ∈ â*, apply the action-based semantics of set(Xi* = x̂i*) (see (Pearl 1993b; Spirtes, Glymour, & Scheines 1993)), which amounts to severing all the causal edges from pa(Xi*) to Xi* for all Xi* mentioned in â* and instantiating Xi* to the value specified in â*.

4. Belief propagation. After instantiating the observations and actions in the network, evaluate the belief in c* using the standard belief update methods for Bayesian networks (Pearl 1988). The result is the solution to the counterfactual query.

In the last section, we noted that the conditional distribution P(xk|pa(Xk)) for each variable Xk ∈ X constrains, but does not uniquely determine, the prior distribution P(εk) of each disturbance variable. Although the composition of the external causal influences is often not precisely known, a subjective distribution over response functions may be assessable. If a reasonable distribution can be selected for each relevant disturbance variable, the implementation of the above algorithm is straightforward and the solution is unique; otherwise, bounds on the solution can be obtained using convex optimization techniques.
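For small discrete models, steps 1-4 can be realized directly by enumeration over disturbance values: keep the tuples consistent with obs (abduction), force the antecedent (action), and score the consequent (prediction). A sketch on the two-variable A, B model of the previous section (function and variable names are ours, not the paper's):

```python
from itertools import product

def evaluate_counterfactual(model, priors, obs, forced, query):
    """Enumerate disturbance tuples; weight each by its prior,
    keep those consistent with obs, and evaluate the consequent
    in the forced (counterfactual) world using the same tuple."""
    num = den = 0.0
    for r in product(*[range(len(p)) for p in priors]):
        w = 1.0
        for ri, p in zip(r, priors):
            w *= p[ri]
        real = model(r, {})                      # factual world
        if any(real[v] != val for v, val in obs.items()):
            continue                             # abduction step
        den += w
        cf = model(r, forced)                    # action + prediction
        if cf[query[0]] == query[1]:
            num += w
    return num / den

def ab_model(r, forced):
    ra, rb = r
    a = forced.get('a', ra)                      # A's response function
    b = forced.get('b', [0, a, 1 - a, 1][rb])    # B's four response functions
    return {'a': a, 'b': b}

p_ra, p_rb = [0.5, 0.5], [0.2, 0.3, 0.4, 0.1]
# P(b=1 | force a=1; observed a=0, b=0) = P(rb=1)/(P(rb=0)+P(rb=1))
print(evaluate_counterfactual(ab_model, [p_ra, p_rb],
                              {'a': 0, 'b': 0}, {'a': 1}, ('b', 1)))
```

The result agrees with the closed-form solution of the previous section; the same loop scales (exponentially) to any discrete response-function model.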
(Balke & Pearl 1993) demonstrates this optimization task in deriving bounds on causal effects from partially controlled experiments. A network generated by the above algorithm may often be simplified. If a variable Xj* in the counterfactual world is not a causal descendant of any of the variables mentioned in the counterfactual antecedent â*, then Xj and Xj* will always have identical distributions, because the causal influences that functionally determine Xj and Xj* are identical. Xj and Xj* may therefore be treated as the same variable. In this case, the conditional distribution P(xj|pa(Xj)) is sufficient, and the disturbance variable εj and its prior distribution need not be specified.

Party again

Let us revisit the party example. Assuming we have observed that Bob is not at the party (b = b0), we want to know whether Bob and Carl would have scuffled if Bob were at the party (i.e., "What is P(s1*|b̂1*, b0)?"). Suppose that we are supplied with the following causal theory for the model in Figure 1:

a = fa(ra) = h_{a,ra}()
b = fb(a, rb) = h_{b,rb}(a)
c = fc(a, rc) = h_{c,rc}(a)
s = fs(b, c, rs) = h_{s,rs}(b, c)

where

P(ra) = 0.40 if ra = 0; 0.60 if ra = 1
P(rb) = 0.07 if rb = 0; 0.90 if rb = 1; 0.03 if rb = 2; 0 if rb = 3
P(rc) = 0.05 if rc = 0; ...; 0.10 if rc = 3
P(rs) = 0.05 if rs = 0; 0.90 if rs = 8; 0.05 if rs = 9; 0 otherwise

and

h_{a,0}() = a0
h_{a,1}() = a1
h_{s,0}(b, c) = s0
h_{s,8}(b, c) = s0 if (b, c) ≠ (b1, c1); s1 if (b, c) = (b1, c1)
h_{s,9}(b, c) = s0 if (b, c) ∈ {(b1, c0), (b0, c1)}; s1 if (b, c) ∈ {(b0, c0), (b1, c1)}

The response functions for B and C (h_{b,rb} and h_{c,rc}) both take the same form as that given in Eq. (5).

Figure 2: Bayesian model for evaluating counterfactual queries in the party example. The variables marked with * make up the counterfactual world, while those without *, the factual world. The r variables index the response functions.

Figure 3: To evaluate the query P(s1*|b̂1*, b0), the network of Figure 2 is instantiated with observation b0 and action b̂1* (links pointing to b1* are severed).
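The party model is small enough to evaluate by brute-force enumeration instead of belief propagation. Two assumptions in the sketch below: the two illegible entries of P(rc) are filled in as P(rc=1) = 0 and P(rc=2) = 0.85, values we chose so that the published results quoted in the next paragraph come out; treat them as a reconstruction, not the paper's figures. All function and variable names are likewise ours.

```python
from itertools import product

P_ra = [0.40, 0.60]
P_rb = [0.07, 0.90, 0.03, 0.0]
P_rc = [0.05, 0.00, 0.85, 0.10]   # middle two entries RECONSTRUCTED, not from the paper
P_rs = {0: 0.05, 8: 0.90, 9: 0.05}

def world(ra, rb, rc, rs, force_b=None):
    a = ra                                      # h_{a,0} = a0, h_{a,1} = a1
    b = [0, a, 1 - a, 1][rb] if force_b is None else force_b
    c = [0, a, 1 - a, 1][rc]                    # same response-function form as b
    s = {0: 0, 8: int((b, c) == (1, 1)), 9: int(b == c)}[rs]
    return b, s

def prob_s1(observe_b, force_b):
    num = den = 0.0
    for ra, rb, rc, rs in product(range(2), range(4), range(4), P_rs):
        w = P_ra[ra] * P_rb[rb] * P_rc[rc] * P_rs[rs]
        if world(ra, rb, rc, rs)[0] != observe_b:
            continue                            # condition on the factual b
        den += w
        num += w * world(ra, rb, rc, rs, force_b=force_b)[1]
    return num / den

print(round(prob_s1(observe_b=0, force_b=1), 2))  # counterfactual query: 0.79
print(round(prob_s1(observe_b=1, force_b=1), 2))  # reduces to P(s1|b1): 0.11
```

When the forced value coincides with the observed one (second call), the counterfactual world equals the factual one, so the query reduces to the indicative P(s1|b1).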
These numbers reflect the authors' understanding of the characters involved. For example, the choice for P(rb) represents our belief that Bob usually is at the party if and only if Ann is there (rb = 1). However, we believe that Bob is sometimes (~7% of the time) unable to go to the party (e.g., sick or grounded by his parents); this exception is represented by rb = 0. In addition, Bob would sometimes (~3% of the time) go to the party if and only if Ann is not there (e.g., Bob is in a spiteful mood); this exception is represented by rb = 2. Finally, P(rs) represents our understanding that there is a slight chance (5%) that Bob and Carl would not scuffle regardless of attendance (rs = 0), and the same chance (P(rs=9) = 5%) that a scuffle would take place either outside or inside the party (but not if only one of them shows up). Figure 2 shows the Bayesian network generated by step 1 of the algorithm. After instantiating the real-world observation (b0) and the action (b̂1*) specified by the counterfactual antecedent in accordance with steps 2 and 3, the network takes on the configuration shown in Figure 3. If we propagate the evidence through this Bayesian network, we arrive at the solution

P(s1*|b̂1*, b0) = 0.79,

which is consistent with Laura's assertion that Bob and Carl would have scuffled if Bob were at the party, given that Bob actually was not at the party. Compare this to the solution to the indicative query that Scott was thinking of:

P(s1|b1) = 0.11,

that is, if we had observed that Bob was at the party, then Bob and Carl would probably not have scuffled. This emphasizes the difference between counterfactual and indicative queries and their solutions.

Special Case: Linear-Gaussian Models

Assume that our knowledge is specified by the structural equation model
x = Bx + ε

where B is a triangular matrix (corresponding to a causal model that is a DAG), and we are given the mean μ_ε and covariance Σ_{ε,ε} of the disturbances ε (assumed to be Gaussian). The mean and covariance of the observable variables x are then given by:

μ_x = S μ_ε    (6)
Σ_{x,x} = S Σ_{ε,ε} Sᵀ    (7)

where S = (I − B)⁻¹. Under such a model, there are well-known formulas (Whittaker 1990, p. 163) for evaluating the conditional mean and covariance of x under some observations o:

μ_{x|o} = μ_x + Σ_{x,o} Σ_{o,o}⁻¹ (o − μ_o)    (8)
Σ_{x,x|o} = Σ_{x,x} − Σ_{x,o} Σ_{o,o}⁻¹ Σ_{o,x}    (9)

where, for every pair of sub-vectors z and w of x, Σ_{z,w} is the sub-matrix of Σ_{x,x} with entries corresponding to the components of z and w. Singularities of Σ terms are handled by appropriate means. Similar formulas apply for the mean and covariance of x under an action â. B is replaced by the action-pruned matrix B̃ = [b̃_ij] defined by:

b̃_ij = 0 if Xi ∈ â; b_ij otherwise    (10)

The mean and covariance of x under â are evaluated using Eqs. (6) and (7), where B is replaced by B̃:

μ̃_x = S̃ μ_ε    (11)
Σ̃_{x,x} = S̃ Σ_{ε,ε} S̃ᵀ    (12)

where S̃ = (I − B̃)⁻¹. We can then evaluate the distribution of x under the action â by conditioning on the value of the action according to Eqs. (8) and (9):

μ_{x|â} = μ̃_{x|a} = μ̃_x + Σ̃_{x,a} Σ̃_{a,a}⁻¹ (â − μ̃_a)    (13)
Σ_{x,x|â} = Σ̃_{x,x|a} = Σ̃_{x,x} − Σ̃_{x,a} Σ̃_{a,a}⁻¹ Σ̃_{a,x}    (14)

To evaluate the counterfactual query P(x*|â*, o) we first update the prior distribution of the disturbances by the observations o:

μ_ε^o = μ_{ε|o} = μ_ε + Σ_{ε,ε} Sᵀ (S Σ_{ε,ε} Sᵀ)⁻¹ (o − μ_o)
Σ_{ε,ε}^o = Σ_{ε,ε|o} = Σ_{ε,ε} − Σ_{ε,ε} Sᵀ (S Σ_{ε,ε} Sᵀ)⁻¹ S Σ_{ε,ε}

We then evaluate the means μ_{x*|â*,o} and covariances Σ_{x*,x*|â*,o} of the variables in the counterfactual world (x*) under the action â* using Eqs. (13) and (14), with Σ^o and μ^o replacing Σ_{ε,ε} and μ_ε, where, from Eqs. (11) and (12), μ̃_x^o = S̃ μ_ε^o and Σ̃_{x,x}^o = S̃ Σ_{ε,ε}^o S̃ᵀ. It is clear that this procedure can be applied to non-triangular matrices, as long as S is non-singular.
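When every variable is observed and only means are needed, the abduction-action-prediction cycle above reduces to a few lines of forward substitution; a toy sketch (the helper names are ours, and the covariance computations of Eqs. (8)-(9) and (13)-(14) are omitted):

```python
def forward(B, eps):
    """Solve x = Bx + eps when B is strictly lower triangular (topological order)."""
    n = len(eps)
    x = [0.0] * n
    for i in range(n):
        x[i] = sum(B[i][j] * x[j] for j in range(i)) + eps[i]
    return x

def counterfactual_mean(B, x_obs, forced):
    """Abduction, action, prediction for a fully observed linear model.
    forced maps a variable's index to its counterfactual value."""
    n = len(x_obs)
    # Abduction: with full observation, eps = (I - B) x_obs exactly.
    eps = [x_obs[i] - sum(B[i][j] * x_obs[j] for j in range(n))
           for i in range(n)]
    # Action (Eq. 10): sever the incoming edges of each forced variable
    # and pin its disturbance to the forced value.
    B_act = [[0.0] * n if i in forced else row[:] for i, row in enumerate(B)]
    for i, v in forced.items():
        eps[i] = v
    # Prediction: propagate through the pruned model.
    return forward(B_act, eps)

# Model: x2 = 0.5*x1 + eps2. Observing (x1, x2) = (2.0, 1.7) gives eps2 = 0.7,
# so forcing x1 to 10 yields x2* = 0.5*10 + 0.7 = 5.7.
B = [[0.0, 0.0], [0.5, 0.0]]
print(counterfactual_mean(B, [2.0, 1.7], {0: 10.0}))
```

The recovered disturbance is carried unchanged into the pruned model, which is exactly the "constant response function" license discussed in the conclusion.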
In fact, the response-function formulation opens the way to incorporate feedback loops within the Bayesian network framework.

Conclusion

The evaluation of counterfactual queries is applicable to many tasks. For example, determining liability of actions (e.g., "If you had not pushed the table, the glass would not have broken; therefore, you are liable"). In diagnostic tasks, counterfactual queries can be used to determine which tests to perform in order to increase the probability that faulty components are identified. In planning, counterfactuals can be used for goal regression or for determining which actions, if performed, could have avoided an observed, unexpected failure. Thus, counterfactual reasoning is an essential component in plan repairing, plan compilation and explanation-based learning.

In this paper we have presented formal notation, semantics, a representation scheme, and inference algorithms that facilitate the probabilistic evaluation of counterfactual queries. World knowledge is represented in the language of modified causal networks, whose root nodes are unobserved and correspond to possible functional mechanisms operating among families of observables. The prior probabilities of these root nodes are updated by the factual information transmitted with the query, and remain fixed thereafter. The antecedent of the query is interpreted as a proposition that is established by an external action, thus pruning the corresponding links from the network and facilitating standard Bayesian-network computation to determine the probability of the consequent.

At this time the algorithm has not been implemented but, given a subjective prior distribution over the response variables, there are no new computational tasks introduced by this formalism, and the inference process follows the standard techniques for computing beliefs in Bayesian networks (Pearl 1988).
If prior distributions over the relevant response-function variables cannot be assessed, we have developed methods of using the standard conditional-probability specification of Bayesian networks to compute upper and lower bounds on counterfactual probabilities (Balke & Pearl 1994).

The semantics and methodology introduced in this paper can be adopted by nonprobabilistic formalisms as well, as long as they support two essential components: abduction (to abduce plausible functional mechanisms from the factual observations) and causal projection (to infer the consequences of the action-like antecedent). We should note, though, that the license to keep the response-function variables constant stems from a unique feature of counterfactual queries, where the factual observations are presumed to occur not earlier than the counterfactual action. In general, when an observation takes place before an action, constancy of response functions would be justified if the environment remains relatively static between the observation and the action (e.g., if the disturbance terms εi represent unknown pre-action conditions). However, in a dynamic environment subject to stochastic shocks, a full temporal analysis using temporally-indexed networks may be warranted or, alternatively, a canonical model of persistence should be invoked (Pearl 1993c).

Acknowledgments

The research was partially supported by Air Force grant #AFOSR 90 0136, NSF grant #IRI-9200918, Northrop Micro grant #92-123, and Rockwell Micro grant #92-122. Alexander Balke was supported by the Fannie and John Hertz Foundation. This work benefitted from discussions with David Heckerman.

References

Balke, A., and Pearl, J. 1993. Nonparametric bounds on causal effects from partial compliance data. Technical Report R-199, UCLA Cognitive Systems Lab.
Balke, A., and Pearl, J. 1994. Bounds on probabilistically evaluated counterfactual queries. Technical Report R-213-B, UCLA Cognitive Systems Lab.
Boutilier, C. 1992. A logic for revision and subjunctive queries. In Proceedings Tenth National Conference on Artificial Intelligence, 609-615. Menlo Park, CA: AAAI Press.
Druzdzel, M. J., and Simon, H. A. 1993. Causality in Bayesian belief networks. In Proceedings of the 9th Annual Conference on Uncertainty in Artificial Intelligence (UAI-93), 3-11.
Ginsberg, M. L. 1986. Counterfactuals. Artificial Intelligence 30:35-79.
Goodman, N. 1983. Fact, Fiction, and Forecast. Cambridge, MA: Harvard University Press, 4th edition.
Harper, W. L.; Stalnaker, R.; and Pearce, G., eds. 1981. Ifs: Conditionals, Belief, Decision, Chance, and Time. Boston, MA: D. Reidel.
Jackson, P. 1989. On the semantics of counterfactuals. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 1382-1387. Palo Alto, CA: Morgan Kaufmann.
Lewis, D. 1976. Probability of conditionals and conditional probabilities. The Philosophical Review 85:297-315.
Lewis, D. 1979. Counterfactual dependence and time's arrow. Noûs 455-476.
Meyer, J.-J., and van der Hoek, W. 1993. Counterfactual reasoning by (means of) defaults. Annals of Mathematics and Artificial Intelligence 9:345-360.
Nute, D. 1980. Topics in Conditional Logic. Boston: D. Reidel.
Pearl, J., and Verma, T. 1991. A theory of inferred causation. In Principles of Knowledge Representation and Reasoning: Proceedings of the Second International Conference, 441-452. San Mateo, CA: Morgan Kaufmann.
Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, CA: Morgan Kaufmann.
Pearl, J. 1993a. From Adams' conditionals to default expressions, causal conditionals, and counterfactuals. Technical Report R-193, UCLA Cognitive Systems Lab. To appear in Festschrift for Ernest Adams, Cambridge University Press, 1994.
Pearl, J. 1993b. From Bayesian networks to causal networks. Technical Report R-195-LLL, UCLA Cognitive Systems Lab.
Short version: Statistical Science 8(3):266-269.
Pearl, J. 1993c. From conditional oughts to qualitative decision theory. In Uncertainty in Artificial Intelligence: Proceedings of the Ninth Conference, 12-20. Morgan Kaufmann.
Pereira, L. M.; Aparicio, J. N.; and Alferes, J. J. 1991. Counterfactual reasoning based on revising assumptions. In Logic Programming: Proceedings of the 1991 International Symposium, 566-577. Cambridge, MA: MIT Press.
Poole, D. 1993. Probabilistic Horn abduction and Bayesian networks. Artificial Intelligence 64(1):81-130.
Rubin, D. B. 1974. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66(5):688-701.
Skyrms, B. 1980. The prior propensity account of subjunctive conditionals. In Harper, W.; Stalnaker, R.; and Pearce, G., eds., Ifs. D. Reidel. 259-265.
Spirtes, P.; Glymour, C.; and Scheines, R. 1993. Causation, Prediction, and Search. New York: Springer.
Whittaker, J. 1990. Graphical Models in Applied Multivariate Statistics. New York: John Wiley & Sons.
Description logic systems as a class implement a style of deductive inference that is deeper than standard backchaining, and that is much more efficient than theorem prover-based deduction. A hallmark of description logics is that they severely limit the expressive power of their description languages. We believe that the absence of full expressivity is one of the factors that is preventing description classifiers from becoming a standard component in knowledge base management systems [Doyle&Patil 911. Accordingly, we have developed a new classifier that accepts description expressions phrased using the full predicate calculus, extended with sets, cardinality, equality, scalar inequalities, and predicate variables. The description syntax is uniform for predicates of arbitrary arity, and recursive definitions are supported. We have found that architectural principles developed for description logic classifiers can be transferred into a classifier that reasons with predicate calculus expressions. This paper begins by describing the internal format and subsumption algorithm used in the predicate calculus (PC) classifier. We next discuss the normal form transformations used in this classifier. The strategy for normalization incorporates two innovations, dual representations and auto-Socratic elaboration, that increase both the performance and the flexibility of the classifier. Finally, we present results indicating that the new classifier has performance comparable to that of existing classifiers. escriptions A relation description specifies an intensional definition for a relation. It has the following components: a name (optional); a list of domain variables < dvl, . . . ,dvk >, where k is the arity of the relation; a definition-an open sentence in the prefix predicate calculus whose free variables are a subset of the domain variables. 
- a partial indicator (true or false): if true, it indicates that the predicate represented by the relation definition is a necessary but not sufficient test for membership in the relation.

If a relation description is partial, then the relation is said to be primitive. If R is a non-primitive relation with arity one, then its extension is defined as the set {dv1 | defnR}, where dv1 is the domain variable and defnR is the definition in the description of R. If R is a non-primitive relation with arity k greater than one, then its extension is defined as the set of tuples {<dv1, . . . ,dvk> | defnR}, where dv1, . . . ,dvk are the domain variables and defnR is the definition in the description of R. If R is primitive, then its extension is a subset of the set associated with its definition.

Description Logic 213. From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Relation descriptions are introduced by the defrelation operator, with the syntax

(defrelation <name> (<domain variables>) [:def | :iff-def] <definition>)

The keyword :def indicates that the definition is partial, while the keyword :iff-def indicates that it is not. For example

(defrelation Person (?p) :def (Mammal ?p))

defines a relation Person to be a primitive subrelation of the relation Mammal.¹ The description

(defrelation daughter (?g ?d) :iff-def (and (child ?g ?d) (Female ?d)))

defines the relation daughter to be a non-primitive binary subrelation of the relation child. A simple sentence is a predication of the form (P t1 . . . tk) where t1 . . . tk are terms, and P is either the name of a k-ary relation or a term that evaluates to a k-ary relation. Complex sentences are constructed from simple sentences using the operators and, or, not, and implies, and the quantifiers forsome and forall. A term is either a constant, a variable, a set expression, or a form (F t1 . . . tj) where F is a single-valued relation of arity (j+1) (i.e., F is a function).
A variable is a string of characters prefixed by "?". A set expression is a term of the form

(setof (<variables>) <definition>)

that defines an unnamed relation with domain variables <variables> and definition <definition>. The function the-relation takes as arguments a name (a string of characters not prefixed by "?") and a positive integer indicating the arity, and returns the relation with that name and arity (relations of different arity may share the same name).

Subsumption

The primary deductive task of a description classifier is the computation of "subsumption" relationships. We say that a relation A subsumes a relation B if, based upon their respective definitions, the extension of A contains the extension of B. A description classifier organizes descriptions into a hierarchy, with description A placed above description B if A's relation subsumes B's relation. The inverse relation to subsumes is called specializes: B specializes A if A subsumes B. Consider the following pair of descriptions. At-Least-One-Son defines the set of all persons that have at least one son:

(defrelation At-Least-One-Son (?p)
  :iff-def (and (Person ?p)
                (>= (cardinality (setof (?c) (son ?p ?c))) 1)))

¹A sortal relation such as Person would ordinarily be introduced as a concept, using the defconcept operator, rather than being defined as a unary relation. The algorithm is the same for classifying concepts and for classifying unary relations. For simplicity, we avoid distinguishing between concepts and unary relations in this paper.
The remainder of this paper describes the proof strategy that it uses to find such subsumption relationships. The majority of description classifiers currently implemented, including the PC classifier, employ a structural subsumption test.:! Roughly speaking, to prove that a description A subsumes a description B, a structural subsumption prover attempts to demonstrate that for every structural “component” in (the internal representation for) the definition of A there exists a “corresponding component” in (the internal representation for) the definition of B. An appealing feature of a classification strategy that uses a structural test is that much of the inferencing occurs in a “normalization” phase that precedes the actual test. If a description is repeatedly tested for subsumption against other descriptions (the usual situation in a classifier) the cost of normalization is amortized across all tests, thereby lowering the average cost of each subsumption test. Most classifiers adopt frame-like representations for their internal representation of descriptions, and their subsumption tests operate by comparing the structure between a pair of such frames. The PC classifier parses definitions phrased in the prefix predicate calculus into graph-based structures, and all subsequent reasoning involves operations on these graphs. Each of our graphs represents a set expression, consisting of a list of domain variables and a set membership test (the definition). The root node of such a graph is equated with a set o f expression, additional nodes represent each of the domain variables, and the remaining edges and nodes represent the membership test. A node can represent a variable, a constant, or another set expression. A predicate applied to a list of terms is represented by a (hyper) edge connecting the nodes corresponding to the terms in the list, together with a pointer to the node that corresponds to the predicate. 
Nodes representing skolem variables are introduced to represent function terms. Variables other than domain 2The KRIS classifier [BaaderkHollunder 911 is the notable exception. 214 Automated Reasoning variables are assumed to be existentially quantified. As we shall see below, our graph representation eliminates the use of (or need for) universal quantifiers. The notation includes explicit representations for disjunction, negation, and enumerated sets-these constructs lie outside of the scope of the present discussion. Because edges in a graph can point to (root nodes of) other graphs, our graph structures form a network. Each edge in the network “belongs to” exactly one graph-the graph corresponding to the innermost setof expression that contains the predication that defines that edge. Each graph is defined to consist of a set of edges plus the set of all nodes referenced by those edges. A node can therefore “belong to” many different graphs. The job of a subsumption test that is comparing graphs A and B is to find a substitution that maps nodes and edges belonging to graph A to corresponding nodes and edges belonging to graph B. The parser that converts predicate calculus descriptions into graphs applies a few simple transformations in the process, including skolemization of set and function expressions, and conversion of material implications into subset relations (this is illustrated later in this section). In the remainder of this paper, we shall use the term “graph” to refer to a setof expression that has undergone these transformations, and we will use graph terminology (e.g., nodes, edges, paths) when referring to structural components and features within our set expressions. 
The At-Least-One-Son relation defined above is associated with a graph representing the following set (setof (?R) (and (Person 19) (>= (cardinality (setof (?c) (son ?D ?c))) 1) The addition of variables to represent the nested setof expression and the Skolemized cardinality function produces the following equivalent set expression (setof (3~) (exists (?sl ?cardl) [ll (and (Person ?g) (= ?sl (setof (?c) (son ?p ?c))) (cardinality ?sl Pcardl) (>= ?cardl 1)))) We can now illustrate how a structural subsumption algorithm finds a subsumption relationship between At-Least-One-SonandMore-Sons-Than-Daughters. Hereisthe“graph”for More-Sons-Than-Daughters: (setof (?RR) c21 (exists (?s2 ?s3 ?card2 ?card3) (and (Person ?gg) (= ?s2 (setof (?b) (son ?gg ?b))) (= ?s3 (setof (?g) (daughter ?pp 19))) (cardinality 382 3card2) (cardinality 383 3card3) (> ?card2 ?card3)))) To prove that At-Least-One-Son subsumes More-Sons-Than-Daughters, we lookforasubstitution mapping nodes in [l] to nodes in [2] such that for each edge in the graph [l] there is a corresponding edge in the graph [2]. The correct substitution CJ is ?p ~0 ?pp; ?sl ~0 ?s2; ?card 1 ~0 ?card2 except that there is a problem-no edge in graph [2] corresponds to the edge ( >= ?cardl 1) in graph [ 11. However, a constraint representing the missing edge ( >= 3card2 1) is logically derivable from the existing constraints/edges present in graph [2]-the addition of this missing edge would result in a set expression logically equivalent to the expression [2]. Using a process we call “elaboration” (explained in Section 5), the PC classifier applies forward chaining rules to augment each graph with edges that logically follow from the existence of other edges already present in the graph. 
In the case of graph [2], the following edges would be added during elaboration:

(Integer ?card2), (>= ?card2 0), (Integer ?card3), (>= ?card3 0), (>= ?card2 1)

This last edge, representing our "missing edge", derives from the fact that ?card3 is non-negative, ?card2 is strictly greater than ?card3 and hence greater than zero, and ?card2 is an integer. After applying elaboration to graph [2], the substitution σ successfully demonstrates that the relation At-Least-One-Son subsumes the relation More-Sons-Than-Daughters. To our knowledge, no existing description classifier other than the PC classifier can compute this subsumption relation. We know this because none of them have the expressive power needed to represent the relation More-Sons-Than-Daughters. Here are two more relations that cannot be expressed in any existing description logic:

(defrelation One-Of-Five-Fastest-Ships (?s)
  :iff-def (and (Ship ?s)
                (<= (cardinality (setof (?fs) (faster-than ?fs ?s))) 4)))

(defrelation Third-Fastest-Ship (?s)
  :iff-def (and (Ship ?s)
                (= (cardinality (setof (?fs) (faster-than ?fs ?s))) 2)))

Knowledge about upper and lower bounds (in this case, that "= 2" is a stricter constraint than "<= 4") is hardwired into the PC classifier. Since the remaining structure is identical between the two graphs, it is straightforward for the PC classifier to determine that the relation One-Of-Five-Fastest-Ships subsumes the relation Third-Fastest-Ship. Before we conclude our discussion of graph notation, recall that it does not provide a means for explicitly representing universally quantified variables. Instead, when the parser encounters an expression of the form
In effect, reasoning about universally quantified variables is transformed into reasoning about set relationships. For example, the graph for (defrelation Relaxed-Parent (?p) :iff-def (and (Parent ?g) (forall (3~) (implies (child ?g ?c) (Asleeg ?c))))) is contained-in(o(x),RB) in GB such that RB specializes RA; (3b) If contains(x,RA) is an edge in GA and RA is a relation, then there exists an edge contains(o(x),RB) in GB such that RB subsumes RA; (4a) If >=(x,kA) is an edge in GA and kA represents a numeric constant (kA denotes a number) then there exists an edge >=(o(x), kg) in GB such that value(kB) >= value(kA), where for all constant nodes k, “value(k)” is the denotation of k; (4b, 4c, 4d) Analogous to (4a) for the relations <=, <, and >. structural subsumption test. Their inclusion in our test enables us to reduce the size of our graphs. For example, if one of our graphs contains both of the edges C(x) and C’(x) and C’ specializes C, then we can eliminate the edge C(x) Remark: The alternatives (ii) and (iii) in condition 2 above serve to relax what would otherwise be a strictly Substituting a reference to the unary (S&Of (3~) (and relation Asleep the set of things satisfying the Asleep (Parent ?g) predicate yields (contained-in (setof (?c) (child ?g ?c)) (S&Of (3~) (Asleep ?C))))) for without sacrificing inferential completeness. (setof (?g) (and (Person ?g) (contained-in (setof (3~) (child ?p 3~)) (the-relation Asleep 1)))) Canonical Graphs To the best of our knowledge, all classifiers that utilize a structural subsumption test preface that test with a series of transformations designed to produce a “canonical” or “normalized” internal representation for each of the relations being tested. The strategy underlying these canonicalization transformations is to make otherwise dissimilar representations become as alike as possible, so that ideally, a structural test would suffice for determining subsumption relationships. 
For all but very restricted languages this strategy cannot result in a test for subsumption that is both sound and complete. For languages as expressive as NIKL , Loom, or BACK, theory tells us that a sound and complete decision procedure for testing subsumption relationships does not exist (i.e., subsumption testing is “undecidable”) [Patel-Schneider 891. The designers of structural subsumption-based classification systems have concluded that a strategy that relies on (imperfect) canonicalization transformations and a structural subsumption test represents a reasonable approach to “solving” this class of undecidable problems. The Subsumption Test Let A and B be relations defined by expressions/graphs GA and GB. We apply the following test to prove that A subsumes B: If A is primitive (if its description is partial) then succeed if GB explicitly inherits a relation known to specialize A. Formally, B specializes a primitive relation A if GB contains an edge R(dv1, . . . ,dvk) where dvl , . . . ,dvk are the domain variables in the root node of GB and R is a relation that specializes A. Otherwise (A is not primitive) succeed if there exists a substitution o from nodes in GA to nodes in GB such that all of the following conditions hold: (la) If x is a constant node in GA, then o(x) denotes the same constant, i.e., o(x) = x; (lb) If x is a set node in GA, then o(x) is also a set node and definition(x) =o definition(o(x)), where for all set nodes y, “definition(y)” refers to the subgraph that defines y, and “~0” denotes structural equivalence under the substitution (r; (2) If PA(Xl, . . . ,xk) is an edge in GA then there exists an edge PB(o(xl), . . . , o(xk)) in GB such that either (i) PB = O(PA) or (ii) PA and PB are relations and PB specializes PA, or (iii) the edge PA(x1, . . . ,xk) matches one the The PC classifier splits the normalization process into two phases. 
In the canonicalization phase, equivalence- preserving transformations (rewrite rules) are applied that substitute one kind of graph structure for another. In the subsequent elaboration phase, structure is added to a graph (again preserving semantic equivalence), but no structure is subtracted. The PC classifier implements several canonicalization strategies. The most important is the procedure that “expands” each of the edges in a graph that is labeled by a non-primitive relation. An edge with label special cases 3a, 3b, 4a, 4b, 4c, or 4d; R is expanded by substituting for the edge a copy of the (3a) If contained-in(x,RA) is an edge in GA and RA is graph for R. A second important canonicalization is one a relation, then there exists an edge that substitutes an individual node for a nested set in cases 216 Automated Reasoning Representative Selection of Elaboration Rules Ineaualitv rules: 11 >= MIN and Number(MIN) and 12 >= 11 j 12 >= MIN 11 > MIN and Number(MIN) and 12 >= 11 3 12 > MIN 11 >= MIN and Number(MIN) and 12 > 11 =+ 12 > MIN 11 <= MAX and Number(MAX) and 12 c= 11 d 12 <= MAX 11 c MAX and Number(MAX) and 12 c= 11 + 12 < MAX 11 <= MAX and Number(MAX) and 12 < 11 j 12 < MAX I > MIN and Integer(I) a I >= floor(MIN) + 1 I >= MIN and Integer(I) and Number(MIN) and not(Integer(MIN)) ; propagate lower bound ; propagate strict lower bound ; propagate strict lower bound ; propagate upper bound ; propagate strict upper bound ; propagate strict upper bound ; round lower bound up * I >= floor(MIN) + 1 I < MAX and Integer(I) + I <= ceiling(MAX) - 1 I <= MAX and Integer(I) and Number(MAX) ; round upper bound down and not(Integer(MAX)) 3 I <= ceiling(MAX) - 1 11 >=12andI2>=11 aI1 =I2 11 >= 12 * 12 <= 11 11<=12*12>=11 11>12*12<11 11<12*12>11 ; equate two-way greater or equal ; inverse greater or equal ; inverse lesser or equal ; inverse greater ; inverse lesser Cardinalitv rules: set(S) =$ exists(I) cardinality(S,I) cardinality(S,I) * Integer(I) cardinality(S,I) * I >= 0 
contained-in(S 1 ,S2) =$ cardinality(S 1) <= cardinality(S2)) I >= MIN and I <= MAX and Integer(I) and Integer(MIN) ; sets have cardinalities * integer cardinality , ; non-negative cardinality ; greater cardinality superset and Integer(MAX) and domain-variable(S,I) and arity(S) = 1 d cardinality(S) <= MAX - MIN in(I,S) 3 cardinality(S) >= 1 cardinality(S) = 1 and in(W) and in(J,S) * I = J ; non-empty set ; equate members of singleton set Contained-in rules: contained-in(S 1 ,S2) and in(I,S 1) * in(I,S2) ; propagate members up contained-in(S l,S2) and cardinality(S 1) = cardinality(S2) j S 1 = S2 ;equate equal cardinality super-set contained-in(S l,S2) and contained-in(S2,S3) j contained-in(S 1 ,S3) ; transitivity of contained-in contained-in(S 1 ,S2) and contained-in(S2,S 1) 3 S 1 = S2 ; equate two-way containment S 1 = S2 d contained-in(S 1 ,S2) ; reflexivity of contained-in contained-in(S l,S2) and contained-in(S 1 ,S3) and intersection(S2,S3,S4) ; contained-in intersection set j contained-in(S 1 ,S4) contained-in(S 1 ,S2) 3 contains(S2,S 1) contains(S 1 ,S2) * contained-in(S2,S 1) ; inverse contained-in ; inverse contains Other rules: in(I,S 1) and in(I,S2) and intersection(S 1 ,S2,S3) 3 in(I,S3) domain-variable(S 1 ,I 1) and arity(S 1) = 1 and in(I1 ,S2) * contained-in(S l,S2) ; member of intersection set Table 1 when this transformation is guaranteed to preserve semantic equivalence. the constraint propagation procedures incorporated into the PC classifier implement four of the five classes of forward constraint propagation (all but Boolean constraint Elaboration propagation) embodied in McAllester’s SCREAMER system [McAllester&Siskind 931. This section describes two of the elaboration procedures imDlemented in the PC c1assifier.l Each of them imblements a form of constraint propagation. Collectively, 1 Other elaboration procedures include primitive edge expansion, recognition ,and realization. 
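The substitution search at the heart of the structural subsumption test can be sketched as a brute-force matcher. The graph encoding, the function names, and the restriction to injective substitutions over plain relational edges are our simplifications, not the PC classifier's actual machinery (set nodes, constants, and special cases 3a-4d are omitted):

```python
from itertools import permutations

def subsumes(ga_edges, gb_edges, specializes=frozenset()):
    """Return True if some substitution maps every edge P(x1..xk) of GA onto
    an edge P'(s(x1)..s(xk)) of GB where P' = P or P' specializes P.
    Edges are (label, (node, ...)) tuples; `specializes` holds
    (child_label, parent_label) pairs."""
    ga_nodes = sorted({n for _, args in ga_edges for n in args})
    gb_nodes = sorted({n for _, args in gb_edges for n in args})

    def ok(sub):
        for label, args in ga_edges:
            mapped = tuple(sub[a] for a in args)
            if not any(bl == label or (bl, label) in specializes
                       for bl, bargs in gb_edges if bargs == mapped):
                return False
        return True

    # try every injective assignment of GA nodes to GB nodes
    return any(ok(dict(zip(ga_nodes, image)))
               for image in permutations(gb_nodes, len(ga_nodes)))

# "something with a male child" subsumes "something with a male doctor child":
GA = [("child", ("x", "y")), ("Male", ("y",))]
GB = [("child", ("p", "c")), ("Male", ("c",)), ("Doctor", ("c",))]
print(subsumes(GA, GB))   # True: x -> p, y -> c
print(subsumes(GB, GA))   # False: nothing in GA matches Doctor(c)
```

The `specializes` table stands in for alternative (ii) of condition 2: an edge labeled by a more specific relation in GB may match a more general one in GA.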
Elaboration Rules and Dual Representations

An "elaboration rule" is an if-then rule that adds edges (or occasionally, nodes) to a graph. Table 1 illustrates many of the elaboration rules used in the PC classifier. A comparison of our rules with those published by Borgida to describe the Classic classifier [Borgida 92] reveals that our rules tend to be finer grained than those in Classic, enabling the PC classifier, for example, to have a superior ability to reason about cardinality relationships (as evidenced by the sons-and-daughters and fastest-ships examples in Section 2). A graph is elaborated by applying the rules in Table 1 repeatedly until no additional structure can be produced (the use of these rules is similar to the use of a "local" rule set [Givan&McAllester 92]). In addition, the elaboration procedure applies a structural subsumption test between each pair of nested sets, and adds a "contained-in" edge if it finds a subsumption relationship. Consider the following definition:

(defrelation Brothers-Are-Friends (?p)
  :iff-def (contained-in (setof (?b) (brother ?p ?b))
                         (setof (?f) (friend ?p ?f))))

The graph for this relation is

(setof (?p)
  (exists (?s1 ?s2)
    (and (= ?s1 (setof (?b) (brother ?p ?b)))
         (= ?s2 (setof (?f) (friend ?p ?f)))
         (contained-in ?s1 ?s2))))

Applying the applicable elaboration rules results in the following graph:

(setof (?p)
  (exists (?s1 ?s2 ?card1 ?card2)
    (and (= ?s1 (setof (?b) (brother ?p ?b)))
         (= ?s2 (setof (?f) (friend ?p ?f)))
         (cardinality ?s1 ?card1)
         (cardinality ?s2 ?card2)
         (Integer ?card1) (>= ?card1 0)
         (Integer ?card2) (>= ?card2 0)
         (>= ?card2 ?card1) (<= ?card1 ?card2)
         (contained-in ?s1 ?s2)
         (contains ?s2 ?s1))))

Elaboration is applied to a graph for the purpose of making implicit structure explicit, and therefore accessible to our structural subsumption algorithm. We observe that our graph for Brothers-Are-Friends now has quite a bit of additional structure.
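The run-to-fixpoint behavior of Table 1-style rules can be sketched with a toy facts-as-tuples encoding. The rule selection and representation here are ours (the real classifier operates on graph edges, not tuples):

```python
def elaborate(facts):
    """Apply a handful of Table 1-style rules until no new fact is produced."""
    facts = set(facts)
    while True:
        new = set()
        for f in list(facts):
            if f[0] == "set":            # set(S) => cardinality(S, card-S)
                new.add(("cardinality", f[1], "card-" + f[1]))
            if f[0] == "cardinality":    # integer, non-negative cardinality
                new.add(("Integer", f[2]))
                new.add((">=", f[2], 0))
            if f[0] == "contained-in":   # inverse edge, cardinality bound
                new.add(("contains", f[2], f[1]))
                new.add(("<=", "card-" + f[1], "card-" + f[2]))
        if new <= facts:                 # fixpoint reached
            return facts
        facts |= new

facts = elaborate({("set", "s1"), ("set", "s2"), ("contained-in", "s1", "s2")})
# Derived structure mirrors the elaborated Brothers-Are-Friends graph:
print(("contains", "s2", "s1") in facts)        # True
print(("<=", "card-s1", "card-s2") in facts)    # True
print((">=", "card-s1", 0) in facts)            # True
```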
The upside to elaboration is that when seeking to prove that Brothers-Are-Friends is subsumed by some other relation R, the additional structure increases the possibility that our subsumption test will discover that R subsumes Brothers-Are-Friends (because in this case the graph for Brothers-Are-Friends contains additional structure to map to). The downside is that the additional structure could make it less likely that the subsumption test finds the inverse subsumption relationship, i.e., that Brothers-Are-Friends subsumes R (because in this case the graph for Brothers-Are-Friends contains additional structure that must be mapped from). Intuitively, if G is a graph, applying elaboration rules to G makes it "easier" to classify G below another graph, but it makes it "harder" to classify another graph below G. The standard answer to this apparent conundrum is to elaborate all graphs before computing subsumption relationships between them. We find two problems with the standard approach: (1) For this strategy to succeed, it is necessary to apply "the same amount" of elaboration to all graphs. As we shall soon see, in the scheme we have implemented for the PC classifier, the amount of elaboration applied to a graph is variable, depending on more than just the initial graph structure. (2) Because it causes the size of a graph to increase, elaboration degrades the performance of a subsumption algorithm at the same time that it increases the algorithm's completeness.

Dual Representations

Our solution is to maintain two separate graphs for each relation, one elaborated and one not. Given a relation R, let g(R) refer to the canonicalized but unelaborated graph for R, and let e-g(R) refer to the canonicalized and elaborated graph for R. To test if relation R subsumes relation S, our subsumption algorithm compares g(R) with e-g(S), i.e., it looks for a substitution that maps from the unelaborated graph for R to the elaborated graph for S.
This “dual representation” architecture completely solves the first of the two problems we cited above, and reduces the negative effect on performance of the second: (1) For relations R and S, increasing the amount of elaboration applied to e-g(R) increases the completeness of a test to determine if S subsumes R, without affecting the completeness of a test to determine if R subsumes S. (2) Assume that the cost of a subsumption test between two graphs is proportional to the product of the “sizes” of those graphs. If elaboration causes the size of each graph to grow by a factor of K, then the cost of comparing e-g(R) and e-g(S) is (K * K) times the cost of comparing g(R) and g(S). However the cost of comparing e-g(R) with g(S) is only K times the cost of comparing unelaborated graphs. Hence, according to this rough calculation, the standard elaboration strategy has a cost K times that of the dual representation strategy, where K is the ratio between the relative sizes of elaborated and unelaborated graphs. Auto-Socratic Elaboration A potentially serious drawback of conventional (overly aggressive) elaboration strategies is that they may generate graph structures that are never referenced by any subsequent subsumption tests (these represent a waste of both time and space). Alternatively, an overly timid strategy may suffer incompleteness by failing to generate structures that it should. This section introduces a new technique, called “auto-Socratic elaboration”, that assists the classifier in controlling the generation of new graph structure. Given a graph G, if we add a new set node N to G containing any definition whatsoever, but we do not add any new edges that relate N to previously existing nodes in G, then the denotation of G remains unchanged. Hence, this represents a legal elaboration of the graph G. 
Consider the following pair of graphs:

"The set of things with at most two female children"

[3] (setof (?p)
      (exists (?s0 ?card0)
        (and (= ?s0 (setof (?c) (and (child ?p ?c) (Female ?c))))
             (cardinality ?s0 ?card0)
             (>= 2 ?card0))))

"The set of things with at most two children"

[4] (setof (?p)
      (exists (?s1 ?card1)
        (and (= ?s1 (setof (?c) (child ?p ?c)))
             (cardinality ?s1 ?card1)
             (>= 2 ?card1))))

In this section, we discuss the problem of determining that the graph [3] subsumes the graph [4]. Our structural subsumption test fails initially because no set node in [4] corresponds to the set node ?s0 in [3]. We can elaborate graph [4] by adding to it a new set node ?s2 having the same definition as that of ?s0, resulting in:

[5] (setof (?p)
      (exists (?s1 ?card1 ?s2)
        (and (= ?s1 (setof (?c) (child ?p ?c)))
             (= ?s2 (setof (?c) (and (child ?p ?c) (Female ?c))))
             (cardinality ?s1 ?card1)
             (>= 2 ?card1))))

The elaboration procedure described in the previous section will apply a subsumption test to the pair <?s1, ?s2>, resulting in the addition of the edge "contains(?s1, ?s2)" (thereby making an implicit subsumption relationship explicit). Application of the Table 1 elaboration rules yields

[6] (setof (?p)
      (exists (?s1 ?card1 ?s2 ?card2)
        (and (= ?s1 (setof (?c) (child ?p ?c)))
             (= ?s2 (setof (?c) (and (child ?p ?c) (Female ?c))))
             (cardinality ?s1 ?card1)
             (cardinality ?s2 ?card2)
             (contains ?s1 ?s2)
             (>= 2 ?card1)
             (>= ?card1 ?card2)
             (>= 2 ?card2))))

Structural subsumption can determine that graph [3] subsumes graph [6], implying that graph [3] also subsumes graph [4]. It remains for us to specify how and when the PC classifier decides to add a new set node to a graph, as exemplified by the transformation from graph [4] to graph [5]. Our PC classifier implements a "demand-driven" strategy for adding new set nodes to a graph.
If a test to determine if a graph GA subsumes a graph GR returns a negative result, and if the result is due to the identification of a set node NA in GA for which there is no set node in GR having an equivalent definition, the following steps occur: (1) A new set node NR with definition equivalent to that for NA (after substitution) is added to GR; (2) Tests are made to see if NR subsumes or is subsumed by any other sets in GR; (3) If so, new contained-in edges are added, triggering additional elaboration of GR; (4) The subsumption test is repeated. We call this procedure "auto-Socratic elaboration". "Socratic" inference [Crawford&Kuipers 89] refers to an inference scheme in which the posing of questions by an external agent triggers the addition (in forward-chaining fashion) of new axioms to a prover's internal knowledge base. We refer to our procedure as "auto-Socratic" because in the PC classifier, the system is asking itself (subsumption) questions in the course of classifying a description, and its attempts to answer such questions may trigger forward-driven inferences (elaborations).

The PC classifier was compared with the Loom classifier on three different knowledge bases. The largest (containing approximately 1300 definitions) is a translated version of the Shared Domain Ontology (SDO) knowledge base used by researchers in the ARPA/Rome Labs Planning Initiative. The other two knowledge bases were synthetically generated using knowledge base generator procedures previously used in a benchmark of six classifiers performed at DFKI [Profitlich et al 92].¹ The results of Table 2 indicate that the Loom classifier is roughly twice as fast as the PC classifier.²

¹ Loom was one of the faster classifiers in the DFKI benchmark.
² Testing was performed on a Hewlett-Packard 730 running Lucid Common Lisp.
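Steps (1)-(4) amount to a retry loop around the structural test. The sketch below uses a deliberately simplified model in which a set node is just the frozen set of its defining conjuncts; all names are our own, and the cardinality reasoning of the full procedure is omitted:

```python
def structural_test(ga, gb):
    """Succeed if every set node of GA has a definitionally identical set
    node in GB; otherwise report one missing definition."""
    for defn in ga["sets"]:
        if defn not in gb["sets"]:
            return False, defn
    return True, None

def auto_socratic_subsumes(ga, gb):
    """Retry loop of steps (1)-(4): on failure, copy the missing set node
    into GB, relate it to GB's other sets, then test again."""
    while True:
        ok, missing = structural_test(ga, gb)
        if ok or missing is None or missing in gb["sets"]:
            return ok
        gb["sets"].add(missing)                      # (1) add new set node
        for other in list(gb["sets"] - {missing}):   # (2) compare with other sets
            if other <= missing:                     # fewer conjuncts => superset
                gb["edges"].add(("contains", other, missing))   # (3) new edge
        # (4) the loop repeats the subsumption test

# Graph [3]: at most two female children; graph [4]: at most two children.
female_children = frozenset({"child", "Female"})   # defining conjuncts of ?s0
children        = frozenset({"child"})             # defining conjuncts of ?s1
g3 = {"sets": {female_children}, "edges": set()}
g4 = {"sets": {children}, "edges": set()}

print(auto_socratic_subsumes(g3, g4))                          # True
print(("contains", children, female_children) in g4["edges"])  # True
```

After the demand-driven step, g4 plays the role of graph [6]: it now carries the copied set node and the explicit contains edge that let the structural test go through.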
Knowledge Base    PC Classifier    Loom Classifier
SDO               80 seconds       45 seconds
Synthetic #1      58 seconds       22 seconds
Synthetic #2      45 seconds       34 seconds

Table 2

Conclusions

DL languages have been developed that have complete classifiers. However, completeness comes at a steep price: the DL languages that support complete classification have very restricted expressiveness. While such languages are of theoretical interest, and may be useful for certain niche applications, their severe constraints limit their utility and preclude them from broad application. In contrast, our approach provides a rich and highly expressive representation language. If this expressiveness is used, then classification must be incomplete. But where should it be incomplete? Different applications and domains will stress different sorts of reasoning. Inferences that are important in one will be inconsequential in another. A virtue of the PC architecture is that it is flexible and extensible. By changing elaboration rules, we can fine-tune the performance of the classifier, allowing us to change the kinds of inferences that are supported and trade off the breadth and depth of inference against efficiency. Thus, the PC classifier and language free an application developer from a representational straitjacket by enhancing both the expressiveness of the language and the range of inference that can be supported.

Acknowledgments. I would like to thank Bill Swartout and Tom Russ for their edits and criticisms of earlier drafts of this paper. Eric Melz provided the timings listed in Table 2.

References

[Baader&Hollunder 91] Franz Baader and Bernhard Hollunder, "KRIS: Knowledge Representation and Inference System", SIGART Bulletin, 2(3), 1991, pp. 8-14.

[Borgida et al 89] Alex Borgida, Ron Brachman, Deborah McGuinness, and Lori Halpern-Resnick, "CLASSIC: A Structural Data Model for Objects", Proc. of the 1989 ACM SIGMOD Int'l Conf. on Data, 1989, pp. 59-67.

[Borgida 92] Alex Borgida, "From Types to Knowledge Representation: Natural Semantics Specifications for Description Logics", International Journal of Intelligent and Cooperative Information Systems, 1(1), 1992.

[Crawford&Kuipers 89] J. M. Crawford and Benjamin Kuipers, "Towards a Theory of Access-Limited Logic for Knowledge Representation", Proc. First Int'l Conf. on Principles of Knowledge Representation and Reasoning, Toronto, Canada, May 1989, pp. 67-78.

[Doyle&Patil 91] Jon Doyle and Ramesh Patil, "Two Theses of Knowledge Representation: Language Restrictions, Taxonomic Classification, and the Utility of Representation Services", Artificial Intelligence, 48, 1991, pp. 261-297.

[Givan&McAllester 92] Robert Givan and David McAllester, "New Results on Local Inference Relations", Proc. Third Int'l Conf. on Principles of Knowledge Representation and Reasoning, Cambridge, Massachusetts, October 1992, pp. 403-412.

[MacGregor 90] Robert MacGregor, "The Evolving Technology of Classification-based Knowledge Representation Systems", in Principles of Semantic Networks: Explorations in the Representation of Knowledge, Chapter 13, John Sowa, Ed., Morgan Kaufmann, 1990.

[MacGregor 91] Robert MacGregor, "Using a Description Classifier to Enhance Deductive Inference", Proc. Seventh IEEE Conference on AI Applications, Miami, Florida, February 1991, pp. 141-147.

[Mays et al 91] Eric Mays, Robert Dionne, and Robert Weida, "K-REP System Overview", SIGART Bulletin, 2(3), 1991, pp. 93-97.

[McAllester&Siskind 93] Jeffrey M. Siskind and David A. McAllester, "Nondeterministic Lisp as a Substrate for Constraint Logic Programming", AAAI-93 Proc. of the Eleventh Nat'l Conf. on Artificial Intelligence, Washington, DC, 1993, pp. 133-138.

[Patel-Schneider 89] Peter Patel-Schneider, "Undecidability of Subsumption in NIKL", Artificial Intelligence, 39(2), 1989, pp. 263-272.

[Peltason 91] "The BACK System - An Overview", SIGART Bulletin, 2(3), 1991, pp. 114-119.

[Profitlich et al 92] Jochen Heinsohn, Daniel Kudenko, Bernhard Nebel, and Hans-Jürgen Profitlich, "An Empirical Analysis of Terminological Representation Systems", AAAI-92 Proc. of the Tenth Nat'l Conf., San Jose, Calif., 1992, pp. 767-773.
Symbolic Causal Networks

Adnan Darwiche
Rockwell International Science Center
444 High Street
Palo Alto, CA 94301
darwiche@rpal.rockwell.com

Judea Pearl
Computer Science Department
University of California
Los Angeles, CA 90024
pearl@cs.ucla.edu

Abstract

For a logical database to faithfully represent our beliefs about the world, one should not only insist on its logical consistency but also on its causal consistency. Intuitively, a database is causally inconsistent if it supports belief changes that contradict our perceptions of causal influences - for example, coming to conclude that it must have rained only because the sprinkler was observed to be on. In this paper, we (1) suggest the notion of a causal structure to represent our perceptions of causal influences; (2) provide a formal definition of when a database is causally consistent with a given causal structure; (3) introduce symbolic causal networks as a tool for constructing databases that are guaranteed to be causally consistent; and (4) discuss various applications of causal consistency and symbolic causal networks, including nonmonotonic reasoning, Dempster-Shafer reasoning, truth maintenance, and reasoning about actions.

Introduction

Consider the database

Δ = { wet-ground ⊃ it-rained,
      sprinkler-was-on ⊃ wet-ground },

which entails no beliefs about whether it rained last night: Δ ⊭ it-rained and Δ ⊭ ¬it-rained. If we tell this database that the sprinkler was on, it surprisingly jumps to the conclusion that it must have rained last night: Δ ∪ {sprinkler-was-on} ⊨ it-rained. This change in belief is counterintuitive! Given that we perceive no causal connection between the sprinkler and rain, we would not come to believe that it rained only because we observed the sprinkler on. That is, database Δ supports a belief change that contradicts common perceptions of causal influences; hence, it will be labeled causally inconsistent.

For another example of causal inconsistency, consider the database

Γ = { kind ⊃ popular,
      fat ⊃ ¬popular }.

Initially, this database is ignorant about whether the person is kind: Γ ⊭ kind and Γ ⊭ ¬kind. However, once we tell the database that John is fat, it jumps to the strange result that John must be unkind: Γ ∪ {fat} ⊨ ¬kind. Here also, the database contradicts common perceptions of causal influences, according to which no causal connection exists between kindness and weight. Therefore, database Γ is also causally inconsistent.

As it turns out, it is not uncommon for domain experts to construct databases that contradict their own perceptions of causal influences, especially when the database is large enough and has multiple authors. The reason is that domain experts tend to focus on the plausibility of individual sentences rather than on the interactions among these sentences or how they would respond to future information.

But even when an expert is careful enough to construct a causally consistent database, it is not uncommon to turn it into a causally inconsistent one in the process of augmenting it with default assumptions. For example, an expert could have constructed the following database:

Δ = { wet-ground ∧ ¬ab1 ⊃ it-rained,
      sprinkler-was-on ∧ ¬ab2 ⊃ wet-ground },

which is causally consistent because it remains ignorant about rain given information about the sprinkler. However, a nonmonotonic formalism that minimizes abnormalities would turn Δ into the database

Δ' = { ¬ab1 ∧ ¬ab2,
       wet-ground ∧ ¬ab1 ⊃ it-rained,
       sprinkler-was-on ∧ ¬ab2 ⊃ wet-ground },

which is causally inconsistent because it finds sprinkler-was-on sufficient evidence for it-rained: Δ' ⊭ it-rained and Δ' ∪ {sprinkler-was-on} ⊨ it-rained.
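The entailment claims about Δ can be checked by brute-force model enumeration (a toy propositional checker; the encoding and variable names are ours):

```python
from itertools import product

def entails(db, query, atoms):
    """db and query are predicates over a truth-assignment dict.
    db entails query iff query holds in every model of db."""
    return all(query(m)
               for m in (dict(zip(atoms, vals))
                         for vals in product([False, True], repeat=len(atoms)))
               if db(m))

atoms = ["wet", "rain", "sprinkler"]
# Delta: wet-ground => it-rained, sprinkler-was-on => wet-ground
delta = lambda m: (not m["wet"] or m["rain"]) and (not m["sprinkler"] or m["wet"])

print(entails(delta, lambda m: m["rain"], atoms))   # False: Delta alone is ignorant
print(entails(lambda m: delta(m) and m["sprinkler"],
              lambda m: m["rain"], atoms))          # True: the counterintuitive jump
```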
In fact, we shall see later that causally inconsistent databases are also not uncommon in ATMS implementations of diagnosis systems and Dempster-Shafer reasoning, thus leading to counterintuitive results (Laskey & Lehner 1989; Pearl 1990).

Given the importance of causal consistency, and given the tendency to generate causally inconsistent databases, we shall concern ourselves in this paper with formalizing this notion in order to support domain experts and commonsense formalisms in avoiding causally inconsistent databases.¹ In particular, we shall suggest the notion of a causal structure to represent perceptions of causal influences; provide a formal definition of when a database is causally consistent with a given causal structure; introduce symbolic causal networks as a tool for constructing causally consistent databases; and, finally, discuss various applications of symbolic causal networks, including nonmonotonic reasoning, truth maintenance, and reasoning about actions.

238 Causal Reasoning
From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Figure 1: A causal structure.

Figure 2: A causal structure.

Causality and Belief Change

Since causal consistency is relative to specific perceptions of causal influences, formalizing causal consistency requires one to represent these perceptions formally. For this purpose, we will adopt causal structures, which are directed acyclic graphs that have been used extensively in the probabilistic literature for the same purpose (Pearl 1988b; Spirtes, Glymour, & Scheines 1993) - see Figures 1 and 2.²

The parents of a proposition p in a causal structure are those perceived to be its direct causes. The descendants of p are called its effects and the non-descendants of p are called its non-effects. In the structure of Figure 2, drunk and injured are the direct causes of unable-to-stand; unable-to-stand and whiskey-bottle are the effects of drunk; while injured and bloodstains are its non-effects.

Propositions that are relevant to the domain under consideration but do not appear in a causal structure are called the exogenous propositions of that structure. Exogenous propositions are the source of uncertainty in causal influences. They appear as abnormality predicates in nonmonotonic reasoning (Reiter 1987), as assumption symbols in ATMSs (de Kleer 1986), and as random disturbances in probabilistic models of causality (Pearl & Verma 1991). Each state of exogenous propositions will be referred to as an extension. For example, if ab1 and ab2 are the exogenous propositions of the structure in Figure 1, then ¬ab1 ∧ ab2 is an extension of that structure. Given an extension of a causal structure, there would no longer be any uncertainty about the causal influences it portrays.

The basic premise of this paper is that changes in our beliefs are typically constrained by the causal structures we perceive (Pearl 1988a). And the purpose of this section is to make these constraints precise, so that a database is said to be consistent with a causal structure precisely when it does not contradict such constraints. The key to formalizing these constraints is the following interpretation of causal structures: The truth of each proposition in a causal structure is functionally determined by (a) the truth of its direct causes and (b) the truth of all exogenous propositions.

¹ The discussion in this paper is restricted to propositional databases.
² The results in this paper do not depend on a graphical representation of causal influences. For example, one can introduce a predicate Direct-Cause and proceed to axiomatize the contents of a causal structure.
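Under our reading of Figure 2's arrows (drunk → whiskey-bottle, drunk → unable-to-stand, injured → unable-to-stand, injured → bloodstains - an assumption, since the figure itself is not reproduced here), the effect/non-effect bookkeeping is a small graph computation:

```python
# Each proposition maps to its list of direct causes (its parents).
parents = {
    "whiskey-bottle": ["drunk"],
    "unable-to-stand": ["drunk", "injured"],
    "bloodstains": ["injured"],
    "drunk": [],
    "injured": [],
}

def effects(p):
    """Descendants of p in the causal structure: its effects."""
    kids = {q for q, ps in parents.items() if p in ps}
    return kids | {d for k in kids for d in effects(k)}

def non_effects(p):
    """Non-descendants of p (excluding p itself)."""
    return set(parents) - effects(p) - {p}

print(sorted(effects("drunk")))      # ['unable-to-stand', 'whiskey-bottle']
print(sorted(non_effects("drunk")))  # ['bloodstains', 'injured']
```

The two printed sets match the text's reading of Figure 2: whiskey-bottle and unable-to-stand are effects of drunk, while injured and bloodstains are its non-effects.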
Following are some constraints on belief changes that are suggested by the above interpretation of causal structures (Pearl 1988b): Common causes: In Figure 3a, the belief in c2 should be independent of information about cl assuming that no information is available about e and that all exoge- nous propositions are known. Indirect eflects: In Figure 3b, the belief in e should be independent of information about c when- ever proposition m and all exogenous propositions are known. Common e$ects: In Figure 3c, the belief in e2 should be independent of information about el given that proposition c and all exogenous propositions are known. These constraints on belief changes and others are summarized by the principle of causal indepen- dence, which is a version of the Markovian Con- dition in the probabilistic literature (Pearl 198813; Spirtes, Glymour, & Schienes 1993). In a nutshell, the principle says that “once exogenous propositions and the direct causes of a proposition are known, the belief in that proposition should become independent Causal Reasoning 239 Figure 3: Causal interactions. of information about its non-effects.“3 the database, We will formalize this principle in the remainder of this section and then use it later in defining causal consistency. But first, we need to define when database A finds X conditionally independent of Y given 2, that is, when adding information about Y to A does not change its belief in any information about X4 given that A has full information about Z.5 The following definition captures exactly this intuition: Definition 1 (Conditional Independence) Let X, Y, and Z be disjoint sets of atomic propositions and let X,Y, and Z be instantiations of these propo- sitions, respectively. Database A finds X independent of Y given Z precisely when the logical consistency of ,. h h n AU(Z,X) and AU(Z,Y) implies the logical consis- tency of A U {2, ?, 2). 
battery-is-ok > lights-on, ibattery-is-ok 1 lcar-starts, finds {lights-on) dependent on {car_starts), but finds them independent given {battery-is-ok}. We are now ready to state the principle of causal independence formally: Definition 2 (Causal Independence) Database A satisfies the principle of causal independence with re- spect to causal structure G precisely when (a) A is logi- cally consistent and (b) for every extension E of G that is logically consistent with A, the database A U (8) finds each proposition in 6 conditionally independent of its non-e$ects given its direct causes. This is equivalent to saying that if A has full informa- tion about Z, then the addition of information about Y to A will not change its belief in any information about X.6 For example, the database Consider the following database, it-rained V sprinkler-was-on E wet-ground finds {it-rained} independent of (sprinkler-was-on), but finds them dependent given {wet-ground}. Similarly, the database, A= wet-ground A labI > it-rained sprinkler-was-on A la b2 > wet-ground. This database does not satisfy the principle of causal independence with respect to the structure in Figure 1 because it finds sprinkler-was-on a sufficient evidence for it-rained under the extension -rabl A laba. it-rained > wet-ground, wet-ground 3 slippery-ground, finds (slippery-ground} dependent on {it-rained}, but finds them independent given {wet-ground}. Finally, Note, however, that although database A does not satisfy the principle of causal independence, it does not contradict it either. Specifically, the extended database AU {a bl V a b2) satisfies the principle because the added sentence abl V ab2 rules out the only exten- sion, TabI A laba, under which the database violates the principle. This suggests the following definition: Definition 3 (Causal Consistency) Let A be a database and let G be a causal structure. 
An extension C of G is causally consistent with A precisely when A U (8) satisfies the principle of causal independence with respect to g. 3 The requirement of knowing exogenous propositions renders this principle applicable to incomplete causal struc- tures and constitutes, in fact, a definition of what informa- tion ought to be summarized by the exogenous variables. 4 By informati on about a set of atomic propositions we mean a logical sentence constructed from these propositions. 5Database A has full information about atomic propo- sitions 2 if for each p in 2, A entails p or entails lp. ‘This notion of independence is isomorphic to the one known in the literature on relational databases as multi- valued embedded dependencies. Therefore, this notion of independence satisfies the graphoid axioms (Pearl 1988b), which include symmetry (X is independent of Y given 2 precisely when Y is independent of X given 2). Causal Consistency That is, the extension TabI A laba above is causally inconsistent with database A, while the remaining ex- tensions TabI A abz, abl A -rabz, and abl A abz are causally consistent with it. We can further define a database as causally consis- tent precisely when it has at least one causally con- sistent extension. A definition of causal consistency was given in (Goldszmidt & Pearl 1992) for databases 240 Causal Reasoning containing defeasible conditionals (defaults) and was based on a probabilistic semantics of defeasible condi- tionals. Definition 3 is based on standard propositional semantics, which makes it directly applicable to com- monsense formalisms based on classical logic. More- over, causal consistency as we have defined it here is a semantical notion, independent of database syntax. As we mentioned earlier, even when a domain ex- pert is careful enough to construct a causally consis- tent database, it is not uncommon for a nonmonotonic formalism to turn it into a causally inconsistent one. 
Consider, for example, the database

Δ = { whiskey-bottle ∧ ¬ab1 ⊃ drunk,
      drunk ∧ ¬ab2 ⊃ unable-to-stand,
      unable-to-stand ∧ ¬ab3 ⊃ injured,
      injured ∧ ¬ab4 ⊃ bloodstains },

which could easily be authored by a person perceiving the causal structure in Figure 2. When this database is fed to a nonmonotonic formalism that minimizes abnormalities, the formalism ends up augmenting it with the extension E = {¬ab1, ¬ab2, ¬ab3, ¬ab4}, which is causally inconsistent with Δ. Specifically, Δ ∪ E jumps to the conclusion bloodstains only because whiskey-bottle was observed: Δ ∪ E ⊬ bloodstains and Δ ∪ E ∪ {whiskey-bottle} ⊢ bloodstains.

With respect to the structure in Figure 2, database Δ has sixteen extensions corresponding to the different instantiations of the four abnormality predicates. As it turns out, all extensions containing ¬ab2 ∧ ¬ab3 are causally inconsistent, including the extension in which all abnormalities are minimized.

Separating the causal structure from the rule syntax has a number of merits. First, it keeps one within the realm of classical logic, which makes the approach applicable to logic-based formalisms such as circumscription and ATMSs. Next, using rules to communicate a causal structure may overburden domain experts. For example, to express the sentence ¬sprinkler-was-on ∧ ¬it-rained ⊃ ¬wet-ground, one needs three C-E rules: ¬sprinkler-was-on ∧ ¬it-rained →C ¬wet-ground, wet-ground ∧ ¬it-rained →E sprinkler-was-on, and wet-ground ∧ ¬sprinkler-was-on →E it-rained. Finally, it is not always clear whether a rule is causal or evidential. For example, given the rules it-rained →C ¬sprinkler-was-on⁷ and wet-ground →E it-rained, it is not clear whether the rule ¬it-rained ∧ wet-ground ⊃ sprinkler-was-on is evidential or causal.

One way to avoid selecting these counterintuitive extensions is to inform nonmonotonic formalisms about causal consistency and provide them access to causal structures.
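The two entailment claims in the bloodstains example can be checked by the same kind of brute-force model enumeration. The sketch below is our own illustration (atom names abbreviated; the helpers are hypothetical): it confirms that, under the abnormality-minimizing extension, the database concludes bloodstains exactly when whiskey-bottle is added.

```python
from itertools import product

ATOMS = ["whiskey", "drunk", "unable", "injured", "blood",
         "ab1", "ab2", "ab3", "ab4"]

# Δ: whiskey ∧ ¬ab1 ⊃ drunk,   drunk ∧ ¬ab2 ⊃ unable,
#    unable ∧ ¬ab3 ⊃ injured,  injured ∧ ¬ab4 ⊃ blood
A = [lambda m: (not (m["whiskey"] and not m["ab1"])) or m["drunk"],
     lambda m: (not (m["drunk"] and not m["ab2"])) or m["unable"],
     lambda m: (not (m["unable"] and not m["ab3"])) or m["injured"],
     lambda m: (not (m["injured"] and not m["ab4"])) or m["blood"]]

def entails(db, atom):
    # db entails atom iff db is consistent and atom holds in all its models
    ms = [m for bits in product([False, True], repeat=len(ATOMS))
          for m in [dict(zip(ATOMS, bits))] if all(c(m) for c in db)]
    return bool(ms) and all(m[atom] for m in ms)

# E: the abnormality-minimizing extension {¬ab1, ¬ab2, ¬ab3, ¬ab4}
E = [(lambda m, i=i: not m["ab%d" % i]) for i in (1, 2, 3, 4)]

print(entails(A + E, "blood"))                             # False: Δ ∪ E does not entail bloodstains
print(entails(A + E + [lambda m: m["whiskey"]], "blood"))  # True: it does once whiskey-bottle is added
```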
A nonmonotonic formalism would then insist that only causally consistent extensions are selected. This approach should then lead to causal versions of existing nonmonotonic formalisms - for example, causal circumscription, causal default logic, and so on - in which each theory has a causal structure associated with it (Goldszmidt & Pearl 1992). Another solution is to feed nonmonotonic formalisms databases that already satisfy the principle of causal independence. In this case, nonmonotonic formalisms need not know about causality; they are guaranteed to stay out of danger because any extension that is logically consistent with the database is also causally consistent with it. For example, the database Δ ∪ {ab2 ∨ ab3} satisfies the principle of causal independence because the four extensions that are causally inconsistent are also logically inconsistent with the database. We will provide in the next section a systematic method for constructing databases that satisfy the principle of causal independence, thus eliminating the need to test for the causal consistency of extensions.

Symbolic Causal Networks

We will concern ourselves in this section with providing a systematic procedure that is guaranteed to generate databases that are not only causally consistent, but also satisfy the principle of causal independence. Therefore, the generated databases can be given to nonmonotonic formalisms without having to worry about whether adding assumptions will make them causally inconsistent. The procedure we propose is that of constructing a symbolic causal network: a propositional database that is annotated by a causal structure. A symbolic causal network has two components: a causal structure G that captures perceptions of causal influences, and a set of micro theories capturing logical relationships between propositions and their direct causes - see Figures 4 and 5. A micro theory for p is a set of clauses δ, where 1.
each clause in δ refers only to p, its direct causes, and to exogenous propositions; 2. if δ entails a clause that does not mention p, then that clause must be vacuous.

Condition 1 ensures the locality of a micro theory to a proposition and its direct causes, while Condition 2 prohibits a micro theory for p from specifying a relationship between the direct causes of p.⁸

a1 => battery-ok
key-turned & battery-ok & a2 => car-starts
-battery-ok & a3 => -car-starts

Figure 4: A symbolic causal network.

The connection between causality and nonmonotonic reasoning was discussed in (Pearl 1988a), where a system, called C-E, has been proposed for ensuring the faithfulness of default inferences to causal perceptions. In the C-E system, causality is represented by classifying defaults into either causal or evidential rules. We observe, however, that by classifying rules as such, one is indirectly communicating a causal structure. For example, the C-E rules wet-ground →E it-rained and sprinkler-was-on →C wet-ground implicitly encode the causal structure in Figure 1. In the C-E system, a causal structure is used procedurally to block default inferences that contradict the structure. In the approach described here, a causal structure is used declaratively to classify extensions into those consistent with the structure and those inconsistent with it.

⁷ We have a device that deactivates the sprinkler when it detects rain.

One can ensure the previous conditions by adhering to micro theories that contain only two types of material implications: ψ ∧ α ⊃ p and φ ∧ β ⊃ ¬p, where 1. ψ and φ are constructed from the direct causes of p; 2. α and β are constructed from exogenous propositions; and 3. α ∧ β is unsatisfiable whenever ψ ∧ φ is satisfiable. For example, the sentences kind ∧ ¬ab1 ⊃ popular and fat ∧ ¬ab2 ⊃ ¬popular do not constitute a micro theory for popular since ¬ab1 ∧ ¬ab2 and kind ∧ fat are both satisfiable.
This leads to the relationship ¬ab1 ∧ ¬ab2 ∧ fat ⊃ ¬kind between weight and kindness, thus violating Condition 2 of micro theories.

We stress here that micro theories do not appeal to the distinction between evidential and causal rules. Formally, a micro theory contains standard propositional sentences and is constrained only by its locality (to specific propositions) and by what it can express about these propositions, both of which are characteristic of causal modeling. For example, one typically does not specify a relationship between the inputs to a digital gate by stating that certain input combinations would lead to conflicting predictions about the output.

If one induces a propositional database using a symbolic causal network - that is, by associating micro theories with the propositions of a causal structure - then one is guaranteed the following:

Theorem 1 Let Δ be a database induced by a symbolic causal network having causal structure G. Then Δ satisfies the principle of causal independence with respect to G.

⁸ The satisfaction of such local conditions permits us to predict feasible scenarios in a backtrack-free manner (Dechter & Pearl 1991).

As a representational language, symbolic causal networks are complete with respect to databases that do not constrain the state of exogenous propositions:

Theorem 2 Let Δ be a database satisfying the principle of causal independence with respect to a causal structure G. If Δ is logically consistent with every extension of G, then Δ can be induced by a symbolic causal network that has G as its causal structure.

Applications of Symbolic Causal Networks

The basic motivation behind symbolic causal networks has been their ability to guarantee causal consistency. But symbolic causal networks can be viewed as the logical analogue of probabilistic causal networks; see Table 1. Therefore, many of the applications of probabilistic causal networks have counterparts in symbolic causal networks.
We elaborate on some of these applications in this section. Other applications, such as diagnosis, are discussed elsewhere (Darwiche 1993).

Logical consistency  One of the celebrated features of probabilistic causal networks is their ability to ensure the global consistency of the probability distribution they represent as long as the probabilities associated with each proposition in a causal structure are locally consistent. Symbolic causal networks provide a similar guarantee: as long as the micro theories associated with individual propositions satisfy their local conditions, the global database is guaranteed to be logically consistent. This is a corollary of Theorem 1.

Causal truth maintenance  In the same way that probabilistic causal networks are supported by algorithms that compute probabilities (Pearl 1988b), symbolic causal networks are supported by algorithms that compute ATMS labels (Darwiche 1993; de Kleer 1986; Reiter & de Kleer 1987).⁹ Therefore, symbolic causal networks inherit the applications of ATMSs. The important difference with traditional ATMSs, however, is that the database formed by a symbolic causal network is guaranteed (by satisfying causal independence) to protect us from conclusions that clash with our causal understanding of the domain. The importance of this property is best illustrated by an example that uses ATMSs to implement Dempster-Shafer reasoning (Laskey & Lehner 1989; Pearl 1990).

Table 1: Analogous notions in probabilistic (left) and symbolic (right) causal networks.

  Represents:           probability distribution + effects of actions   |  propositional database + effects of actions
  Graphically encodes:  probabilistic independences + causal structure  |  logical independences + causal structure
  Guarantees:           probabilistic consistency                       |  logical consistency
                        Markovian condition                             |  causal independence
  Computes:             probabilities                                   |  arguments (ATMS labels)
Specifically, the Dempster-Shafer rules, wet-ground ⇒ it-rained and sprinkler-was-on ⇒ wet-ground, are typically reasoned about in an ATMS framework by constructing the database,

Δ = { wet-ground ∧ a1 ⊃ it-rained,
      sprinkler-was-on ∧ a2 ⊃ wet-ground },

and attaching probabilities .7 and .9 to the assumptions a1 and a2 (Laskey & Lehner 1989). Initially, the ATMS label of it-rained is empty and, hence, the belief in it-rained is zero. After observing sprinkler-was-on, however, the ATMS label of it-rained becomes a1 ∧ a2, which raises the belief in it-rained to .7 × .9 = .63. That is, the belief in it-rained increased from zero to .63 only because sprinkler-was-on was observed; see (Pearl 1990) for more related examples.

We get this counterintuitive behavior here because database Δ does not satisfy the principle of causal independence with respect to the causal structure in Figure 1. If the database satisfies this principle, the ATMS label of it-rained is guaranteed not to change as a result of adding sprinkler-was-on to the database - see (Darwiche 1993) for more details on this guarantee. For example, the database Δ ∪ {¬a1 ∨ ¬a2} satisfies the principle of causal independence with respect to the causal structure in Figure 1. Therefore, the ATMS label it assigns to it-rained is empty and so is the label that Δ ∪ {sprinkler-was-on} assigns to it-rained. This guarantees that the Dempster-Shafer belief in it-rained remains zero after sprinkler-was-on is observed.

⁹ More precisely, symbolic causal networks compute arguments, which are logically equivalent to ATMS labels but are not necessarily put in canonical form (Darwiche 1993). Computing arguments is easier than computing ATMS labels. In fact, the complexity of computing arguments in symbolic causal networks is symmetric to the complexity of computing probabilities in probabilistic causal networks (Darwiche 1992; 1993).
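The belief jump in the Dempster-Shafer example can be reproduced numerically. The sketch below is our own illustration (not the paper's algorithm): in the style of Laskey & Lehner, it computes belief as the total probability of the assumption settings under which the database consistently entails the query, and shows belief in it-rained going from 0 to roughly .63 after sprinkler-was-on is observed, then back to 0 once ¬a1 ∨ ¬a2 is added.

```python
from itertools import product

ATOMS = ["rained", "sprinkler", "wet", "a1", "a2"]
P = {"a1": 0.7, "a2": 0.9}  # probabilities attached to the assumptions

# Δ: wet-ground ∧ a1 ⊃ it-rained,  sprinkler-was-on ∧ a2 ⊃ wet-ground
RULES = [lambda m: (not (m["wet"] and m["a1"])) or m["rained"],
         lambda m: (not (m["sprinkler"] and m["a2"])) or m["wet"]]

def entails(db, atom):
    # db entails atom iff db is consistent and atom holds in all its models
    ms = [m for bits in product([False, True], repeat=len(ATOMS))
          for m in [dict(zip(ATOMS, bits))] if all(c(m) for c in db)]
    return bool(ms) and all(m[atom] for m in ms)

def belief(db, query):
    # Total probability of assumption settings under which db entails query;
    # inconsistent settings (e.g. a1 ∧ a2 once ¬a1 ∨ ¬a2 is added) contribute 0.
    total = 0.0
    for bits in product([False, True], repeat=2):
        env = dict(zip(["a1", "a2"], bits))
        p = 1.0
        for a, v in env.items():
            p *= P[a] if v else 1 - P[a]
        fixed = db + [(lambda m, a=a, v=v: m[a] == v) for a, v in env.items()]
        if entails(fixed, query):
            total += p
    return total

obs = [lambda m: m["sprinkler"]]             # observe sprinkler-was-on
fix = [lambda m: not (m["a1"] and m["a2"])]  # add ¬a1 ∨ ¬a2

print(belief(RULES, "rained"))               # 0.0
print(belief(RULES + obs, "rained"))         # ≈ 0.63: the counterintuitive jump
print(belief(RULES + fix + obs, "rained"))   # 0.0 again
```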
Reasoning about actions  Our focus so far has been the enforcement of causal constraints on belief changes that result from observations (belief revisions). But perceptions of causal influences also constrain (and in fact are often defined by) belief changes that result from interventions (named belief updates in (Katsuno & Mendelzon 1991)). For example, if we connect C in the first circuit of Figure 5 to a high voltage, we would come to believe that A and D will be set to OFF. But if we perform the same action on the second circuit, we would not come to the same belief. Note, however, that the two circuits have the same logical description and do not mention external interventions explicitly, which means that they would lead to equivalent belief changes under all observations.

The reason why the same action leads to different results from two logically equivalent descriptions is that the descriptions are accompanied by different causal structures. It is the constraints encoded by these structures that govern our expectations regarding interventions in these circuits (Goldszmidt & Pearl 1992). A related paper (Darwiche & Pearl 1994) provides a specific proposal for predicting the effect of actions when domain knowledge is represented using a symbolic causal network, showing also how the frame, ramification and concurrency problems can be handled effectively in this context. The key idea is that micro theories allow one to organize causal knowledge efficiently in terms of just a few basic mechanisms, each involving a relatively small number of propositions. Each external elementary action overrules just one mechanism, leaving the others unaltered. The specification of an action then requires only the identification of the mechanism which is overruled by that action.
Once this is identified, the overall effect of the action (or combinations thereof) can be computed from the immediate effect of the action, combined with the constraints imposed by the remaining mechanisms. Thus, in addition to encoding a set of current beliefs, and belief changes due to hypothetical observations, a causal database constrains how future beliefs would change in response to every hypothetical action or combination of actions (Pearl 1993). These latter constraints can in fact be viewed as the defining characteristic of causal relationships, of which the Markovian condition is a byproduct (Pearl & Verma 1991).

First circuit:   C & ok(X) => -A    -C & ok(X) => A    A & B & ok(Y) => D    -(A & B) & ok(Y) => -D
Second circuit:  A & ok(X) => -C    -A & ok(X) => C    A & B & ok(Y) => D    -(A & B) & ok(Y) => -D

Figure 5: Different symbolic causal networks leading to logically equivalent databases.

Conclusion

If a classical logic database is to faithfully represent our beliefs about the world, the database must be consistent with our perceptions of causal influences. In this paper, we proposed a language for representing such perceptions and then formalized the consistency of a logical database relative to a given causal structure. We also introduced symbolic causal networks as tools for constructing databases that are guaranteed to be causally consistent. Finally, we discussed other applications of symbolic causal networks, including the maintenance of logical consistency, nonmonotonic reasoning, Dempster-Shafer reasoning, truth maintenance, and reasoning about actions and change.

Acknowledgments

The research was partially supported by ARPA contract #F30602-91-C-0031, Air Force grant #AFOSR 90 0136, NSF grant #IRI-9200918, and Northrop-Rockwell Micro grant #93-124. This work benefitted from discussions with Moises Goldszmidt and Sek-Wah Tan.

References

Darwiche, A., and Pearl, J. 1994. Symbolic causal networks for reasoning about actions and plans.
Working notes: AAAI Spring Symposium on Decision-Theoretic Planning.

Darwiche, A. 1992. A Symbolic Generalization of Probability Theory. Ph.D. Dissertation, Stanford University.

Darwiche, A. 1993. Argument calculus and networks. In Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI), 420-427.

de Kleer, J. 1986. An assumption-based TMS. Artificial Intelligence 28:127-162.

Dechter, R., and Pearl, J. 1991. Directed constraint networks: A relational framework for causal modeling. In Proceedings, 12th International Joint Conference on Artificial Intelligence (IJCAI-91), 1164-1170.

Goldszmidt, M., and Pearl, J. 1992. Rank-based systems: A simple approach to belief revision, belief update and reasoning about evidence and actions. In Proceedings of the Third Conference on Principles of Knowledge Representation and Reasoning, 661-672. Morgan Kaufmann Publishers, Inc., San Mateo, California.

Katsuno, H., and Mendelzon, A. 1991. On the difference between updating a knowledge base and revising it. In Principles of Knowledge Representation and Reasoning: Proceedings of the Second International Conference, 387-394.

Laskey, K. B., and Lehner, P. E. 1989. Assumptions, beliefs, and probabilities. Artificial Intelligence 41(1):65-77.

Pearl, J., and Verma, T. 1991. A theory of inferred causation. In Principles of Knowledge Representation and Reasoning: Proceedings of the Second International Conference, 441-452.

Pearl, J. 1988a. Embracing causality in default reasoning. Artificial Intelligence 35:259-271.

Pearl, J. 1988b. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, Inc., San Mateo, California.

Pearl, J. 1990. Which is more believable, the probably provable or the provably probable. In Proceedings, CSCSI-90, 8th Canadian Conference on AI, 1-7.

Pearl, J. 1993. From Bayesian networks to causal networks.
Technical Report R-195-LLL, Cognitive Systems Laboratory, UCLA. (Short version in Statistical Science, Vol. 8, No. 3 (1993), pp. 266-269.)

Reiter, R., and de Kleer, J. 1987. Foundations of assumption-based truth maintenance systems: Preliminary report. In Proceedings of AAAI, 182-188. AAAI.

Reiter, R. 1987. Nonmonotonic reasoning. Ann. Rev. Comput. Sci. 2:147-186.

Spirtes, P.; Glymour, C.; and Scheines, R. 1993. Causation, Prediction, and Search. New York: Springer-Verlag.
Causal Default Reasoning: Principles and Algorithms

Hector Geffner
Departamento de Computacion
Universidad Simon Bolivar
Aptdo 89000, Caracas 1080-A, Venezuela
hector@usb.ve

Abstract

The minimal model semantics is a natural interpretation of defaults, yet it often yields a behavior that is too weak. This weakness has been traced to the inability of minimal models to reflect certain implicit preferences among defaults, in particular, preferences for defaults grounded on more 'specific' information and preferences arising in causal domains. Recently, 'specificity' preferences have been explained in terms of conditionals. Here we aim to explain causal preferences. We draw mainly from ideas known in Bayesian Networks to formulate and formalize two principles that explain the basic preferences that arise in causal default reasoning. We then define a semantics based on those principles and show how variations of the algorithms used for inheritance reasoning and temporal projection can be used to compute in the resulting formalism.

Motivation

The semantics of minimal models provides a direct interpretation of defaults: to determine the acceptable consequences of a classical theory W augmented with a set D of formulas labeled as 'defaults', the semantics selects the models of W that violate minimal subsets of D. The resulting semantics is simple and captures the basic intuition that 'as many defaults should be accepted as it is consistently possible'. Yet the conclusions sanctioned are too weak.

Consider for example the theory comprised of the defaults¹

(*)  r1: a → d,   r2: a ∧ b → ¬d,   r3: c → b

The defaults may stand for the rules 'if I turn the key, the car will start', 'if I turn the key and the battery is dead, the car won't start', and 'if I left the lights on last night, the battery is dead'.
They can also be thought of as a simplified representation of the Yale Shooting scenario (Hanks & McDermott 1987): 'if Fred is alive, he will be alive', 'if Fred is alive and I shoot, Fred won't be alive', and 'if I load the gun, I will shoot'.

¹ Throughout the paper, defaults p → q are regarded as material implications, not as rules of inference. Material implications which are firmly believed will be denoted with the symbol '⊃'.

Given the facts a and c, we would expect the conclusion b ∧ ¬d. Yet the facts produce three minimal models Mi, each Mi violating a single default ri, i = 1, 2, 3, with two of those models, M2 and M3, sanctioning exactly the opposite conclusion. Part of the reason the expected conclusion is not sanctioned is that we haven't explicated the precedence of the rule a ∧ b → ¬d over the conflicting but less 'specific' rule a → d. This precedence can be expressed in a number of ways: by giving the first rule priority over the second, by making the first rule a strict non-defeasible rule, or by adding explicit axioms to 'cancel' the second rule when the first rule is applicable. Each of these options has different merits. For our purposes what matters is that they all prune the minimal model M2 that violates the 'superior' rule r2, but leave the other two minimal models M1 and M3 intact. Hence, the expected conclusion b ∧ ¬d still fails to be sanctioned. The question this raises is: on what grounds can the 'unintended' minimal model M3, where rule r3 is violated, be pruned?

The default theory (*) is not very different from the theories handled by inheritance and temporal projection algorithms (Horty, Thomason, & Touretzky 1987; Dean & McDermott 1987). The idea in these algorithms is to use a default like A → p to conclude p from A, when either there are no 'reasons' for ¬p or when those reasons are 'weaker' than A → p.
The essence of these algorithms is a careful form of forward chaining that can be expressed as follows:

Procedure P0
  ⊢ p  if p ∈ W, or
       ⊢ A for some A → p in D, and all forward arguments for ¬p
       not weaker than A → p contain a rule B → q s.t. ⊢ ¬q

As it is common in these procedures, rules are definite, A and B are conjunctions of atoms, p and ¬p are incompatible propositions, and ⊢ A holds when ⊢ ai holds for each ai ∈ A. Likewise, forward arguments for ¬p are collections of rules Δ that permit us to establish ¬p from the facts W by reasoning along the direction of the rules. That is, such a collection of rules Δ must contain a rule C → ¬p such that C logically follows
In the theory (*) the ‘spurious’ model Ma pops up because the rules rl and r2 provide an argument against r3. This argument is not considered by PO be- cause it is not a forward argument. Since this is the right thing to do in this case, one may wonder whether non-forward arguments can always be ignored. The answer is no. For example, if d is observed, PO would still derive b, implicitly favoring rule r2 over r3 with no justification. The same is true for approaches in which defaults are represented by one-directional rules of inference. Interestingly, in such a case, the minimal model semantics behaves correctly. Causal Rule Systems Preliminary Definitions The problem of defining a semantics for causal default theories can be seen as the problem of distinguishing legitimate arguments from spurious ones. In this sec- tion we will look closer at this distinction in the context of a class of simple causal default theories that we call causal rule systems or CRSs for short. The language of causal rule systems is sufficiently powerful to model interesting domains and most scenarios analyzed in the literature but purposely excludes non-causal rules and disjunctions. We will report the extensions to handle these and other constructs elsewhere. A causal rule system T comprises a set D of de- feasible causal rules A ---f p where p is an atom and A is a conjunction of zero or more atoms, a set F of atomic facts, and a set C of constraints express- ing that a given conjunction of atoms B cannot be true. Variables are assumed to be universally quan- tified. Rules and constraints will refer to members of D and C respectively, or to ground instances of them. Likewise, constraints will be divided between background constraints and observed constraints. 
The former will express domain constraints and will be denoted by rules B → with no head (e.g., alive(p, t) ∧ dead(p, t) →), while the latter will express contingent constraints and will be denoted as ¬B (e.g., ¬[on(a, b, t1) ∧ on(b, c, t1)]). This distinction between background and evidence is implicit in probability theory and in Bayesian Networks (Pearl 1988b) and is used in several theories of default reasoning (Geffner 1992; Poole 1992). For simplicity, we will assume that background constraints B → involve exactly two atoms. We will say that such pairs of atoms are incompatible and use the expression ¬p to denote atoms q incompatible with p. The generalization to n-ary background constraints is straightforward but makes the notation more complex and is seldom needed.

Every rule will have a priority which will be represented by a non-negative integer; the higher the number associated with a rule, the higher its priority. This scheme is sufficiently simple and will not introduce the distortions common to total orderings of defaults because the scope of priorities will be local: priorities will only establish an order among rules A → p and B → ¬p whose consequents are incompatible. Priorities thus play a role analogous to local conditional probability matrices in Bayesian Nets, allowing us to determine the net effect of conflicting causal influences acting on the same proposition. Unless otherwise stated, all priorities will be assumed equal.

Finally, as in other representations involving causal relations (e.g., (Shoham 1988; Pearl 1988b)), we will require that causal rules be acyclic. To make this precise, let us define the dependency graph of a CRS as a directed graph whose nodes are ground atoms, and where for every instance of a causal rule A → p there is a c-link relating every atom a in A to p, and for every instance of a background constraint p ∧ q →, there are two k-links, one from p to q and another from q to p.
Then, a CRS is acyclic iff its dependency graph does not contain cycles involving c-links. It is very simple to check that the encoding of acyclic inheritance networks in CRSs is acyclic, as is the encoding of theories about change (see below).

Semantics

The key to distinguishing legitimate arguments from spurious ones lies in an idea advanced by Pearl in a number of places which is at the basis of the model of intercausal relations captured by Bayesian Networks. It says that two events should not interact through a common variable that they causally affect, if that variable or a consequence of it has not been observed (Pearl 1988b, pp. 19). As we will see below, there are arguments in causal rule systems that violate this criterion. To identify and prune such arguments, it will be useful to recall the distinction between forward and backward arguments:

Definition 2 An argument Δ against a rule A → p is a forward argument when Δ contains a rule B → ¬p such that B is supported by Δ.² An argument which is not a forward argument is a backward argument.

In causal rule systems, all forward and backward arguments arise from rules violating some constraint. That is, a consistent collection of rules Δ will be an argument against a default A → p when the rules Δ + {A → p} support a conjunction of atoms B ruled out by some constraint. Moreover, such a constraint can be either part of the evidence or part of the background. In a Bayesian Network, the former would be represented by an observation and the latter by the network itself.³ It is simple to check then that the arguments that violate Pearl's criterion are precisely the backward arguments that do not originate in evidential constraints but in background constraints. Such arguments establish a relation on events merely because they have conflicting causal influences on unobserved variables.
To identify and prune those arguments, we will first make precise the distinction between background and evidential constraints.

Definition 3 An evidential constraint is a formula ¬B, where B is a conjunction of atoms that is consistent with the background constraints but inconsistent with the background constraints and the evidence (facts and observed constraints).

Basically, ¬B is an evidential constraint when ¬B is an observed constraint, or when B ∧ A → or ¬(B ∧ A) are background or observed constraints and A is a conjunction of one or more facts. The backward arguments that arise from evidential constraints will be called evidential arguments, and the ones arising from background constraints will be called spurious arguments.

Definition 4 A collection of rules Δ is refuted by the evidence, or is an evidential nogood, when Δ supports B for an evidential constraint ¬B. A backward argument Δ against a rule r is evidential when Δ + {r} is refuted by the evidence, and is spurious otherwise.

For example, in the theory that results from (*) by replacing ¬d by an atom d' incompatible with d, there are no evidential constraints and the backward argument against rule r3 comprised of rules r1 and r2 is spurious. On the other hand, if d is observed, there will be an evidential constraint ¬d' and r2 will provide an evidential argument against r3.

² A formula w is supported by Δ when w logically follows from Δ and the facts.
³ The constraint that p and q cannot both be true can be captured in a Bayesian Network either by encoding p and q as values of a single variable, or by encoding them as values of two different variables with a third variable, observed to be false, which is true when both p and q are true.
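For the propositional theory (*), Definitions 1, 2, and 4 can be checked by brute force. The sketch below is our own illustration (helper names are hypothetical): it encodes ¬d as the atom d2, takes W = {a, c}, and enumerates arguments against each rule. With no evidential constraints in this scenario, the backward argument it finds against r3 is spurious, while the arguments against r1 and r2 are forward.

```python
from itertools import combinations

# Theory (*): r1: a → d,  r2: a ∧ b → d2,  r3: c → b,
# where d2 encodes ¬d (d and d2 are incompatible) and W = {a, c}.
FACTS = {"a", "c"}
RULES = {"r1": (("a",), "d"), "r2": (("a", "b"), "d2"), "r3": (("c",), "b")}
INCOMPATIBLE = {("d", "d2"), ("d2", "d")}

def closure(rule_names):
    # Horn forward chaining from the facts using only the given rules.
    atoms = set(FACTS)
    changed = True
    while changed:
        changed = False
        for name in rule_names:
            body, head = RULES[name]
            if head not in atoms and all(x in atoms for x in body):
                atoms.add(head)
                changed = True
    return atoms

def consistent(rule_names):
    atoms = closure(rule_names)
    return not any((p, q) in INCOMPATIBLE for p in atoms for q in atoms)

def arguments_against(r):
    # Definition 1: Δ ⊆ D with W + Δ consistent but W + Δ + {r} inconsistent.
    others = [n for n in RULES if n != r]
    return [set(delta)
            for k in range(1, len(others) + 1)
            for delta in combinations(others, k)
            if consistent(delta) and not consistent(delta + (r,))]

def is_forward(delta, r):
    # Definition 2: Δ contains a rule whose head is incompatible with r's
    # head and whose body is supported by Δ (and the facts).
    _, p = RULES[r]
    atoms = closure(delta)
    return any((RULES[n][1], p) in INCOMPATIBLE
               and all(x in atoms for x in RULES[n][0]) for n in delta)

for r in RULES:
    for delta in arguments_against(r):
        kind = "forward" if is_forward(delta, r) else "backward (spurious here)"
        print(r, sorted(delta), kind)
```

With all priorities equal, the two forward arguments are also causal in the sense of Definition 5, so r3 faces no causal or evidential argument and b is accepted, matching the analysis in the text.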
Before defining a semantics that reflects this distinction, let us define causal arguments as forward arguments with the appropriate strength:

Definition 5 A forward argument Δ against a default A → p is a causal argument when Δ contains a rule B → ¬p not weaker than A → p such that B is supported by Δ.

The basic principle in causal default reasoning can then be expressed as follows:

  Only causal and evidential arguments need to be considered in causal default reasoning. In particular, rules not facing causal or evidential arguments should be accepted.

Although the given definitions of causal and evidential arguments are tied to the language of causal rule systems, these notions, like the notions of causal and evidential support in Bayesian Networks, are more general. It should thus be possible to devise analogous definitions for more powerful languages.

The semantics of causal rule systems will reflect this principle. We will define it in two steps. Let us first say that an argument Δ is validated by a model M when M does not violate any rule in Δ, and that a rule r violated by a model M is causally (evidentially) justified when there is a causal (evidential) argument against r validated by M. Let us also say that a causal rule system is predictive when it does not give rise to evidential arguments; i.e., when no collection of rules is refuted by the evidence. Then, since in the absence of evidential arguments only causal arguments need to be considered, the semantics of predictive systems can be defined as follows:

Definition 6 The causal models of a predictive causal rule system are the models in which every rule violated is causally justified.

To extend this semantics to an arbitrary causal rule system T, we will use the expression T/Δ to denote the result of removing the rules in Δ from T. The minimal collections of rules Δ that render T/Δ a predictive system will be called culprit sets.
It’s easy to check that these culprit sets are exactly the minimal sets of rules that ‘hit’ all collections of rules refuted by the evidence (the evidential nogoods). The semantics of arbitrary causal rule systems can then be defined as follows:

Definition 7 The causal models of an arbitrary causal rule system T are the causal models of the predictive systems T/A for any culprit set A.

The system that results from the theory (*) after changing ¬d to d’ is a predictive system with a single class of causal models where r1 is the only violated rule. On the other hand, if the fact d is added, two culprit sets {r2} and {r3} arise, and the causal models of the resulting theory will satisfy r1 and violate one of r2 or r3.

Causal Reasoning 247

Definition 8 A ground atom p is a causal consequence of a causal rule system T iff p holds in all causal models of T.

Some Properties

Proposition 1 Causal models always exist when the facts and constraints are logically consistent.

Now let A[M] denote the collection of rules violated by a causal model M of a causal rule system T. Then, T/A[M] is a logically consistent set of Horn clauses, and thus has a unique minimal Herbrand model MH. Clearly MH is a causal model of T as well, and moreover, it is a canonical model in the sense that only models like MH need to be considered:

Proposition 2 A ground atom p is a causal consequence of a causal rule system T iff p is true in all its canonical causal models.

Every consistent causal rule system will have one or more canonical causal models. Finding one of them can be done efficiently:

Theorem 1 The problem of finding a canonical causal model for a finite propositional CRS is tractable.

The importance of this result is that many theories of interest have a unique canonical causal model. Let us say that a theory is deterministic when for every pair of rules A → p and B → ¬p with incompatible consequents and compatible antecedents, one rule has priority over the other.
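Since culprit sets are exactly the minimal hitting sets of the evidential nogoods, they can be enumerated directly once the nogoods are known. A brute-force sketch (exponential, but adequate for small rule systems; the encoding is ours):

```python
from itertools import chain, combinations

def minimal_hitting_sets(nogoods):
    """Enumerate the minimal sets of rules that intersect every nogood.
    Candidates are generated smallest-first, so supersets of an already
    accepted hitting set are pruned as non-minimal."""
    universe = sorted(set(chain.from_iterable(nogoods)))
    hits = []
    for k in range(len(universe) + 1):
        for cand in combinations(universe, k):
            s = set(cand)
            if all(s & set(ng) for ng in nogoods):
                if not any(h <= s for h in hits):   # keep only minimal ones
                    hits.append(s)
    return hits

# Example from the text: adding the fact d refutes the collection {r2, r3}
# (one evidential nogood), yielding the culprit sets {r2} and {r3}.
print(minimal_hitting_sets([{"r2", "r3"}]))
```

With several nogoods the minimal hitting sets interleave them, e.g. `minimal_hitting_sets([{"a", "b"}, {"b", "c"}])` yields `{b}` and `{a, c}`.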
Then for predictive theories that are deterministic we get:

Theorem 2 Causal rule systems which are predictive and deterministic possess a single canonical causal model.

Corollary 1 The problem of determining whether a given ground atom is a causal consequence of a finite, predictive and deterministic causal rule system is tractable.

A class of causal theories that are predictive and deterministic are the theories for reasoning about change which include no information about the ‘future’ (Lin & Shoham 1991; Sandewall 1993). They can be expressed as CRSs of the form:

Persistence: T(f, s) → T(f, r(a, s)) ; ¬T(f, s) → ¬T(f, r(a, s))
Action: rules of the form B → T(f, r(a, s)) and B → ¬T(f, r(a, s)), where B refers to situation s
Facts: T(f1, s0) ; T(f2, s0) ; ...
Constraints: T(f, s) ∧ T(g, s) → ⊥ for incompatible fluents f and g

It’s simple to verify that the resulting theories are acyclic, predictive and deterministic (assuming rules about change have priority over rules about persistence). An equivalent formulation based on time rather than on situations would have similar properties. Most semantics for reasoning about change coincide for theories like the one above: the preferred model is the one which is ‘chronologically’ minimal (Shoham 1988), where every ‘exception’ is explained (Gelfond & Lifschitz 1988), where actions are minimized (Lifschitz 1987), etc. Moreover, the proof-procedure P0 presented in Section 1 is adequate for such systems (the facts F should take the place of W in P0):

Theorem 3 The proof-procedure P0 is sound and complete for finite causal rule systems that are both predictive and deterministic. The procedure P0 remains sound but not necessarily complete for systems which are predictive but not deterministic.

Priorities Revisited

The results of the previous section seem to confirm that for predictive theories causal models are adequate. For more general theories, however, they are not. A simple example illustrates the problem.
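For persistence/action theories of this kind, the single canonical model can be constructed chronologically, situation by situation, with the action rules taking priority over persistence. A minimal sketch (the fluent names and the effect encoding are invented for illustration):

```python
def progress(state, effects):
    """One step of the chronological (canonical) model construction for a
    persistence/action theory: a fluent keeps its truth value (persistence
    rules) unless an action rule with higher priority overrides it.
    Both `state` and the result map fluent name -> bool."""
    new = dict(state)        # persistence: T(f, s) carries over to r(a, s)
    new.update(effects)      # action rules override the persistence defaults
    return new

# A load-then-shoot run over two fluents:
s0 = {"alive": True, "loaded": False}
s1 = progress(s0, {"loaded": True})                    # effect of loading
s2 = progress(s1, {"alive": False, "loaded": False})   # effect of shooting
print(s2)   # {'alive': False, 'loaded': False}
```

Because each step is deterministic, the construction yields exactly one model, in line with Theorem 2 and the 'chronologically minimal' preference.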
Consider the chain of rules:

r1: a → b,  r2: b → c,  r3: c → d

together with the rule r4: f → c’, where c and c’ are incompatible and r4 has priority over r2. Given a and f, we get a predictive, deterministic theory whose single (canonical) causal model M sanctions b and c’. If the prediction c’ is confirmed, however, another causal model M’ pops up, which refutes the prediction b. This is because the observation c’ yields an evidential constraint ¬c that refutes the rules r1 and r2, producing two culprit sets {r1} and {r2}.

The explanation for this behavior can also be found by looking at probability theory and Bayesian Networks. We have a ‘gate’ composed of two conflicting rules r4: f → c’ and r2: b → c, with r4 having priority over r2. The semantics partially reflects this priority by validating r4 in all causal models. Yet, while it makes b irrelevant to c’, it makes c’ relevant to b; i.e., c’ refutes b even though b does not refute c’. This is anomalous because irrelevance should be symmetric (Pearl 1988b). A common solution to this problem is to add ‘cancellation axioms’ (like making rule r2 inapplicable when f holds). Here we develop a different solution that avoids making such axioms explicit.

Let us say that a rule A → p is a defeater of a conflicting rule B → ¬p when its priority is higher, and that A → p is verified in a model when both A and p hold in the model. Then we will be able to regard a collection of rules A as irrelevant when those rules are defeated as follows:⁴

Definition 9 A collection of rules A is defeated or preempted in a causal rule system T when every causal model of T/A verifies a defeater for each rule in A.

Then the second principle needed is:

⁴The notions of defeat and preemption are common in inheritance algorithms and argument systems (e.g., (Horty, Thomason, & Touretzky 1987; Pollock 1987)). Here the corresponding notions are slightly more involved because rules are used to reason both forward and backward.
Arguments involving defeated rules are irrelevant.

Certainly, defeat is closed under union:

Theorem 4 If A1 and A2 are two sets of rules defeated in T, then the union A1 ∪ A2 of those sets is also defeated in T.

This means that in any causal rule system T there is always a unique maximal set of defeated rules, which we will denote as A*[T]. The strengthened causal consequences of a causal rule system T can then be defined as follows:

Definition 10 (Revised) A ground atom p is a causal consequence of a causal rule system T iff p is true in all causal models of T/A*[T].

Since the second principle was motivated by problems that result from the presence of evidential arguments, it’s reassuring that the new semantics is equivalent to the old one when such arguments are not present:

Theorem 5 A ground atom p is a causal consequence of a predictive causal rule system T iff p is true in all causal models of T.

To check whether a given atom p is a causal consequence, however, it’s not necessary to first identify the maximal set of rules defeated; any such set suffices:

Theorem 6 If a ground atom p is true in all causal models of T/A, for any set of rules A defeated in T, then p is a causal consequence of T.

We finally address the task of computing in general theories:

Theorem 7 Let P = {p1, p2, ..., pn} be a collection of atoms derived by the procedure P0 from a system T by application of a collection of rules A. Then each pi in P is a causal consequence of T if P shields A from every evidential argument A’ against rules in A; i.e., if every such A’ contains a rule B → ¬pi for some pi in P.

The new proof-procedure works in two stages: in the first it uses P0 to derive tentative conclusions, ignoring evidential counterarguments. In the second, it verifies that all such counterarguments are defeated, and therefore can legitimately be ignored. In the theory above, the procedure P0 yields b and c’ by application of the rules r1 and r4.
To verify whether b and c’ are actual causal consequences of the theory, Theorem 7 says we only need to consider the evidential arguments against r1 or r4. Since the only such (minimal) argument A’ = {r2, r3} contains a rule r2: b → c whose consequent c is incompatible with c’, we are done. Indeed, r2 is defeated in the theory, and b and c’ follow.

Related Work

The first principle is a reformulation of Pearl’s idea that ‘causes do not interact through the variables they influence if these variables have not been observed’ (Pearl 1988b, p. 19). Other attempts to use this idea in the context of causal default reasoning are (Pearl 1988a; Geffner 1992; Goldszmidt & Pearl 1992). The contribution of this work is to explain the patterns of causal default reasoning in terms of some simple and meaningful principles that tie up a number of important notions in default reasoning: minimal and ‘coherent’ models (see below), background vs. evidential knowledge, forward vs. backward reasoning, etc. Interestingly, the need to distinguish background from evidence also appears in systems that handle ‘specificity’ preferences (Poole 1992; Geffner 1992), and in a slightly different form, in certain systems for causal reasoning (Sandewall 1991; Konolige 1992).

Many approaches to causal default reasoning are based on a preference criterion that rewards the ‘most coherent’ models; namely, the models where the set of violated rules without a causal justification is minimal or empty (e.g., (Gelfond 1989), as in approaches based on non-normal defaults and the stable model semantics). For predictive theories, the most coherent models and causal models coincide; indeed, the causal models of predictive theories are defined as the models where all violated rules have a causal justification. Yet the two types of models differ in the general case.
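Stage one of this procedure (prioritized forward chaining, ignoring evidential counterarguments) can be traced on the chain example above (a → b → c → d, with r4: f → c′ stronger than r2); the encoding, with the atom c2 standing in for c′, is ours:

```python
# Rules: name -> (antecedent atoms, consequent); r4 has priority over r2,
# and c / c2 are incompatible.
rules = {"r1": ({"a"}, "b"), "r2": ({"b"}, "c"),
         "r3": ({"c"}, "d"), "r4": ({"f"}, "c2")}
incompatible = {("c", "c2"), ("c2", "c")}
priority = {("r4", "r2")}          # r4 > r2

def stage1(facts):
    """Forward chaining in which an applicable higher-priority rule with an
    incompatible consequent blocks a weaker conflicting rule (the 'gate')."""
    derived, applied = set(facts), set()
    changed = True
    while changed:
        changed = False
        for name, (ante, cons) in rules.items():
            if ante <= derived and cons not in derived:
                blocked = any((other, name) in priority and oante <= derived
                              for other, (oante, ocons) in rules.items()
                              if (cons, ocons) in incompatible)
                if not blocked:
                    derived.add(cons)
                    applied.add(name)
                    changed = True
    return derived, applied

derived, applied = stage1({"a", "f"})
print(sorted(derived))   # ['a', 'b', 'c2', 'f'] -- b and c2 derived, c blocked
```

Stage two would then check, per Theorem 7, that the only evidential argument against the applied rules, {r2, r3}, contains a rule (r2: b → c) whose consequent is incompatible with the derived c2, so it is shielded and can be ignored.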
For example, in a theory comprised of three rules ri: pi → qi, i = 1, ..., 3, where q2 is incompatible with both q1 and q3, the most coherent model given the facts p1, p2, p3 and ¬q1 is the model M that validates the rule r2. Yet the model M’ that validates r3 is also a causal model. Likewise, if r3 is given priority over r2, both M and M’ would be the most coherent models, but only the latter would be a causal model. In the two cases, causal models behave as if the rule r1, which is violated by the evidence, was excluded. The ‘coherence’ semantics, on the other hand, does not exclude the rule but rewards the models where it gets a causal explanation. The result is that sometimes the coherence semantics is stronger than causal models and sometimes it is weaker. More important, though, is that the coherence semantics violates the first principle; indeed, in the second case, the rule r3 fails to be sanctioned even though r3 does not face either causal or evidential arguments. As a result, the coherence semantics makes the proof-procedure described in Theorem 7 unsound.

Causal rule systems which are predictive can be easily compiled into logic programs with negation as failure. Every ground rule ri: Ai → pi can be mapped to a logic programming rule pi ← Ai, not abi, along with rules abj ← Ai, not abi for every conflicting rule rj: Aj → pj with priority equal to or smaller than ri. In addition, facts translate to rules with empty bodies. From the discussion above, it’s clear that the causal consequences of the original theory would be exactly the atoms that are true in all the stable models of the resulting program (Gelfond & Lifschitz 1988). The same mapping will not work for non-predictive systems, as the semantics and algorithms for logic programs are limited in the way negative information is handled (Geffner 1991).
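The compilation can be exercised on the two-rule gate from the previous section; the sketch below enumerates the stable models of the resulting program by brute force over candidate sets using the Gelfond-Lifschitz reduct (the program encoding as (head, positive body, negated body) triples is ours):

```python
from itertools import chain, combinations

# Translation of the gate r2: b -> c, r4: f -> c2 with r4 > r2 and facts b, f:
# each rule ri becomes pi <- Ai, not ab_i, and the stronger r4 contributes
# the cancellation rule ab_2 <- f, not ab_4.
program = [
    ("b",   [], []), ("f", [], []),      # facts: rules with empty bodies
    ("c",   ["b"], ["ab2"]),             # r2
    ("c2",  ["f"], ["ab4"]),             # r4
    ("ab2", ["f"], ["ab4"]),             # r4 cancels the weaker r2
]

def stable_models(program):
    atoms = sorted({h for h, _, _ in program}
                   | {a for _, p, n in program for a in p + n})
    models = []
    for cand in chain.from_iterable(combinations(atoms, k)
                                    for k in range(len(atoms) + 1)):
        s = set(cand)
        # Gelfond-Lifschitz reduct w.r.t. s, then its least model:
        reduct = [(h, p) for h, p, n in program if not (set(n) & s)]
        least = set()
        while True:
            new = {h for h, p in reduct if set(p) <= least} - least
            if not new:
                break
            least |= new
        if least == s:
            models.append(s)
    return models

print(stable_models(program))   # the unique stable model: c2 holds, c is blocked
```

The single stable model contains b, f, ab2 and c2 but not c, matching the causal consequences of the original gate.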
Causal rule systems are also related to ordered programs (Laenens & Vermeir 1990) where, using our language, facts are treated like rules, and all constraints are treated like background constraints. As a result, there are no evidential arguments and all reasoning is done along the direction of the rules. This is also common to approaches in which defaults are regarded as tentative but one-way rules of inference. We have argued that such interpretations of defaults may be adequate in the context of predictive theories but will fail in general: even if a default a → b does not provide a reason to conclude ¬a from ¬b, it may well provide a reason to avoid concluding a when ¬b is known.

Conclusions

We have presented a pair of principles that account for the basic preferences among defaults that arise in causal domains. We have also defined a semantics based on those principles and presented some useful proof-procedures to compute with it. The language of the formalism can be extended in a number of ways. Some extensions are more direct (e.g., n-ary background constraints, non-causal rules); others are more subtle (e.g., disjunctions). We are currently working on these extensions and will report them elsewhere. We are also looking for more efficient procedures that would avoid the need to precompute all ‘evidential nogoods’ when they exist.

Acknowledgments. Thanks to Wlodek Zadrozny, Pierre Siegel, Benjamin Grosof, and Kurt Konolige for discussions related to the content of this paper.

References

Baker, A., and Ginsberg, M. 1989. A theorem prover for prioritized circumscription. In Proceedings IJCAI-89, 463-467.

Dean, T., and McDermott, D. 1987. Temporal data base management. Artificial Intelligence 32:1-55.

Geffner, H. 1991. Beyond negation as failure. In Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning, 218-229.

Geffner, H. 1992. Reasoning with Defaults: Causal and Conditional Theories.
Cambridge, MA: MIT Press.

Gelfond, M., and Lifschitz, V. 1988. The stable model semantics for logic programming. In Proceedings of the Fifth International Conference and Symposium on Logic Programming, 1070-1080. Cambridge, MA: MIT Press.

Gelfond, M. 1989. Autoepistemic logic and formalization of commonsense reasoning: a preliminary report. In Reinfrank, M., et al., eds., Proceedings of the Second International Workshop on Non-Monotonic Reasoning, 177-186. Berlin, Germany: Springer Lecture Notes in Computer Science.

Goldszmidt, M., and Pearl, J. 1992. Stratified rankings for causal relations. In Proceedings of the Fourth Workshop on Nonmonotonic Reasoning, 99-110.

Hanks, S., and McDermott, D. 1987. Non-monotonic logics and temporal projection. Artificial Intelligence 33:379-412.

Horty, J.; Thomason, R.; and Touretzky, D. 1987. A skeptical theory of inheritance. In Proceedings AAAI-87, 358-363.

Konolige, K. 1992. Using default and causal reasoning in diagnosis. In Proceedings of the Third International Conf. on Principles of Knowledge Representation and Reasoning.

Laenens, E., and Vermeir, D. 1990. A fixpoint semantics for ordered logic. Journal of Logic and Computation 1(2):159-185.

Lifschitz, V. 1987. Formal theories of action. In Proceedings of the 1987 Workshop on the Frame Problem in AI, 35-57.

Lin, F., and Shoham, Y. 1991. Provably correct theories of action. In Proceedings AAAI-91, 349-354.

Pearl, J. 1988a. Embracing causality in default reasoning. Artificial Intelligence 35:259-271.

Pearl, J. 1988b. Probabilistic Reasoning in Intelligent Systems. Los Altos, CA: Morgan Kaufmann.

Pollock, J. 1987. Defeasible reasoning. Cognitive Science 11:481-518.

Poole, D. 1992. The effect of knowledge on belief: Conditioning, specificity and the lottery paradox. Artificial Intelligence 49:281-309.

Sandewall, E. 1991. Features and fluents. Technical Report R-91-29, CS Department, Linköping University, Linköping, Sweden.

Sandewall, E. 1993.
The range of applicability of nonmonotonic logics for the inertia problem. In Proceedings IJCAI-93, 738-743.

Shoham, Y. 1988. Reasoning about Change: Time and Causation from the Standpoint of Artificial Intelligence. Cambridge, MA: MIT Press.
Testing Physical Systems

Peter Struss
Technical University of Munich, Computer Science Dept., Orleansstr. 34, D-81667 Munich, Germany
struss@informatik.tu-muenchen.de

Abstract

We present a formal theory of model-based testing, an algorithm for test generation based on it, and outline how testing is implemented by a diagnostic engine. The key to making the complex task of test generation feasible for systems with continuous domains is the use of model abstraction. Tests can be generated using manageable finite models and then mapped back to a detailed level. We state conditions for the correctness of this approach and discuss the preconditions and scope of applicability of the theory.

Introduction

Testing means shifting a system into different states by appropriate inputs in order to find observations that determine its present behavior mode. Often, the tests are designed to confirm a particular behavior, usually the correct or intended one, for instance in manufacturing. In diagnosis we may, in contrast, want discriminating tests which effectively and efficiently identify the present (faulty) behavior. This paper focuses on confirming tests. There exist theories and algorithms for test generation in particular domains. For digital circuits, for instance, a solution is feasible because, although the number of components can be large, the individual components exhibit a simple behavior and, more fundamentally, because of the Boolean domain of the variables (Roth 1980). For variables with large domains or for physical systems with continuous behavior, these techniques are not applicable. In extending methods from model-based diagnosis, and exploiting our work on multiple modeling (Struss 1992), we propose a general theory that addresses the generation and application of tests in such domains. We first discuss the problems addressed and outline the basic ideas of our approach by presenting a simple (continuous and dynamic) system, a thyristor.
In the next section, we present the basic theory and an algorithm for test generation. Then testing of constituents in the context of a whole device is shown to be a straightforward extension. We outline briefly how testing is implemented by a standard model-based diagnosis engine, and finally, we discuss the achievements, preconditions, and restrictions of the approach. Due to space limitations, we do not always treat the most general cases, and we omit proofs. Both can be found in (Struss 1994).

The Intuition behind Testing

A thyristor is a semi-conductor with anode, A, cathode, C, and gate, G, that operates as a (directed) switch: it works in two states, either conducting current in a specified direction with almost zero resistance (exaggerated by the upper line of the simplified characteristic curve in Fig. 1a), or blocking current like a resistor with almost infinite resistance (the horizontal line). The transition from the OFF state to ON is controlled by the gate: if it receives a pulse, the thyristor “fires”, provided the voltage drop exceeds a threshold, VTh. There is a second way to fire a thyristor (which is normally avoided, but may occur in certain circuits and situations), namely if the voltage drop exceeds the breakover voltage, VBO, as is indicated by the characteristic in Fig. 1a. The annotation with 1 and 0 indicates the presence and absence of a gate pulse.

Now suppose we want to test a thyristor, i.e. to make sure that it behaves according to the described correct behavior. This creates several problems: voltage and current are considered to have a continuous domain. We can only gather a finite set of sample observations. But if they all agree with the desired behavior, what would then make us confident that more observations could not reveal a contradiction to this behavior? It is the fact that there is no other possible behavior (a faulty one) that would also be consistent with the previous observations.
What are the possible faults of a thyristor? A thyristor may be punctured, i.e. acting like a wire, or blocking like an open switch. A third kind of fault may be due to the fact that the actual breakover voltage is less than the nominal one, with the result that the thyristor fires at a voltage drop well below VBO without a gate pulse. With V’BO we denote the lowest tolerable actual breakover voltage (or the highest one which is considered to characterize a faulty behavior). Fig. 1 shows the (idealized) characteristics of these behaviors in comparison to the correct behavior.

Considering these behaviors (and, perhaps, looking at the figures), we may get the following idea for a set of two tests: the first one with a high voltage drop (i.e. between V’BO and VBO) without a gate pulse, and a second one with a medium or high voltage drop (i.e. between VTh and VBO) in conjunction with a gate pulse.

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Figure 1: The characteristics of the behaviors of a thyristor: a) correct b) blocking c) punctured d) with a reduced breakover voltage

If we obtain results that comply with the correct behavior in both cases (zero current for the former, positive current for the latter), then the thyristor must be correct, because these observations rule out all three types of faults: the first one contradicts the punctured behavior and a reduced breakover voltage, while the second one refutes the blocking mode.

This simple example illustrates several fundamental ideas:
• A particular behavior is confirmed if all others can be refuted by some finite set of observations.
• We obtain such sets of tests by describing behaviors through relations among variables and by determining their distinctions (i.e. set differences).
• We may end up with fewer tests than the number of behaviors to be refuted (in the thyristor example, two tests for an infinite number of behaviors).
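If the four characteristics are coarsened into finite relations of qualitative tuples, the refutation argument can be checked mechanically; the qualitative domains and behavior encodings below are illustrative only, not the paper's:

```python
# Qualitative tuples (dv, gate, i): dv is a landmark interval of the voltage
# drop ("med" ~ (VTh, V'BO), "high" ~ (V'BO, VBO)), gate is 0/1, and i is the
# sign of the current. All names and coarsenings are our own.
DV = ["neg", "low", "med", "high"]

def rel(pred):
    return {(dv, g, i) for dv in DV for g in (0, 1) for i in ("0", "+")
            if pred(dv, g, i)}

correct   = rel(lambda dv, g, i: i == ("+" if g == 1 and dv in ("med", "high") else "0"))
blocked   = rel(lambda dv, g, i: i == "0")
punctured = rel(lambda dv, g, i: i == ("+" if dv in ("low", "med", "high") else "0"))
reduced   = rel(lambda dv, g, i: i == ("+" if dv == "high" or (g == 1 and dv == "med") else "0"))

# Test 1: high voltage drop, no gate pulse; Test 2: medium drop with a pulse.
t1 = {(dv, g, i) for (dv, g, i) in correct if dv == "high" and g == 0}
t2 = {(dv, g, i) for (dv, g, i) in correct if dv == "med" and g == 1}
# A t1 observation refutes punctured and reduced; a t2 observation refutes blocked:
print(t1 & punctured, t1 & reduced, t2 & blocked)   # all empty
```

Both tests consist of tuples consistent with the correct behavior, and each fault model is disjoint from at least one of them, which is exactly the refutation pattern described above.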
Finally, the thyristor indicates a way to address the complexity problem when we have to handle large or even infinite domains:
• We may be able to perform test generation using a (qualitative) abstraction of the behavior description (e.g. with characterizations such as “high” and “medium”).

In the remainder of this paper we develop these ideas into a formal theory and an algorithmic solution for test generation and testing.

Test Generation for Single Constituents

First, we present the basic definitions and results that allow the generation of tests, based on relational behavior models. For all definitions and theorems, we first paraphrase them in English before presenting the formal statement. Throughout this section, we consider one constituent (component, mechanism, process, subsystem) of a system that is assumed to be accessible. It has a (not necessarily finite) set of possible behaviors, BEHVS, associated with it.

The Foundation: Finding Observable Distinctions

As motivated by the example (and common in model-based reasoning systems which use constraints for modeling), we describe behavior modes by the set of value tuples that are possible under this behavior, i.e. by a relation R in some representation. Using the formalism of (Struss 1992), such a representation is determined by selecting a vector v = (v1, ..., vk) of local variables and their respective domains: DOM(v) = DOM(v1) × DOM(v2) × ... × DOM(vk). For the time being, we assume one fixed representation (v, DOM(v)), because this simplifies the notation and is not an essential restriction (the general case is treated in (Struss 1994)). The behavior models of the thyristor can be described in the representation ((ΔV, i, gate), R × R × {0,1}). By SIT we denote the set of situations physically possible under the present mode of a constituent.
We define a behavior model M(R) as the claim that the relation R ⊆ DOM(v) covers all value tuples v may take in a situation s ∈ SIT:

Definition 1 (Behavior Model)
M(R) :⇔ ∀v0 ∈ DOM(v) ((∃s ∈ SIT v(s) = v0) ⇒ v0 ∈ R).¹

If M(Ri) is a model of the behavior Bi ∈ BEHVS, i.e. Bi ⇒ M(Ri), and if an observation (obs) contradicts the behavior model, i.e. lies outside Ri, then we can safely rule out the behavior: obs ⇒ ¬M(Ri), hence obs ⇒ ¬Bi. While this provides a way for refuting behaviors, we are interested in confirming a particular behavior. As suggested by the example, tests are defined as sets of value tuples Ti such that observing at least one tuple in each set in reality allows us to conclude the presence of a behavior mode. More formally: a set of value tuples V = {vi} containing at least one tuple out of each Ti, ∀Ti ∃vi ∈ V vi ∈ Ti, is called a hitting set of {Ti}. The fact that all the values in V are actually taken in some real situation is denoted by the sentence φV:

φV :⇔ ∀vi ∈ V ∃si ∈ SIT v(si) = vi

Definition 2 (Test, Confirming Test Set)
A test is a non-empty relation on some representational space: Ti ⊆ DOM(v). A set {Ti} of tests is a confirming test set for a behavior B0 ∈ BEHVS iff for all hitting sets V of {Ti}, observation of V entails B0: φV ⊨ B0.

What assured us that the tests in the previous section actually confirm the thyristor’s correct behavior? The fact that no other behavior mode would survive observations from both tests. In general, for each behavior Bj different from the one to be confirmed, there must exist a test Ti

¹v(s) = v0 means that v has the value v0 in situation s, rather than strict equality. Because v can take different values (from different domains, but also in the same domain), (Struss 1992) uses a special predicate Val.
L4?IlllWl { Ti } is a confirming test set for BO iff YBjE BEHVS B* # BO a (3Ti Bj a M(Ti’)). A test is only use ul / if it is observable. So, in the following, let OBS(~)EV ARS (v) be the set of observable variables in the representation (v,DOM(v)) with the respective projection P&s : DOMW + DOM(xobs). Definition 3 (Observable Test Set) A test set {Ti} is observable, if all Ti are observable, i.e. Ti c DOM(xobs). Lemma 1 indicates the way to generate confirming (observable) test sets for some behavior BOE BEHVS: we have to find (observable) distinctions between BO and each other mode Bi, and confirm these distinctions to be present. We can grasp them as the set differences Di:=Pobs(RO)\Pobs(Ri) of appropriate modeling relations of these behaviors. Note that the number of differences Di can be smaller than the number of behaviors to be refuted, because the modeling relations chosen may cover several behaviors (For the thyristor, for instance, RRED-B~ covers an infinite set of behaviors). Any observable test refuting M(Ri) and containing only tuples consistent with M(Ro) must be a subset of Di. Although we could use {Di} as a test set, we may further reduce the number of tests by replacing several Di by a common subset. We call a set of sets, {Tk}, a hitting set of sets of { Di}, if it contains a non-empty subset of each Di: VDi 3Tk @#Tk c Di . This is the basis for the generation of observable confirming test sets: Lemma2 Let {Ri I Ric D 0 M(v)} cover all behaviors (except Bo): YBj E BEHVS\{Bo} 3 Ri Bj a M(Ri), and RocDOM(x) cover Bo: Bo * M(Ro). If { Tk } is a hitting set of sets of { Di }, then it is an observable confirming test set for Bg. The thyristor test set is an illustration of Lemma 2. We also obtain a neccessary condition for the existence of a confirming test set: if BO is actually a restriction of some other behavior Bj, it is impossible to find a confirming test set for Bo. 
Note that even if R0\Ri is non-empty, Di may be empty, because the distinction is not observable in the given representation. Now we have determined test sets that confirm a particular behavior, if they are observed. However, we do not want to wait for them to drop from heaven, but we would like to enforce them by an appropriate causal input to the system.

Finding Deterministic Test Inputs

We assume that the causal variables are observable, which is reasonable, because it means we know what we are doing to the constituent. So, let CAUSE(v) ⊆ OBS(v) ⊆ VARS(v) be the set of susceptible variables, and

pcause: DOM(v) → DOM(vcause),  p’cause: DOM(vobs) → DOM(vcause)

the respective projections onto the set of input tuples. What we would like to have is test inputs, i.e. subsets of DOM(vcause), that are guaranteed to determine whether or not a particular behavior is present. More precisely: if we input one tuple out of each set to the constituent, the resulting value tuples of v deterministically either confirm or refute the behavior:

Definition 4 (Test Input, Deterministic Input Set)
A test input is a non-empty relation on DOM(vcause): TIi ⊆ DOM(vcause). A set of test inputs {TIi} is deterministic for a behavior B0 ∈ BEHVS iff for all sets V = {vi} ⊆ DOM(v) whose set of causes {pcause(vi)} forms a hitting set of {TIi}, observation of V is inconsistent with B0 or entails it: φV ⊨ ¬B0 or φV ⊨ B0.

How can we generate deterministic input sets? Unfortunately, for a test set {Ti} confirming B0, the input set {pcause(Ti)} is not necessarily deterministic. To illustrate this, we consider the relation Rneg, which is a subset of Rok\Rpunct (for ΔV < 0) and which could be used to rule out the fault “punctured” of the thyristor (Fig. 2). pcause projects to (ΔV, gate): pcause(Rneg) = (-∞, -ε) × {0}. However, if we choose a test input with (ΔV, gate) out of (-∞, -ε) × {0}, a value of i might be observed such that the vector lies in the intersection of Rok and Rpunct (indicated by “x” in Fig.
2) and, hence, is consistent with the correct behavior but also fails to refute the fault. As a cure, we have to exclude pcause(Rok ∩ Rpunct), i.e. to reduce the test input for ΔV to (-∞, -δ).

Figure 2: pcause(Rok\Rpunct) and pcause(Rok ∩ Rpunct) overlap

More generally, in order to construct input sets deterministic for some B0 ∈ BEHVS and leading to observable test sets, for each Bi ≠ B0 we have to determine and eliminate those inputs that possibly lead to the same observations under both B0 and Bi. This is the set p’cause(pobs(R0) ∩ pobs(Ri)). Hence, if we define

DIi := pcause(R0) \ p’cause(pobs(R0) ∩ pobs(Ri)),

then we are guaranteed that any input chosen from DIi causes an observable value tuple that is inconsistent with M(Ri) or with M(R0) (possibly with both of them). This is the idea underlying the proof of Theorem 3.

Theorem 3
Under the conditions of Lemma 2, each set of test inputs {TIk} that is a hitting set of sets of {DIi} is deterministic for B0, and {Tk} := {pobs(R0) ∩ p’cause⁻¹(TIk)} is an observable confirming test set for B0.

In practice, one wants to avoid test inputs that are extreme and possibly cause (or worsen) damage. For instance, we do not want to test with ΔV > VBO, because the thyristor could be destroyed. In this case, DIi may have to be further reduced by intersecting it with a set of admissible inputs: DIiadm := Radm ∩ DIi.

Lemma 2 does not prevent us from constructing observable tests that are not real, but rather an artificial result of the choice of model relations: a non-empty Di = pobs(R0)\pobs(Ri) may be due to choosing R0 much larger than what is covered by the behavior, and Di may potentially contain only physically impossible values. In contrast, simply because nothing prevents us from causing inputs and observing observables, we have

Theorem 4
The existence of a deterministic input set ensures the existence of an observable and controllable test set in reality.
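The difference between naively projecting R0\Ri and the definition of DIi can be reproduced on a toy relation pair exhibiting the Fig. 2 overlap, i.e. an input under which the fault can mimic the correct behavior (the tuples are invented for illustration):

```python
def proj(rel, idx):
    """Project a relation (set of tuples) onto the positions in idx."""
    return {tuple(t[i] for i in idx) for t in rel}

cause, obs = (0,), (0, 1)                # position 0 is the input; all observable

R0 = {(1, "lo"), (1, "hi"), (2, "hi")}   # ok behavior: input 1 may yield lo or hi
R1 = {(1, "lo")}                         # fault: input 1 yields lo

# Naively projecting the difference keeps input 1, although the outcome "lo"
# under input 1 is consistent with both behaviors:
naive = proj(R0 - R1, cause)
# DIi = pcause(R0) \ p'cause(pobs(R0) ∩ pobs(R1)) removes such shared inputs:
DI = proj(R0, cause) - proj(proj(R0, obs) & proj(R1, obs), cause)
print(sorted(naive), sorted(DI))   # input 2 alone is deterministic
```

This is exactly the cure applied to the thyristor's negative-voltage test input: the inputs whose possible outcomes lie in the intersection of the two relations are excluded.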
A Test Generation Algorithm

Here, we outline a family of algorithms (Fig. 3) based on Theorem 3, and discuss it briefly.

TI-SET = NIL
FOR R IN MODEL-RELATIONS DO
(1)   DI = Radm ∩ (pcause(R0) \ p’cause(pobs(R0) ∩ pobs(R)))
(2)   IF DI = ∅
      THEN “No (adm.) deterministic test input against” R
(3)        DI = Radm ∩ p’cause(pobs(R0) \ pobs(R))
           IF DI = ∅
           THEN “No (adm.) observable test against” R
                GOTO .NEXT
      Select TI ∈ TI-SET with DI ∩ TI ≠ ∅
      IF TI exists
(4)   THEN TI = TI ∩ DI
(5)   ELSE Append DI to TI-SET
.NEXT
END FOR
FOR TI IN TI-SET
(6)   Collect pobs(R0) ∩ p’cause⁻¹(TI) in T-SET

Figure 3: An algorithm for generating (preferably deterministic) test inputs TI and test sets T confirming B0

The algorithm iterates over the model relations of behaviors Bi ≠ B0 and attempts to create an admissible input set that discriminates between R0 and Ri deterministically and in an observable way according to the above definition of DIi (step 1). If this is impossible (2), it determines in (3) the admissible input set corresponding to an observable test (obtained as Di according to Lemma 2), which may fail as well. If there exist input sets from previous iterations with a non-empty intersection with the new DI, one of them is selected and replaced by this intersection (4). Thus, we account for the behavior(s) corresponding to the current R without increasing the number of tests. Otherwise, the current DI is added as a new test input in itself (5). In step 6, an observable test set is constructed from the final input set according to Theorem 3. It confirms B0 if all Ri could be accounted for. The algorithm generates the two tests for the thyristor mentioned before.

The selection of TI for step 4 opens space for variations and heuristics. For instance, simply the first one with a non-empty intersection could be chosen, or the one with the largest intersection.
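A minimal executable sketch of the loop in Fig. 3 over finite relations — omitting the admissibility filter Radm and the step-3 fallback, and using thyristor-flavoured toy relations of our own:

```python
def proj(rel, idx):
    return {tuple(t[i] for i in idx) for t in rel}

def generate_test_inputs(R0, fault_rels, cause, obs):
    """For each fault model R, compute the deterministic input set DI (step 1)
    and either shrink an overlapping existing input set (step 4) or append DI
    as a new test input (step 5)."""
    ti_set = []
    for R in fault_rels:
        di = proj(R0, cause) - proj(proj(R0, obs) & proj(R, obs), cause)
        if not di:
            continue   # no deterministic test input against R (step 2)
        for ti in ti_set:
            if ti & di:
                ti &= di          # step 4: reuse and shrink an input set
                break
        else:
            ti_set.append(di)     # step 5: new test input
    return ti_set

# Toy qualitative tuples (dv, gate, i); dv and gate are the inputs.
def rel(pred, DV=("low", "med", "high")):
    return {(dv, g, i) for dv in DV for g in (0, 1) for i in ("0", "+")
            if pred(dv, g, i)}

ok        = rel(lambda dv, g, i: i == ("+" if g and dv != "low" else "0"))
blocked   = rel(lambda dv, g, i: i == "0")
punctured = rel(lambda dv, g, i: i == "+")
tis = generate_test_inputs(ok, [blocked, punctured], cause=(0, 1), obs=(0, 1, 2))
print(tis)   # one pulsed test against 'blocked', one unpulsed against 'punctured'
```

On this coarse model the loop reproduces the pattern of the two thyristor tests: one input set with a gate pulse and sufficient voltage drop, and one without a pulse.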
The latter strategy always requires intersection with all existing input sets and assessment of the result, but may get closer to the optimum w.r.t. the number of tests generated. If there exists a single test, the algorithm generates it in linear time. In other cases, it is quadratic w.r.t. the number of model relations (which may be less than the number of behaviors) and may fail to generate a test set of minimal cardinality. Its result, including whether or not an existing minimal-cardinality test set is found, can depend on the ordering of the model relations. In many domains, it will pay off to use more elaborate and costly algorithms in order to reduce the number of tests required.

Making Test Generation Feasible through Model Abstraction

For physical systems with large or continuous domains and complex behavior, the question arises whether it is practically feasible to compute projections, intersections and set differences. The answer is that we do not have to. As pointed out in the beginning, we want to make test generation for such domains feasible by performing it with model relations in an abstract representation (with small domains). We have to formalize this procedure and prove its correctness. The key idea is simple: If M(Ri) is a model of Bi, i.e. Bi ⇒ M(Ri), and if R'i is another relation (preferably in a finite domain) that specifies a weaker model, i.e. M(Ri) ⇒ M(R'i), then refuting M(R'i) suffices to rule out Bi. Hence, we can build test sets from such finite relations R'i. The task is then to find conditions and a systematic way to generate models that are guaranteed to be weaker (in the logical sense specified above) by switching to a different representation (v', DOM'(v')) with finite domains.
In (Struss 1992), a large class of transformations between representations is characterized by conditions that are both rather weak and natural:

Definition 5 (Representational Transformation): A surjective mapping τ: DOM(v) → DOM'(v') is a representational transformation iff it has the following properties:
v(s) = v0 ⇒ v'(s) = τ(v0)
v'(s) = v'0 ⇒ ∃v0 ∈ τ⁻¹(v'0): v(s) = v0.

This simply means that, in the same situation, variables in the different representations have values related by τ. Under such representational transformations, models are preserved (Struss 1992):

Lemma 5: If τ: DOM(v) → DOM'(v') is a representational transformation, then M(R) ⇒ M(τ(R)) and M(R') ⇒ M(τ⁻¹(R')).

This means that if we map a model relation from some original representation into a different one under a representational transformation, the image will specify a weaker model, as required. In particular, we can choose a representation with a finite domain, construct (observable) confirming test sets and (deterministic) input sets in this representation from the transformed model relations, and map them back to the original detailed representation. The following theorem states that this actually yields (deterministic) input sets and (observable) confirming test sets in the original representation, thus justifying the intuitive approach:

Theorem 6: Let τobs: DOM(vobs) → DOM'(v'obs) and τcause: DOM(vcause) → DOM'(v'cause) be representational transformations. If {T'i} is an observable confirming test set for B0, then so is {Ti} := {τ⁻¹obs(T'i)}. If {TI'i} is a deterministic input set for B0, then so is {TIi} := {τ⁻¹cause(TI'i)}.

In particular, qualitative abstraction (mapping real numbers to a set of landmarks and the intervals between them) is a representational transformation. In the thyristor example, the landmarks can be chosen as 0, VTn, V'B0, VB0 for ΔV and 0 for i.
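Qualitative abstraction as a representational transformation can be illustrated with a small sketch. The landmark values and label names below are our own inventions, loosely modeled on the thyristor example; τ maps each real value to a landmark or to the interval between two landmarks.

```python
# Hypothetical sketch of qualitative abstraction; landmark values are invented
# stand-ins for 0, VTn, V'B0, VB0.

LANDMARKS = [0.0, 1.0, 5.0, 10.0]

def tau(x):
    """Map a real value to a finite qualitative label (surjective)."""
    for i, lm in enumerate(LANDMARKS):
        if x == lm:
            return ('at', i)
        if x < lm:
            return ('in', i - 1, i)   # open interval just below landmark i
    return ('in', len(LANDMARKS) - 1, len(LANDMARKS))

def abstract_relation(rel):
    """tau(R): the image of a detailed relation, specifying a weaker model."""
    return {(tau(dv), tau(i)) for dv, i in rel}

def consistent_with_abstract(sample, rel):
    # refuting the abstract model tau(R) suffices to rule out the behavior,
    # because every sample consistent with R is also consistent with tau(R)
    dv, i = sample
    return (tau(dv), tau(i)) in abstract_relation(rel)
```

The weakness property of Lemma 5 shows up directly: any detailed sample in R stays consistent with τ(R), while a sample refuted by τ(R) is guaranteed to be refuted by R as well.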
With the respective model relations in this representation, the test generation algorithm produces the deterministic input set {{(high, 0)}, {medium, high} × {1}}, where high = (V'B0, VB0) and medium = (VTn, V'B0). Of course, the abstract representation may be too coarse to allow for the separation of particular behaviors. We can use this as a criterion for selecting representations and behavior models, for instance, as the highest level that still allows us to distinguish one behavior from the others.

Remark 7: τcause being a representational transformation is also a necessary condition in the following sense: If it is violated, we can construct behaviors and model relations such that there exist observable test sets with deterministic input sets for them in the abstract representation but none in the stronger one. However, these constructed behaviors may be irrelevant to any real physical system, and the back-transformation of tests may work for the practical cases nevertheless.

Testing Constituents in an Aggregate

Quite often the constituent to be tested is embedded in a particular context, namely an environment consisting of other interacting constituents, and only the entire aggregate can be controlled and observed. Our approach is general enough to cover this case. We regard the aggregate as the constituent to be tested, and observables and causes are related to this aggregate constituent. The goal is to confirm one behavior of this aggregate constituent by refuting the other behaviors out of a certain set. This set is given as the behaviors of the aggregate resulting from the different behaviors of the constituent embedded in it. More formally, let a constituent C0 be in a particular context CTX consisting of constituents C1, ..., Cn with their respective variables.
The aggregate is Cagg = {Cj} ∪ {C0}, and representations for describing the aggregate's behavior can be obtained from the representations for single constituents by taking the union of the local variables. For the sake of simplicity, we assume that all local relations are already specified in the aggregate representation. Issues that arise if the assumption is dropped are discussed in (Struss 1994). If the M(Rj) are behavior models for constituents Cj, then RCTX = ∩Rj specifies a corresponding behavior model for CTX = {Cj}. If the M(Ri0) are models of the behaviors Bi of C0, then the relations Ri = RCTX ∩ Ri0 specify models of the behaviors of Cagg = CTX ∪ {C0} produced by the behaviors of C0 in CTX. In applying the test generation algorithm to these relations, we can construct observable tests and deterministic test inputs for the behavior of Cagg that involves the particular behavior B0 of C0. Since Pcause and Pobs project to input sets and observables of Cagg, the tests are observable and controllable through Cagg. Of course, this provides a confirming test set for B0 of C0 only if M(RCTX) holds. This corresponds, for instance, to the widespread assumption that while testing a constituent, its context works properly. However, we can also generate tests based on the assumption that the context may contain particular faults, which, for instance, have been hypothesized by a diagnosis step. By constructing all behavior modes of Cagg corresponding to a single fault of any constituent, we can generate a test set confirming the correctness of all constituents under this assumption.

Realization of Testing

Now we have to implement a test system, i.e. a program that takes the test inputs and the observed responses of the device and returns whether the respective behavior has been confirmed or refuted. For this purpose, we do not have to invent a new machinery but can apply an existing diagnostic system. Tests confirming a behavior are based on refuting models of all other behaviors.
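Both steps above can be sketched over extensional relations: composing the aggregate model (Ri = RCTX ∩ Ri0) amounts to a natural join on the shared variables, and a behavior is refuted once an observed assignment is inconsistent with its aggregate relation. All variable names, values and behavior names below are invented for illustration.

```python
# Hypothetical sketch of aggregate composition and refute-then-confirm testing.

def join(r1, r2):
    """Natural join of relations given as lists of variable->value dicts."""
    out = []
    for a in r1:
        for b in r2:
            shared = set(a) & set(b)
            if all(a[v] == b[v] for v in shared):
                merged = dict(a)
                merged.update(b)
                out.append(merged)
    return out

def confirmed(models, b0, observation):
    """models: behavior name -> aggregate relation; observation: var->value."""
    def consistent(rel):
        return any(all(row.get(v) == val for v, val in observation.items())
                   for row in rel)
    refuted = {name for name, rel in models.items() if not consistent(rel)}
    # B0 is confirmed only if it survives and every other behavior is refuted
    return b0 not in refuted and refuted == set(models) - {b0}

# context wire forcing x = y, composed with an ok and a stuck-low behavior
r_ctx = [{'x': 0, 'y': 0}, {'x': 1, 'y': 1}]
models = {
    'B0': join(r_ctx, [{'y': 0, 'z': 'low'}, {'y': 1, 'z': 'high'}]),
    'B1': join(r_ctx, [{'y': 0, 'z': 'low'}, {'y': 1, 'z': 'low'}]),
}
assert confirmed(models, 'B0', {'x': 1, 'z': 'high'})
```

Note that the observation {x: 0, z: 'low'} would confirm nothing, since it is consistent with both behaviors; this is precisely why deterministic test inputs are needed.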
Refuting behaviors through observations is also the principle of consistency-based diagnosis (de Kleer, Mackworth & Reiter 1990), and we can implement testing through one of the consistency-based diagnosis engines, GDE+ (Struss & Dressler 1989). In more detail, GDE+ represents a constituent by the set of behavior models M(Ri). If a complete test set {Tk} is observed, i.e. ϕV holds for some hitting set V of {Tk}, then we have

∀Tk ∃s ∈ SIT ∃yk ∈ Tk: v(s) = yk.

By construction, there exists for each Di at least one Tk ⊆ Di. Hence, it follows that

∀i ≠ 0 ∃s ∈ SIT ∃yi ∈ Di: v(s) = yi,

which means GDE+ refutes all behaviors except B0:

∀i ≠ 0 ∃s ∈ SIT ∃yi ∉ Ri: v(s) = yi ⇒ ∀i ≠ 0 ¬M(Ri) ⇒ ∀i ≠ 0 ¬Bi.

Then GDE+ confirms B0 by applying its "physical negation" rule (stating the completeness of the enumerated behaviors):

¬B1 ∧ ¬B2 ∧ ... ∧ ¬Bn ⇒ B0.

Of course, observation of a value outside R0 lets GDE+ refute B0. In summary, GDE+ makes the inferences required for the application of a deterministic input set. Note that, for the purpose of testing, we can replace the constituent's model set {M(Ri)} by the complements of the tests, {M(Tkᶜ)}, thus potentially reducing the number and, perhaps, the complexity of models to be checked. (Again, the details are discussed in (Struss 1994).)

Conclusions

We make the rather strong claim that the theory presented here really solves the problem of testing physical systems. It solves it "in principle", in the same sense as model-based diagnosis is a solution to the problem of fault localization and identification. By this, we want to emphasize two aspects: On the positive side, it is a general theory covering large classes of devices, for which there exists no formal theory or systematic solution of the testing problem today.
All other solutions to test generation are only variations of this principled approach, perhaps by applying heuristics, making certain assumptions, or exploiting particularities of the domain (for instance, we can show that the D-algorithm (Roth 1980) is a specialization of our algorithm for digital circuit testing). People from model-based diagnosis in particular may be sceptical about the necessity of (complete sets of) fault models for this approach. However, knowledge (or assumptions) about the possible faults is not a drawback of our system, but is inherent to the task of testing. In contrast to diagnosis, where we may be content with refutation of (correct) behaviors, testing aims at confirming a particular behavior, usually the correct one. This is impossible unless we make certain assumptions about the other possible behaviors, although this may happen unconsciously and implicitly. (This is why we are talking about testing of physical systems and, for instance, not about testing system designs or software.) Our approach has the advantage of making such assumptions explicit (and the multiple modeling framework allows us to treat them as defeasible hypotheses, see (Struss 1992)). The representation through relations is quite natural for broad classes of physical systems. Note that the models are not required to be deterministic (remember the model of the class of thyristor faults called "Reduced VB0"). On the problem side, it is a solution only "in principle", because it shifts the burden to the hard task of modeling. A major problem is finding appropriate models of devices with complex dynamic behavior. The thyristor, a dynamic device, illustrates that it can be possible to do the testing under temporal abstraction. Model abstraction is the key for the feasibility of the algorithm. But the models have to be strong enough to distinguish the behavior of interest from the other ones.
We do not expect the algorithm to handle systems with thousands of components in a flat structure. But first experiments suggest that it can produce results in a reasonable amount of time for devices which are already complex enough to prohibit the completeness and/or optimality of manually generated tests. Currently, we are exploring binary decision diagrams as a compact representation of the model relations. In this paper, we considered only testing with the goal of confirming one particular behavior. Testing in the context of diagnosis for identifying the present behavior is the subject of another paper. Other perspectives are supporting design for testability and sensor placement. In summary, we presented an approach to model-based test generation and testing that makes a large class of systems amenable to principled methods and well-founded algorithms. The exploitation of model abstraction is crucial to making the task practically feasible for an interesting class of technical systems, notwithstanding the fact that the general task of hypothesis testing is NP-complete (McIlraith 1993). Finally, the basis of the theory is quite simple, simple enough to be powerful.

Acknowledgements

This work has been supported in part by the Christian-Doppler-Labor of the Technical University of Vienna.

References

de Kleer, J., Mackworth, A., and Reiter, R. 1990, Characterizing Diagnoses. In Proceedings of AAAI-90, 324-330.
McIlraith, S. 1993, Generating Tests Using Abduction. In Working Papers of the Fourth International Workshop on Principles of Diagnosis, Aberystwyth, 223-235.
Roth, G. P. 1980, Computer Logic, Testing, and Verification. Rockville: Computer Science Press.
Struss, P. 1992, What's in SD? Towards a Theory of Modeling for Diagnosis. In: Hamscher, W., Console, L., and de Kleer, J., eds., Readings in Model-based Diagnosis. San Mateo: Morgan Kaufmann, 419-449.
Struss, P., and Dressler, O.
1989, "Physical Negation" - Integrating Fault Models into the General Diagnostic Engine. In Proc. 11th Int. Joint Conf. on Artificial Intelligence, Detroit, MI, 1318-1323.
Struss, P. 1994, A Theory of Testing Physical Systems Based on First Principles, Technical Report, Christian-Doppler-Labor, Technical University of Vienna.
Abstraction in Bayesian Belief Networks and Automatic Discovery From Past Inference Sessions

Wai Lam*
Department of Computer Science
University of Waterloo
Waterloo, Ontario, Canada, N2L 3G1
wlaml@logos.uwaterloo.ca

Abstract

An abstraction scheme is developed to simplify Bayesian belief network structures for future inference sessions. The concepts of abstract networks and abstract junction trees are proposed. Based on the inference time efficiency, good abstractions are characterized. Furthermore, an approach for automatic discovery of good abstractions from past inference sessions is presented. The learned abstract network is guaranteed to have a better average inference time efficiency if the characteristic of the future sessions remains more or less the same. A preliminary experiment is conducted to demonstrate the feasibility of this abstraction scheme.

1 Introduction

One of the most advanced techniques for conducting exact probabilistic inferences in a multiply-connected Bayesian belief network is the junction tree approach developed by (Jensen, Lauritzen, & Olesen 1990; Jensen, Olesen, & Andersen 1990). Whereas it provides a good structure for propagating and updating beliefs through local computations in an object-oriented fashion, the inference time complexity is still intractable in the worst case due to the fact that probabilistic inference on Bayesian belief networks is an NP-hard problem (Cooper 1990) in general. In this paper we explore an approach to improving the inference time efficiency by means of abstraction. The concepts of abstract networks and abstract junction trees are characterized. The essence of this abstraction scheme is to hide those unimportant ground subnetworks so that probabilistic reasoning can be conducted on a higher abstract level with significantly increased efficiency.
In cases where we need to know the beliefs in the hidden ground subnetwork, we can readily restore the ground subnetwork structure and resume the inference. The main goal of our approach is to reduce the average inference time for future inference sessions. In accordance with this objective, we characterize good abstract junction trees. Based on the characteristic of the past inference sessions, a method is developed for automatic discovery of good structures for abstraction.

* Wai Lam's work was supported by an OGS scholarship.

Some work has been done on improving the time efficiency of probabilistic inference on Bayesian belief networks. In this paper we concentrate on exact probabilistic inference instead of approximate inference. An early technique was due to Baker and Boult, who proposed a technique for pruning a Bayesian network structure before conducting belief updating and propagation given an inference session (Baker & Boult 1990). This approach prunes away those variables that are probabilistically independent of the variables of interest in the session. However, if the majority of the variables are not probabilistically independent, the pruned structure is almost the same size as the original one. Heckerman developed a kind of structure known as similarity networks which can deal with some large and complex structures common in fault diagnosis domains such as medical diagnosis (Heckerman 1990). Whereas it is very useful in structures containing a distinguished variable, it cannot be applied to an arbitrary network structure in general. Recently, (Xiang, Poole, & Beddoes 1992) developed multiply sectioned Bayesian networks which partition an original network into separate localized Bayesian subnetworks. It requires an assumption that the whole network can be viewed as a collection of natural subdomains, where the user will pay attention to one subdomain for a period of time before switching to another subdomain.
In contrast, the abstraction approach proposed in this paper does not impose any restrictions on the network structure.

2 Overview of Basic Idea

Figure 1: A Ground Network and Its Abstract Network

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.
Uncertainty Management 257

Consider a Bayesian belief network for a concerned domain with the ground structure as depicted in Figure 1. Suppose many inference sessions only involve probabilistic inferences between the elements in the variable set {A, B, C, D, E, F} (in other words, the variable set {P, Q, R, S, T} is not involved in most inference sessions). For instance, in a particular session, the beliefs of A and F are queried given the evidences instantiated for the variables B and C. In order to compute the posterior beliefs of the queried variables in these sessions, we need to perform belief updating and propagation on the entire network structure.

Suppose, based on past experience, this observation is true for most inference sessions. We can summarize these inactive variables to form four abstract arcs, namely: C → E, D → E, C → F and D → F. Figure 1 shows the structure of the new abstract network that has incorporated these abstract arcs (the abstract arcs are denoted by dashed lines). Instead of using the original ground network, this abstract network is used for conducting inferences. It raises an issue regarding how to determine the conditional probability parameters associated with the abstract arcs. However, if appropriate conditional probability parameters could be determined, the probabilistic inference results based on this abstract network would be exactly the same as those computed from the original ground network. Clearly, the inference time will be greatly reduced since the structure of the abstract network is simpler and smaller. Determining the conditional probability parameters for the abstract arcs is not a trivial task, since we require that the joint probability distribution of the abstract network be the same as that of the ground network marginalized appropriately. In Section 3.4, we show that the conditional probability parameters of the abstract arcs can be calculated based on a local subnetwork structure consisting of the variables to be abstracted.

The inference time efficiency can be briefly overviewed as follows: Some computational efforts are needed for calculating the conditional probability parameters associated with the abstract arcs and constructing the abstract junction tree. Nevertheless, this step is required only once for a network since it is not required during the inference sessions. If a good abstraction has been chosen when generating the abstract network, we expect most of the inference sessions only deal with the variables in the abstract network, and thus abstract expansions will rarely occur. Under this circumstance, it greatly reduces the inference time even though some computational costs are required for expansions in a few sessions. Section 5 will compare in detail the inference time efficiency of the ground and the abstract network. Based on the analysis of the inference time efficiency, we characterize a condition for good abstractions.

3 Abstract Networks and Abstract Junction Trees

As outlined in Section 2, Jensen's junction tree approach (Jensen, Lauritzen, & Olesen 1990) is chosen in our abstraction scheme due to its great flexibility in reasoning about a multiply-connected network. It transforms the original network into a secondary clique tree structure, called a junction tree, where belief updating and propagation is carried out.
The nodes and edges in a junction tree are known as junction nodes and junction links respectively. Each junction node contains a number of variables from the original network. Associated with each junction link, there is another structure called a separator, which contains the intersection of the variables in both junction nodes connected by the junction link. Moreover, there is a belief table stored in each junction node and each separator. In an inference session, the evidences are entered into the junction tree and belief propagation is performed on the tree structure. The main operations for belief propagation are Collect-Evidence and Distribute-Evidence as described in (Jensen, Lauritzen, & Olesen 1990).

One of the advanced methods for conducting inference in Bayesian networks is the junction tree approach proposed by (Jensen, Lauritzen, & Olesen 1990). It transforms the original network into a secondary clique tree structure where belief updating and propagation is carried out. The inference in our abstraction scheme is also based on the junction tree approach. We present how the abstract arcs incorporate into a junction tree environment, forming an abstract junction tree. Hence, the actual belief updating and propagation are performed on the abstract junction tree structure after the location of the abstractions are determined and assimilated. If there is a need to compute the belief of a variable currently not found in the abstract junction tree in a particular session, an abstract expansion will be executed on the appropriate spot of the junction tree, and belief propagation can be done accordingly to obtain the required result. If the abstract junction tree is a good abstraction of the domain, we expect the need for abstract expansion to be very infrequent. Based on the pattern of inference sessions, we characterize good subnetworks where abstraction can be done.

We use the term ground network and ground junction tree referring to the original Bayesian belief network and its corresponding junction tree respectively. Before discussing the concepts of abstract networks and abstract junction trees, we first introduce the notions of self-contained subnetworks and self-contained junction subtrees.

3.1 Definitions

Let the variables of a ground network be Z̃ = {X1, X2, ..., Xn}.

Definition 3.1 A self-contained subnetwork is a connected subgraph of a ground network and possesses the following property: Let the variables in the subnetwork be C̃ (i.e., C̃ ⊆ Z̃); ∀Xi, Xj ∈ C̃, if Xk is a variable along a directed path between Xi and Xj, then we have Xk ∈ C̃.

Intuitively, a self-contained subnetwork is a compact, connected unit in the underlying ground network.

Definition 3.2 With respect to a self-contained subnetwork consisting of the variable set C̃, a destination variable set contains every variable Xi such that it is outside the subnetwork (i.e., Xi ∈ (Z̃ − C̃)) and Xi is a direct successor of a variable inside the subnetwork. Also, a source variable set contains every variable Xj such that it is outside the subnetwork and Xj is a direct parent of a variable in the subnetwork or a direct parent of a variable in the corresponding destination variable set.

For instance, in the ground network of Figure 1, the subnetwork comprising the variables {P, Q, R, S, T} and the associated arcs inside the subnetwork is an example of a self-contained subnetwork. The sets of variables {C, D} and {E, F} are its corresponding source and destination variable sets respectively. Note that the parents of each destination variable must be in the source variable set, or the subnetwork variable set, or the destination variable set itself.
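Definitions 3.1 and 3.2 can be checked mechanically. The sketch below assumes the ground DAG is given as a map from each node to its set of parents (the representation and all names are ours), and reproduces the {P, Q, R, S, T} example under one plausible topology for Figure 1.

```python
# Hypothetical sketch of Definitions 3.1-3.2 over a parent-map DAG encoding.

def descendants(dag, x):
    # all nodes reachable from x along directed arcs
    kids = {c for c, ps in dag.items() if x in ps}
    out = set(kids)
    for k in kids:
        out |= descendants(dag, k)
    return out

def is_self_contained(dag, sub):
    # any node on a directed path between two members must itself be a member
    for a in sub:
        for mid in descendants(dag, a) - sub:
            if descendants(dag, mid) & sub:
                return False
    return True

def dest_and_source(dag, sub):
    # destination: outside nodes with a direct parent inside the subnetwork;
    # source: outside parents of subnetwork or destination variables
    dest = {c for c, ps in dag.items() if c not in sub and ps & sub}
    src = {p for c in (sub | dest) for p in dag[c]} - sub - dest
    return dest, src
```

Under the assumed topology C → P, D → Q, {P, Q} → R, R → {S, T}, S → E, T → F, the set {P, Q, R, S, T} is self-contained with destination set {E, F} and source set {C, D}, matching the example in the text.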
For a given self-contained subnetwork, we identify a self-contained junction subtree as follows:

Definition 3.3 With respect to a self-contained subnetwork, a self-contained junction subtree is a subtree in the ground junction tree and is composed of: i) all junction nodes containing a variable in the self-contained subnetwork; ii) the junction links associated with these junction nodes in the subtree. The junction node adjacent to the self-contained junction subtree in the ground junction tree is known as the adjacent junction node.

Essentially, a self-contained junction subtree in a ground junction tree is the counterpart of a self-contained subnetwork in a ground network.

3.2 Abstract Network Construction

Now, we are in a position to explain our abstraction scheme. In our scheme, abstraction is performed on a self-contained subnetwork unit. Basically, the whole subnetwork is summarized as a collection of abstract arcs, as discussed below. Consider a self-contained subnetwork ΘC consisting of the variable set C̃ = {C1, C2, ..., Ci} from a ground network. Let the source variable set S̃ with respect to ΘC be {S1, S2, ..., Sm}, and the destination variable set D̃ with respect to ΘC be {D1, D2, ..., Dk}. Suppose that the variables are numbered according to the corresponding variable ordering in the ground network. Let the ordering of all the above variables in the ground network be S1, S2, ..., Sm, C1, C2, ..., Ci, D1, D2, ..., Dk. An abstract arc is constructed by linking a source variable Sm' (i.e., Sm' ∈ S̃) to a destination variable Dk' (i.e., Dk' ∈ D̃) if there exists a directed path from Sm' to Dk' in the ground network. As a result, associated with each self-contained subnetwork, there is a group of abstract arcs linking the source and destination variables.
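Abstract-arc construction is thus a reachability test. Assuming a child-map encoding of a plausible Figure 1 topology (the encoding and topology are ours, not the paper's), the four arcs C → E, D → E, C → F, D → F fall out directly:

```python
# Hypothetical sketch of abstract-arc construction via directed reachability.

def reachable(children, start):
    # iterative depth-first search over the child map
    seen, stack = set(), [start]
    while stack:
        x = stack.pop()
        for c in children.get(x, ()):
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def abstract_arcs(children, sources, dests):
    # link a source to a destination iff a directed path connects them
    return {(s, d) for s in sources for d in dests
            if d in reachable(children, s)}

children = {'C': {'P'}, 'D': {'Q'}, 'P': {'R'}, 'Q': {'R'},
            'R': {'S', 'T'}, 'S': {'E'}, 'T': {'F'}}
assert abstract_arcs(children, {'C', 'D'}, {'E', 'F'}) == {
    ('C', 'E'), ('C', 'F'), ('D', 'E'), ('D', 'F')}
```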
The whole self-contained subnetwork unit, together with its incoming arcs (from the source variables) and outgoing arcs (to the destination variables), can be extracted from the ground network and substituted by the corresponding group of abstract arcs. After the substitution, it gives rise to an abstract subnetwork structure which is composed of: (1) the group of abstract arcs, (2) the source variable set, and (3) the destination variable set. For instance, the variables C, D, E, F and the arcs C → E, C → F, D → E, D → F in Figure 1 form an abstract subnetwork. Thus, an abstract subnetwork can be viewed as an abstraction element representing the corresponding self-contained subnetwork. Intuitively, the abstract subnetwork can capture all the probabilistic relationships contributed by the self-contained subnetwork. When the self-contained subnetwork has been replaced by the abstract subnetwork, the original ground network becomes an abstract network. We hope that probabilistic inferences regarding the variables in the abstract network can be carried out without any loss of accuracy. In Section 3.4, we will show that this can be achieved by setting the appropriate conditional probability parameters associated with the abstract arcs.

3.3 Abstract Junction Tree Construction

Based on the structure of an abstract subnetwork, an abstract junction subtree can be constructed by transforming the local structure of the abstract subnetwork to a junction tree using Jensen's ordinary transformation technique (Jensen, Lauritzen, & Olesen 1990). The belief tables of the junction nodes in the abstract junction subtree can be readily computed from the new conditional probability parameters associated with the destination variables. Therefore, an abstract junction subtree can be viewed as an individual object representing a summary of the corresponding self-contained junction subtree.
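This object view suggests simple substitute-and-restore bookkeeping: the original self-contained subtree is stored and indexed by its abstract summary so that it can be put back later. The class and names below are our own sketch, with subtrees left as opaque values.

```python
# Hypothetical bookkeeping sketch for swapping subtrees in and out.

class JunctionTree:
    def __init__(self, subtrees):
        self.subtrees = dict(subtrees)   # id -> currently active subtree
        self.hidden = {}                 # id -> stored self-contained subtree

    def substitute(self, sid, abstract_subtree):
        # store and index the original, then swap in the abstract summary
        self.hidden[sid] = self.subtrees[sid]
        self.subtrees[sid] = abstract_subtree

    def restore(self, sid):
        # the exact reverse of the substitution
        self.subtrees[sid] = self.hidden.pop(sid)

jt = JunctionTree({'sub1': 'ground-subtree'})
jt.substitute('sub1', 'abstract-subtree')
jt.restore('sub1')
```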
We are now ready to make use of the abstract junction subtree to perform abstraction on the ground junction tree, summarized below:

1 the self-contained junction subtree is removed from the ground junction tree.
2 the abstract junction subtree is inserted into the ground junction tree by setting up a junction link between each adjacent junction node and the appropriate junction node in the abstract junction subtree. (Adjacent junction node is defined in Definition 3.3.)
3 the self-contained junction subtree is stored and indexed by the abstract junction subtree so that it can be restored back into the ground junction tree if necessary.

3.4 Computing New Probability Parameters

Using the notation in Section 3.1, we further analyze the structure and the conditional probability parameters associated with the abstract arcs. In an abstract subnetwork, each destination variable has a new parent
We propose a technique to calculate the required probability parameters based on the local structure of the subnetwork as follows: First, a local Bayesian net- work 0:: is identified by including all of the following it ems: - the variables in the set 2 U 6 U 5; - the existing arcs linking the variables within the sub- network 0~; - the existing arcs linking from a source variable to a variable in the subnetwork 0~; and - the existing incoming arcs for a destination variable In fact, this local belief network Ol, has almost the same topological complexity as the self-contained sub- network structure 0~. To determine the conditional we expect the cient for most - abstract junction tree structure is suffi- of the inference sessions. However, there are some sessions, albeit infrequently occurred, which require evaluation of the variables in a self-contained s&network. the abstract in These kinds of variables do not exist in junction tree. This situation may occur the middle- of an To deal with this inference problem, session. we propose a mechanism called abstract expansion which makes it possible to continue the required inference. Basically, abstract expansion transforms an abstract junction subtree by restoring back its corresponding self-contained junc- tion subtree. This operation is exactly the reverse of the substitution of the self-contained subtree. 5 Computational Advantages The computational advantage of the abstract junction tree will be analyzed in this section. First, let us discuss the computational cost required to perform a probabilistic inference in a su .btree. We concentrate our attention to the inference cost needed to conduct a basic inference operation in a connected junction subtree. Then, the-inference cost of the ground and abstract junction trees are compared and the overall computational gain is characterized. Consider a connected junction subtree @. 
Let jnode(@) denote the set ofjunction nodes in a; se&@) denote the set of separators in ip; size(J) denote the size of the belief table stored in J where J can be a junction node or a separator; and numZink( J) denote the number of junction links adjacent to the junction node J. Suppose the computational cost of an addition operator is of the factor A to the cost of a multiplica- tion operator. Let probinf(+) be the total number of multiplication operations associated with the junction subtree ip in an inference session. It can be shown that probin IElf (G) is given by: (1 + X)numZink( J) size(J) + 2 C size(S) JEjrrode( 9) SEsep( 9) (1) Now, let us consider the computational cost for an abstract expansion. Let ezpsn(O) denote the number of multiplication operations required to perform an ab- stract expansion in the self-contained junction subtree @. Since the main task for an abstract expansion is a Distribute-Evidence operation in the corresponding self-contained junction subtree, ezpsn(+) is C (l+X(numhk(J)-1)) size(J)+ 2 iven by: size(S) JEjnode( 9) SE-p(*) (2) It can be observed that the number of the separa- tors (i.e., 1 sep(9) I) actually depends on the number of junction nodes (i.e., 1 jnode(@) I); the size of a sep- arator also depends on the size of its associated junc- tion nodes. Therefore, the main thrust of probinf and expsn operations is the number and the size of the probability parameter of a destination variable given a particular instantiation of its new parent set, be- lief updating and propagation is performed on the net- work Ol, with the parent instantiation as the specific evidences. The required conditional probability pa- rameter value is just the posterior belief of the desti- nation variable. It is claimed that the probability pa- rameters computed by this technique render equivalent joint probability distribution of the abstract network and the ground network. The proof is given in (Lam 1994). 
3.5 Performing Abstraction In the above discussion, we only consider one self- contained subnetwork. However, it is absolutely pos- sible to have more than one subnetworks in a ground network. Each subnetwork serves a local part of the ground network where an abstraction can be done in- dividually. As a result, an abstract network, evolv- ing from a ground network, may contain one or more abstract subnetworks which replace their correspond- ing self-contained subnetworks. Similarly, an abstract junction tree is obtained by replacing each of the self- contained junction subtree unit with the corresponding abstract junction subtree and it becomes the new sec- ondary structure on which subsequent inferences are conducted. 260 Uncertainty Management 6 Discovery of Abstract Networks junction nodes within the junction subtree. Note that the size of a junction node refers to the size of the belief tables stored in the junction node. Now, we can compare the inference costs in the ab- stract junction tree and the ground junction tree where no abstraction has been done. There is no need to ex- amine the whole tree in both cases since the only dif- ferences are those spots where abstractions have been done. Hence, we can focus on the computational costs around self-contained junction subtrees and abstract junction subtrees. Suppose +pG denotes a junction sub- tree, in a ground junction tree, which consists of (1) all the junction nodes in the self-contained junction sub- tree and all its adjacent junction nodesi (2) all the junc- tion links within the self-contained Junction subtree and connecting to the adjacent junction nodes. Simi- larly, let +A denote a junction subtree, in an abstract junction tree, which consists of (1) all the junction nodes in the abstract junction subtree and all its ad- jacent junction nodes; (2) all the junction links within the abstract junction subtree and connecting to the ad- jacent junction nodes. 
Let N be the total number of in- ference sessions. The total number of multiplication re- quired in the self-contained junction subtree for N ses- sions is N pobinf(@G). On the other hand, the total number of multiplication required in the abstract junc- tion subtree is N probinf(@A) + n ezpsn(+G) where n is the number of inference sessions which require ab- stract expansions. To compare the costs, we define the computational gain (gain) as follows: gain = N probinf(9~) 1 (N probinf(9~) + n expsn(@G)) N probinf (Gp,) where ~1 is the fraction of sessions which require ab- stract expansions. If gain > 0, it conveys the fact the abstraction defi- nitely reduces some computational costs for the given N sessions. We can also conclude that the average inference time efficiency is improved. The maximum possible value for gain is 1 and it occurs when no com- putation is needed in the abstract junction subtree. As a result, it is expected that gain will be between 0 and 1.l If a good abstraction has been chosen, we have the following two observations: First, the num- ber and size of the junction nodes in the abstract junc- tion subtree (i.e., +A) should be far less than that of the self-contained junction subtree (i.e., @G). Second, the fraction of sessions which require abstract expan- sion (i.e., p) should be close to 0. Both observations will lead to a computational gain greater than 0 and thus the average inference time efficiency is improved. Based on this characterization of computational cost, an algorithm for automatic discovery of good abstract networks from the past inference sessions is presented in the next section. ‘If gain < 0, it means the abstract junction tree requires extra computations over the ground junction tree and it indicates a bad abstraction has been chosen. We have developed a learning mechanism which can discover good abstraction network from the past infer- ence sessions. 
Precisely, it can discover possible locations in the ground network where abstraction can be performed. It also guarantees better average inference efficiency if future sessions follow more or less the same pattern as the past sessions on which the learning approach is based. An inference session includes setting the evidence and making queries.

First, some information about each past session needs to be collected for the learning algorithm. We locate a set of inactive variables for each session. An inactive variable is a variable which is in neither the evidence nor the query set for that session. For each session, we record the inactive variables pertaining to that particular session. As more past sessions are gathered, the records of inactive variables form an inactive variable table, and this table provides valuable information for the learning algorithm. The objective is to obtain a set of variables from which good self-contained subnetworks can be identified and good abstract subnetworks can be constructed.

Intuitively, if a variable appears in many records in the inactive variable table, it is probably a good variable for abstraction. Suppose an abstract junction tree is constructed from a ground junction tree. The computational gain based on the past inference sessions can be calculated by Equation 3. The greater the gain, the better the abstraction in terms of average inference-time efficiency.

The remaining problem is to find the set of variables on which the abstraction should be done and which possesses the greatest computational gain (as defined in Equation 3). In principle, all different combinations of the variables appearing in the inactive variable table can be tested by evaluating their computational gains. Then the set of variables which gives the maximum gain is chosen as the solution.
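The cost model of Equations 1–3 used to score a candidate abstraction can be sketched as follows. The `JunctionSubtree` container and the field names are our own illustrative assumptions, not data structures from the paper:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class JunctionSubtree:
    node_size: Dict[str, int]   # belief-table size per junction node
    num_link: Dict[str, int]    # adjacent junction links per node
    sep_size: List[int]         # belief-table size per separator

def probinf(phi: JunctionSubtree, lam: float) -> float:
    """Multiplications for one inference session, Equation (1)."""
    nodes = sum((1 + lam) * phi.num_link[j] * phi.node_size[j]
                for j in phi.node_size)
    return nodes + 2 * sum(phi.sep_size)

def expsn(phi: JunctionSubtree, lam: float) -> float:
    """Multiplications for one abstract expansion, Equation (2)."""
    nodes = sum((1 + lam * (phi.num_link[j] - 1)) * phi.node_size[j]
                for j in phi.node_size)
    return nodes + sum(phi.sep_size)

def gain(phi_g: JunctionSubtree, phi_a: JunctionSubtree,
         lam: float, n_sessions: int, n_expansions: int) -> float:
    """Computational gain of an abstraction, Equation (3)."""
    ground = n_sessions * probinf(phi_g, lam)
    abstracted = (n_sessions * probinf(phi_a, lam)
                  + n_expansions * expsn(phi_g, lam))
    return (ground - abstracted) / ground
```

For instance, a ground subtree with large belief tables replaced by an abstract subtree with one small table yields a gain close to 1 when only a small fraction of sessions needs expansion, matching the two observations above.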
Clearly, there are exponentially many different combinations, so we tackle this problem with a best-first search algorithm. The inactive variable table gives us a good source of information for the task of learning a set of good variables for abstraction. First we extract all the variables in the inactive variable table and rank them in a list in descending order of the number of occurrences of each variable in the table. The resulting list is called the INACTIVE-LIST. A best-first search based on the INACTIVE-LIST is then performed. The goal is to find the set of inactive variables which maximizes the computational gain as defined in Equation 3.

The OPEN list for the best-first search algorithm contains search elements sorted in descending order of merit. Each search element consists of three components: the current inactive variable set (CURRENT), the next variable to be added (NEXT-NODE), and the merit value of this search element (M-VALUE). NEXT-NODE is a variable from the INACTIVE-LIST and will be the next variable added to CURRENT for evaluating the computational gain if this search element is explored. The merit value M-VALUE is just the computational gain of CURRENT. The initial OPEN list is constructed from search elements whose CURRENT is a single variable from the INACTIVE-LIST and whose NEXT-NODE is the variable which follows that single variable in the INACTIVE-LIST. The best-first search algorithm is outlined below:

1. The first search element in the OPEN list is extracted and examined.
2. The NEXT-NODE of this element is added to its CURRENT, forming the set of variables called the NEW-CURRENT. The computational gain of the NEW-CURRENT is evaluated. Let the variable following the NEXT-NODE in the INACTIVE-LIST be the NEW-NEXT-NODE.
3. A new search element is generated from the NEW-CURRENT and the NEW-NEXT-NODE and is inserted into the OPEN list appropriately according to the new computational gain.
4. Go to step 1 if computing resources permit and the OPEN list is non-empty.

After the search, the CURRENT associated with the first element in the OPEN list gives the required set of variables for abstraction.

7 A Preliminary Experiment

A preliminary experiment has been done to demonstrate the automatic discovery algorithm. The structure of the Bayesian belief network used in this experiment, depicted in Figure 2, is derived from a system called MUNIN, a Bayesian belief network for interpretation of electromyographic findings developed by (Andreassen et al. 1987).

Figure 2: The Ground Network of MUNIN

One possible ground junction tree for this network is shown in Figure 3.

Figure 3: A Ground Junction Tree for MUNIN (only junction nodes and junction links are shown)

Some hypothetical inference sessions were synthesized and an inactive variable table was generated, as shown in Table 1.

    set of inactive variables in each session    number of sessions
    1 5 13 14 15 16 17 18 19 20 21               46
    4 9 10 16 17 18 19 20 21 23 24               21
    16 17 18 19 20 21                            62
    3 16 17 18 19 21 24                          16
    1 6 11 16 17 18 19 20 21 22                  12
    2 3 12 16 17 18 19 20 21                     28
    4 13 15 17 18 19 20                          15
    total                                        200

Table 1: Inactive Variables from Past Inference Sessions

Next, our learning algorithm was applied to the table, and the set of variables proposed for abstraction, {16, 17, 18, 19, 20, 21}, was obtained. Based on this set of variables, the self-contained subnetwork and the self-contained junction subtree were located. Then the abstract subnetwork and the abstract junction subtree were generated. The learned abstract network is shown in Figure 4. Note that the structure of the abstract junction subtree is much simpler than that of the self-contained junction subtree.
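The discovery step outlined above can be sketched as follows: rank variables by how often they are inactive across past sessions, then run the best-first loop over growing candidate sets. The session records are hypothetical, and `gain_of` stands in for evaluating Equation 3 on a candidate set; this sketch also simply tracks the best set seen rather than reading it off the OPEN list afterwards:

```python
import heapq
from collections import Counter

def inactive_list(sessions, all_vars):
    """Rank variables by how often they are inactive (in neither the
    evidence nor the query set), in descending order of occurrence."""
    counts = Counter()
    for evidence, query in sessions:
        for v in set(all_vars) - set(evidence) - set(query):
            counts[v] += 1
    return [v for v, _ in counts.most_common()]

def best_first_abstraction(ranked, gain_of, max_steps=1000):
    """Best-first search over growing subsets of the INACTIVE-LIST."""
    # Each entry: (-merit, CURRENT as a tuple, index of NEXT-NODE).
    open_list = []
    for i, v in enumerate(ranked):
        heapq.heappush(open_list, (-gain_of((v,)), (v,), i + 1))
    best_set, best_gain = (), float("-inf")
    steps = 0
    while open_list and steps < max_steps:
        steps += 1
        neg, current, nxt = heapq.heappop(open_list)
        if -neg > best_gain:
            best_gain, best_set = -neg, current
        if nxt < len(ranked):  # expand with the NEW-NEXT-NODE
            child = current + (ranked[nxt],)
            heapq.heappush(open_list, (-gain_of(child), child, nxt + 1))
    return set(best_set), best_gain
```

With a toy `gain_of` that rewards a known target set, the search recovers that set after a handful of expansions.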
The computational gain evaluated by Equation 3 based on our inference sessions was 0.71. Thus the learned abstract junction tree will have better average inference time.

Figure 4: A Learned Abstract Junction Tree

Acknowledgements

I owe a special debt of gratitude to Fahiem Bacchus for introducing me to this problem.

References

Andreassen, S.; Woldbye, M.; Falck, B.; and Andersen, S. 1987. MUNIN - a causal probabilistic network for interpretation of electromyographic findings. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 366-372.

Baker, M., and Boult, T. 1990. Pruning Bayesian networks for efficient computation. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 257-264.

Cooper, G. F. 1990. The computational complexity of probabilistic inference using Bayesian belief networks. Artificial Intelligence 42:393-405.

Heckerman, D. 1990. Similarity networks for the construction of multiple-fault belief networks. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 32-39.

Jensen, F.; Lauritzen, S.; and Olesen, K. 1990. Bayesian updating in causal probabilistic networks by local computations. Computational Statistics Quarterly 4:269-282.

Jensen, F.; Olesen, K.; and Andersen, S. 1990. An algebra of Bayesian belief universes for knowledge-based systems. Networks 20:637-659.

Lam, W. 1994. Characterizing abstract Bayesian belief networks. In preparation.

Xiang, Y.; Poole, D.; and Beddoes, M. 1992. Exploring localization in Bayesian networks for large expert systems. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 344-351.
Noise and Uncertainty Management

Xiaohui Liu and Gongxian Cheng
Birkbeck College, Department of Computer Science
University of London, Malet Street, London WC1E 7HX, United Kingdom
hui@dcs.bbk.ac.uk; ubacr46@dcs.bbk.ac.uk

John Xingwang Wu
Institute of Ophthalmology, Department of Preventive Ophthalmology
University of London, Bath Street, London EC1V 9EL, United Kingdom
smgxjow@ucl.ac.uk

Abstract

The management of uncertain and noisy data plays an important role in many problem solving tasks. One traditional approach is to quantify the magnitude of noise or uncertainty in the data and to take this information into account when using this type of data for different purposes. In this paper we propose an alternative way of handling uncertain and noisy data. In particular, noise in the data is positively identified and deleted so that quality data can be obtained. Using the assumption that interesting properties in data are more stable than the noise, we propose a general strategy which involves machine learning from data and domain knowledge. This strategy has been shown to provide a satisfactory way of locating and rejecting noise in large quantities of visual field test data, crucial for the diagnosis of a variety of blinding diseases.

Introduction

Much research has been done to see how real world data can be intelligently modeled using AI methods to produce useful knowledge (Frawley, Piatetsky-Shapiro, & Matheus 1991; Weiss & Kulikowski 1991). Notable examples include the TDIDT (Top Down Induction of Decision Trees) family of learning systems, where classification rules are learned from a set of training examples (Quinlan 1986; Bratko & Kononenko 1987). The data are also modeled and directly used to solve problems in application domains. For example, visual field test data are directly used to train neural networks which associate these data with different kinds of blinding diseases (Nagata, Kani, & Sugiyama 1991). The real world data, collected or generated in a variety of different environments, however, often contain noise, and are incomplete and uncertain. One of the most challenging research issues in intelligent data analysis is, therefore, how to handle noise and uncertainty in the data so that these data can be used correctly and most effectively in achieving the above objectives.

One of the traditional approaches to the management of noisy and uncertain data is to use mathematical and statistical techniques to quantify their magnitude in the data and to present general information about the data quality. The decision-making or problem-solving process using this type of uncertain information, however, is ultimately a subjective one, depending on one's experience and knowledge. The outcome of this process, therefore, is often uncertain as well. Also, knowledge discovered from this type of data might be of questionable validity.

In this paper we propose an alternative way of handling noisy data, which has great potential for improving the quality of problem solving and of knowledge acquired from data. Instead of measuring and providing information on the amount of noise in the data, we try to explicitly identify and then discard the noise before these data are used for any purpose.

In section 2, the type of noise considered in this paper is defined and a general strategy for its identification is proposed, which involves machine learning from data and domain knowledge. In section 3, this strategy is applied to large quantities of visual field test data which are crucial for the diagnosis of a variety of blinding diseases. In section 4, the strategy is evaluated, and we show that it provides a satisfactory way of locating and rejecting noise in the test data. Finally, the work is summarized in section 5.
From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Noise and its Identification

Measurement Noise

In learning classificatory knowledge from data, there is a universe of objects that are described in terms of a collection of attributes (Quinlan 1986). The objective is to extract from a set of training examples rules for classifying objects into a number of prespecified categories using those attributes. In these learning systems, data are defined as noisy when either the values of attributes or classes contain errors.

In this paper we shall put an emphasis on the errors of attribute values, as we are considering the use of data for general-purpose applications, not limited to learning classification rules. One of the main reasons for these errors is that the attributes used to describe an object are often based on measurements. To illustrate the idea, consider the task of diagnosing blinding diseases such as glaucoma. A dominating attribute would be the test of the visual field of a patient. It is highly unlikely that one could obtain absolutely correct visual field data, because these data, collected from patients' responses to visual stimuli on a computer screen, necessarily contain errors caused by various behavioral factors such as the learning effect, inattention, failure of fixation, fatigue, etc. These errors are typically in the form of false positive or negative responses from patients (Lieberman & Drake 1992). Quinlan has also given an example of false positive or negative readings for the presence of some substance in the blood (Quinlan 1986).

The noise in data considered in this paper refers to incorrect data items caused by measurements. Consequently, we shall use the term measurement noise throughout the paper.
Identifying the Measurement Noise

One fundamental assumption made in (Becker & Hinton 1992), where a new self-organizing neural network is proposed, is that interesting properties in data are more stable than the noise (Mitchison & Durbin 1992). For example, the property that a normal person who does not have any visual function loss should be able to see the stimuli on the test screen most of the time is more stable than the occasional fluctuation in data caused by errors (e.g., false positive or false negative responses) for whatever reasons. We have adopted this assumption as our basic principle for identifying measurement noise, to which we shall refer as the noise identification principle.

Suppose that a repeated test is designed where the same measurement is made a fixed number of times, and consider the visual test as an example. A normal person might be distracted in the middle of a test, say in the fifth of the repeated measurements. This results in poor sensitivity values for perhaps most of the locations within the visual field, leading to fluctuation in the data. This type of fluctuation, however, should not affect the overall results of the visual field, as he or she should be able to see the stimuli on the screen during most of the other trials in the test. The main task here is to identify the common feature exhibited by most of the trials, i.e., that the person can see the stimuli most of the time. The part of the data inconsistent with this feature, i.e., the fifth trial, will then be exposed and consequently suspected as noise.

The question is, then, how to find a computational method capable of detecting interesting features among data. Unsupervised learning algorithms seem to be natural candidates, as they are known to be capable of extracting meaningful features which reflect the inherent relationships between different parts of the data (Fisher, Pazzani, & Langley 1991).
For example, we can use an unsupervised learning algorithm such as self-organizing maps (Kohonen 1989) to let the data self-organize in such a way that the more stable parts of the data are clustered to reflect certain interesting features, while parts of the data which are inconsistent with those features are separated from the stable cluster.

It should be emphasized that the less stable part of the data is not necessarily the measurement noise, in that it can actually consist of true measurements reflecting real values of an attribute. In the example of diagnosing glaucoma using visual field data, the fluctuation in the data can be caused by behavioral factors such as fatigue and inattention, but can also be caused by pathological conditions of the patient. Consider a glaucoma patient who undergoes a visual field test. It is quite possible that there will still be fluctuations in the responses at certain test locations, even if he or she has fully concentrated during the test. The nature of the disease has dictated his or her responses. The elimination of these responses would lead to the loss of much useful diagnostic information and, worse still, could lead to incorrect conclusions about the patient's pathological status.

Therefore, it is desirable to check whether the less stable part of the data is indeed measurement noise. This is difficult to achieve using the data alone, as there are often many possible explanations for fluctuation in the same data set, as discussed above. The use of a substantial amount of domain-specific knowledge, however, has potential for resolving this difficulty. For example, the knowledge of how diseases such as glaucoma manifest themselves in the test data is crucial for identifying the measurement noise, as we can then have a better chance of finding out the component within the less stable part of the data which is caused by pathological reasons.
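As a toy illustration of the noise identification principle (deliberately simpler than the self-organizing maps used in this paper), trials far from the majority behavior can be flagged as noise suspects. The distance-to-mean rule and the threshold below are our own assumptions:

```python
def noise_suspects(trials, threshold=1.5):
    """Return indices of trial vectors far from the mean trial.

    Trials close to the majority behaviour are the 'more stable' part;
    trials whose distance to the mean exceeds a multiple of the median
    distance become suspects of measurement noise.
    """
    n, m = len(trials), len(trials[0])
    mean = [sum(t[j] for t in trials) / n for j in range(m)]
    dists = [sum((t[j] - mean[j]) ** 2 for j in range(m)) ** 0.5
             for t in trials]
    typical = sorted(dists)[n // 2]  # median distance to the mean
    cutoff = threshold * max(typical, 1e-9)
    return [i for i, d in enumerate(dists) if d > cutoff]
```

For ten trials where a normal subject responds positively everywhere except one blank trial, only that trial is flagged; the suspect must then still be checked against domain knowledge, as discussed above.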
The above discussions lead to a general strategy for identifying measurement noise in data, which consists of two steps. Firstly, an unsupervised learning algorithm is used to cluster the more stable part of the data. This algorithm should be able to detect some interesting features among those data. The less stable part of the data, which is inconsistent with those features, then becomes the measurement-noise suspect.

Secondly, knowledge in the application domain, together with knowledge about the relationships among the data, is used to check whether the less stable part of the data is indeed measurement noise. This type of domain-specific knowledge may be acquired from experts; however, it is often incomplete. For example, only a partial understanding has been obtained about how diseases like glaucoma manifest themselves in any visual field test data (Wu 1993). Therefore, it is often desirable to apply machine learning methods to the initially incomplete knowledge in order to generalize over unknown situations. One such example is shown in the next section.

Identifying Noise in Glaucomatous Test Data

The Computer Controlled Video Perimetry (CCVP). The CCVP (Fitzke et al. 1989; Wu 1993) is a newly developed visual function test method and has been shown to be an effective way of overcoming difficulties in the early detection of visual impairments caused by glaucoma. It examines the sensitivity of a number of locations in the visual field using vertical bars on the computer screen (see Figure 1 for an example). All these locations are tested by several different stimuli, and the test is repeated a fixed number of times. One popular version of the CCVP test examines 6 locations using the same stimulus, and the test is repeated 10 times.

Figure 1: A CCVP screen layout (o marks the visual focus point)

If the stimulus is seen at any stage of the test, the patient presses a button as a response.
At the end of this CCVP test, ten data vectors are produced, each of which records the patient's responses during a single trial. Each vector consists of 6 data elements referring to the results of testing 6 locations using the same stimulus. For each location, a sensitivity value is calculated by counting the percentage of positive responses. The clinician relies heavily on these location sensitivity values to perform diagnosis.

Applying the Strategy to the CCVP Data

Identifying the More Stable Part of the CCVP Data. The method for identifying the more stable part of the CCVP data is to model the patient's test behavior using self-organizing maps (SOM). Data clusters can then be visualized or calculated. This method consists of three steps.

Firstly, Kohonen's learning technique (Kohonen 1989) is used to train a network capable of generating maps which reflect the patient's test behavior. Each response pattern for each test trial is used as an input vector to the self-organizing map, and each winner node is produced on the output map. In all, 2630 trial data vectors corresponding to 263 tests are used to train the network, and the whole data set is reiteratively submitted 100 times in random orders.

Secondly, an effort is made to find a network which shows better neighborhood preservation, i.e., similar input patterns are mapped onto identical or closely neighboring neurons on the output map. This step is important as we want to map similar response patterns from patients onto similar neurons. We have used the topographical product (TP) (Bauer & Pawelzik 1992) as a measurement for this purpose, where TP indicates the magnitude of neighborhood violation. Therefore, the smaller the value of TP, the better the neighborhood preservation.

Having obtained a well-performing network, the final step is to generate the behavior maps for individual patients and analyze these maps to identify the more stable part of the data.
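The per-location sensitivity computation described at the start of this section (the percentage of positive responses over the repeated trials) can be sketched as follows; the trial vectors in the example are hypothetical:

```python
def location_sensitivities(trials):
    """Per-location sensitivity from repeated CCVP trials.

    trials: list of 0/1 response vectors, one per trial; each element
    says whether the stimulus was seen at that location. Returns the
    percentage of positive responses per location.
    """
    n = len(trials)
    return [100.0 * sum(t[j] for t in trials) / n
            for j in range(len(trials[0]))]
```

For ten trials over six locations, a location seen in eight of the ten trials gets a sensitivity of 80.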
As far as each patient is concerned, there are ten winner nodes and nine transitions on the output map. These transitions constitute a transition trajectory, which graphically illustrates how the patient's behavior changed from one trial to the next (Figure 2).

Figure 2: A transition trajectory in the output map

As one of the key SOM features is that similar input vectors lead to similar winner nodes, we have the following general rule for identifying the more stable part of the data: if most of the winner nodes are centered around one particular region, then the input data vectors associated with these nodes constitute the more stable part of the data. These vectors share one common feature: they are similar to each other, judged to a large extent by a distance measurement such as the Euclidean distance.

The above rule can be implemented by algorithms using the geometric positions of the nodes and their relative distances. The approach taken here is to search for a maximal set of neurons on the output map which occupies the smallest topographical area. In particular, an evaluation function is defined in Equation 1 for this purpose, and the objective is to find a subset of winner nodes, S, which minimizes the value of F(S):

F(S) = A(S(k))/k²,   k = N, N−1, ..., [N/2 + 1]   (1)

where N is the total number of winner nodes (ten in our application), A denotes the topographical area in the map occupied by a subset of winner nodes, and S(k) represents a subset of winner nodes with k members.

Checking the Less Stable Part of Data. Let us now examine the less stable part of the data, for example the data vectors associated with winner nodes 8 and 9 in Figure 2, and see whether or not some of these vectors are measurement noise. For our chosen application, we are particularly interested in finding out whether data items within this less stable part of the data are caused by pathological conditions of the patient during the visual field test.

To achieve this, a deep understanding of how diseases manifest themselves in the data is essential. Here we have used both knowledge about inherent relationships among the data and domain knowledge from experts to obtain this understanding. The knowledge about the data is reflected in the maps produced by the SOM. For example, each neuron on the output map is likely to have a number of input vectors associated with it, and these input vectors in turn determine the physical meanings of the neuron, such as the average sensitivity, the number of input patterns the neuron represents, and the typical patterns the neuron represents.

Using these physical meanings, domain experts can try to group those input patterns which have the same or similar pathological meanings. In our case, an input pattern consists of a vector of 6 elements, each of which represents whether the patient sees the stimulus at a certain location on the computer screen (Figure 1). There are four major groups created by experts. Group A is composed of those input patterns reflecting that the patient under test is showing the early sign of upper hemifield damage, while group B consists of those patterns demonstrating that the upper hemifield of the patient is probably already damaged. Groups C and D are made of patterns similar to groups A and B, except that they are used to represent two different stages of lower hemifield damage. Any two patterns which fall into the same group, no matter how distant they may appear on the test behavior map, are considered as having the same pathological meanings.

Take group A as an example. It contains the following three patterns:

{ (1, 1, 0, 1, 1, 1), (1, 1, 0, 1, 0, 1), (1, 1, 1, 1, 0, 1) }

These have been identified as possible patterns for a glaucoma patient showing early signs of upper hemifield damage. Two factors have been taken into consideration by the experts when selecting these patterns. Firstly, the domain knowledge about early upper hemifield damage is used; for example, locations 3 and 5, which are within the upper hemifield, were not seen in some of those patterns, and the reason why location 1 is not included is that it often indicates the upper hemifield is probably already damaged (Wu 1993). Secondly, the physical meanings of the trained map are used, especially how typical input patterns are associated with output neurons. For example, the above three patterns are in a topographically connected area on the map.

These pathological groups are then used to check whether the less stable part of the data is measurement noise. A simple way to do this is as follows. When those nodes whose corresponding input data vectors form the less stable part of the data are identified, check whether each of these data vectors belongs to the same pathological groups as those patterns which were recognized as the more stable part of the data. If yes, then treat it as a true measurement; otherwise, it is measurement noise.

One of the major difficulties in applying this method is that the patterns which make up those pathological groups are not complete, in that they (27 in total) are only a subset of all the possible patterns (2^6 = 64). Therefore, when a new pattern occurs, the above method cannot be applied. One of the main reasons why experts cannot classify all the patterns into those four groups is that the CCVP is a newly introduced test, and the reflection of glaucoma patients and suspects in the CCVP data is not fully understood. To overcome this difficulty, machine learning methods can be applied to generalize from those 27 classification examples provided by the experts.
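The checking procedure just described can be sketched as follows. The pattern-to-group mapping in the example is only an illustrative fragment, not the experts' full 27-pattern grouping:

```python
def classify_suspects(suspect_vectors, stable_groups, pattern_to_group):
    """Label each less-stable vector as a true measurement or noise.

    A suspect vector is kept as a true measurement when it falls into a
    pathological group shared with the more stable part of the data;
    otherwise it is treated as measurement noise. Unknown patterns
    (not in the mapping) are also treated as noise here.
    """
    labels = []
    for v in suspect_vectors:
        group = pattern_to_group.get(tuple(v))
        labels.append("measurement" if group in stable_groups else "noise")
    return labels
```

For instance, if the stable part of a patient's data fell into group A, a less-stable trial matching a group-A pattern is kept, while an unclassified blank trial is rejected as measurement noise.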
In particular, we have used the back-propagation algorithm (Rumelhart, Hinton, & Williams 1986) for this purpose. The input neurons represent the locations within the visual field, the output neurons are the pathological groups, and three hidden nodes are used in the fully configured network. The trained network is able to reach 100% accuracy on the training examples and to further classify another 26 patterns. One of the interesting observations is that patterns within each of the resultant groups tend to be clustered in a topographically connected area, a property demonstrated by the initial groups. The remaining patterns are regarded as the unknown class since they produce no significant output signal in the output neurons. They have been found to be much more likely to appear in the less stable part of the CCVP data than in the more stable one.
It should be noted that the application described above is a rather simple one, in which there are only 64 possible input patterns. This particular version of the CCVP test is chosen for its simplicity, in order to make it easier to describe the general ideas in implementing the noise identification principle. In fact, there is a more popular version of the CCVP which also tests the six locations within the visual field by ten repeated trials, but using four different stimuli. The data vectors produced by this test therefore contain 24 items instead of 6, and consequently there are 2²⁴ possible input patterns. We have also experimented with large quantities of data from this test using the proposed noise identification strategy. The results are similar to those of the simpler test described in the next section.
266 Uncertainty Management

Evaluation

As glaucoma is a long-term progressing disease, the visual function should remain more or less the same during a short period of time. Therefore, results from two such repeated tests within this time period should be very close. However, this is not always true under real clinical situations, as measurement noise is involved in each test, perhaps for different reasons. Thus it is not surprising to note that there are a large number of repeated tests, conducted within an average time span of one month, whose results showed disagreements to various degrees. As one of the main reasons for the disagreement is the measurement noise, it is natural to assume that the sensitivity results of the two tests should agree (to various degrees) after the noise is discarded. This then constitutes a method for evaluating our proposed strategy for identifying and eliminating noise from data.

The Strategy

The noise identification strategy is based on the assumption that interesting properties in data are more stable than the noise. It can be applied to those areas where repeated measurements can easily be made about the attributes concerned. Below are several observations regarding this strategy.
Firstly, explicit identification and deletion of measurement noise in data may be a necessary step before the data can be properly explored, as shown in our application. In particular, we have found that noise deletion can offer great assistance to the clinician in diagnosing otherwise ambiguous cases (see section 4.2). In a separate experiment with learning hidden features from the CCVP data, we found that many useful features, such as the behavioral relationship between two test locations, were not initially found in the raw CCVP data, but were uncovered from the data after the measurement noise was deleted using the strategy proposed in this paper.
Secondly, the use of domain knowledge supplied by experts is of special concern, as this type of knowledge involves a substantial amount of subjective elements and is often incomplete, as shown in our application.
It should be pointed out that this strategy cannot be applied to those applications where there is little relevant high-quality knowledge but a lot of false noise, i.e., data items from the less stable part of the data which actually reflect true measurements. Where there is little concern about the false noise situation, however, an unsupervised learning algorithm can be used directly to identify the measurement noise, in this case the entire less stable part of the data.
Finally, no claim is made that this strategy can be used to identify all the measurement noise in data, or that all the noise identified is real. This depends on the ability of the chosen algorithms to accurately cluster data items with common features, and on the quality of the domain knowledge used to exclude false noise.

The Results

Here we present the results of applying the proposed strategy to a set of clinical test data (2630 data vectors) collected from a group of glaucoma patients and suspects. To find out how successful this strategy is in achieving its objective, we use the idea of reproducibility of the test results. Ninety-two pairs of test records are available for this purpose. The average sensitivity values of these tests are contrasted in Figure 3(a), where a dot indicates the result of the first test, an oval indicates the result of the second test, and the difference between the two results for each case is illustrated by the line between them. The same results after the rejection of noise by the proposed strategy are given in Figure 3(b).

Figure 3: (a) before deletion; (b) after deletion

The results from the two repeated tests show much better agreement after the noise is rejected. This is indicated by the observation that the lines between the two tests are in general shortened in Figure 3(b). In fact, if one calculates the mean difference between the two tests, 5.4 is the figure for the original data, while 3.6 is obtained after the noise is eliminated.
Another major finding is that noise deletion may also be of direct diagnostic assistance to the clinician. One of the difficulties for the clinician arises when the result from one test suggests that the patient is normal (no glaucoma), while the result from the other test shows that the patient is abnormal (having glaucoma of some kind). It has been found that the average sensitivity value of 75% appears to be the golden line in CCVP that divides the normal and abnormal groups (Wu 1993). Since much better agreement is shown between the two repeated tests after the deletion of noise, there would be fewer cases whose test results are split by the golden line. This is indeed the case with our data, as shown in Figure 3: there are quite a few conflicting cases in Figure 3(a), while only about two such cases exist in Figure 3(b).
It is worth reiterating that the CCVP is a newly introduced test method. A deeper understanding of its characteristics and its relevance to diagnosis can help further improve the results of identifying the measurement noise in the CCVP data.

Concluding Remarks

In this paper we have introduced an alternative way of dealing with noisy data. Instead of measuring and providing information on the amount of noise in the data, we explicitly identify and then discard the noise so that quality data can be used for different applications. The principle we adopted for identifying measurement noise is that interesting properties in data are more stable than noise.
To implement this principle for our application, self-organizing maps are used to model the patient's behavior during the visual field test and to separate the more stable part of the data from the less stable one. Expert knowledge, augmented by supervised learning techniques, is also used to check whether data items within the less stable part are measurement noise caused by behavioral factors, or are instead caused by the patient's pathological conditions.
The proposed strategy has been shown to be a satisfactory way of identifying measurement noise in visual field test data. Moreover, the explicit identification and elimination of the noise in these data have been found not just desirable, but essential, if the data are to be properly modeled and explored. Finally, the strategy may be used as a preprocessor to a variety of systems using data with measurement noise.

Acknowledgements

This work is in part supported by the International Glaucoma Society, the British Council for Prevention of Blindness, and the International Center for Eye Health. We would like to thank Phil Docking for his comments on an early draft of this paper and the anonymous referees' informative review.

References

Bauer, H. U., and Pawelzik, K. R. 1992. Quantifying the neighborhood preservation of self-organizing feature maps. IEEE Trans. on Neural Networks 3(4):570-9.
Becker, S., and Hinton, G. E. 1992. Self-organizing neural network that discovers surfaces in random-dot stereograms. Nature 355:161-163.
Bratko, I., and Kononenko, I. 1987. Learning diagnostic rules from incomplete and noisy data. In Phelps, B., ed., Interactions in Artificial Intelligence and Statistical Methods. Technical. 142-53.
Fisher, D. H.; Pazzani, M. J.; and Langley, P. 1991. Concept Formation: Knowledge and Experience in Unsupervised Learning. Morgan Kaufmann.
Fitzke, F. W.; Poinoosawmy, D.; Nagasuberamanian, S.; and Hitchings, R. A. 1989. Peripheral displacement threshold in glaucoma and ocular hypertension. Perimetry Update 1988/89: 399-405.
Frawley, W. J.; Piatetsky-Shapiro, G.; and Matheus, C. J. 1991. Knowledge discovery in databases: An overview. In Piatetsky-Shapiro, G., and Frawley, W. J., eds., Knowledge Discovery in Databases. AAAI Press / The MIT Press. 1-27.
Kohonen, T. 1989. Self-Organization and Associative Memory. Springer-Verlag.
Lieberman, M. F., and Drake, M. V. 1992. Computerized Perimetry. Slack Inc.
Mitchison, G., and Durbin, R. 1992. Learning from your neighbour. Nature 355:112-113.
Nagata, S.; Kani, K.; and Sugiyama, A. 1991. A computer-assisted visual field diagnosis system using a neural network. Perimetry Update 1990/91: 291-95.
Quinlan, J. R. 1986. Induction of decision trees. Machine Learning 1:81-106.
Rumelhart, D. E.; Hinton, G. E.; and Williams, R. J. 1986. Learning representations by back-propagating errors. Nature 323:533-36.
Weiss, S. M., and Kulikowski, C. A. 1991. Computer Systems that Learn. Morgan Kaufmann.
Wu, J. X. 1993. Visual Screening for Blinding Diseases in the Community Using Computer Controlled Video Perimetry. Ph.D. Dissertation, University of London.
| 1994 | 134 |
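For reference, the two evaluation statistics reported in 'The Results' above (the mean test-retest difference, and the number of cases split by the 75% golden line) amount to the following; the paired sensitivity values below are made-up illustrations, not the study data.

```python
def mean_difference(pairs):
    """Mean absolute difference between first- and second-test sensitivities."""
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

def conflicting_cases(pairs, golden_line=75.0):
    """Pairs whose two results fall on opposite sides of the golden line,
    i.e., one test reads normal and the repeat reads abnormal."""
    return [(a, b) for a, b in pairs if (a >= golden_line) != (b >= golden_line)]

# Hypothetical test-retest average sensitivities (percent), for illustration only.
pairs = [(80.0, 76.0), (74.0, 78.0), (60.0, 58.0)]
```

A drop in the mean difference after noise deletion (5.4 to 3.6 in the paper) and a shrinking list of conflicting cases are precisely the two effects the evaluation measures.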
1,469 | Markov Chain Monte-Carlo Algorithms for the Calculation of Dempster-Shafer Belief

Serafín Moral
Departamento de Ciencias de la Computación e I. A., Universidad de Granada, 18071 Granada, Spain
smc@robinson.ugr.es

Nic Wilson
Department of Computer Science, Queen Mary and Westfield College, Mile End Rd., London E1 4NS, UK
nic@dcs.qmw.ac.uk

Abstract

A simple Monte-Carlo algorithm can be used to calculate Dempster-Shafer belief very efficiently unless the conflict between the evidences is very high. This paper introduces and explores Markov Chain Monte-Carlo algorithms for calculating Dempster-Shafer belief that can also work well when the conflict is high.

1. Introduction

Dempster-Shafer theory (Shafer 1976, 1990, Dempster 1967) is a promising method for reasoning with uncertain information. The theory involves splitting the uncertain evidence into independent pieces and calculating the combined effect using Dempster's rule of combination. A major problem with this is the computational complexity of Dempster's rule. The straightforward application of the rule is exponential (where the problem parameters are the size of the frame of discernment and the number of evidences). A number of methods have been developed for improving the efficiency, e.g., (Laskey & Lehner 1989, Wilson 1989, Provan 1990, Kennes & Smets 1990), but they are limited by the #P-completeness of Dempster's rule (Orponen 1990).
However, the precise value of Dempster-Shafer belief is of no great importance: it is sufficient to find an approximate value, within a small range of the correct value. Dempster's formulation suggests a simple Monte-Carlo algorithm for calculating DS-belief (Pearl 1988, Kämpke 1988, Wilson 1989, 1991). This algorithm, described in section 3, involves a large number of independent trials, each taking the value 0 or 1 and having an expected value of the Dempster-Shafer belief. This belief is then approximated as the average of the values of these trials. The algorithm can be used to efficiently calculate Dempster-Shafer belief unless the conflict between the evidences is very high. Unfortunately there are cases where the conflict will be very high (Shafer 1992), making this algorithm unusable for those cases.
Similar problems have been found for Monte-Carlo algorithms in statistics, and also in Bayesian networks. A common solution is to use Markov Chain Monte-Carlo algorithms (Smith & Roberts 1993, Geyer 1992, Hrycej 1990), for which the trials are not independent but are instead governed by a Markov Chain (Feller 1950). Such methods are used when it is very difficult to simulate independent realizations of some complicated probability distribution.
In this paper we develop Markov Chain algorithms for the calculation of Dempster-Shafer belief. Section 4 describes the algorithms and gives the convergence results. Convergence of the algorithms is dependent on a particular connectivity condition; a way of testing for this condition is given in section 5. Section 6 discusses the results of computer testing of the algorithm. Section 7 shows how the algorithm can be extended and applied to the calculation of Dempster-Shafer belief on logics, and on infinite frames, and section 8 briefly discusses some extensions to these algorithms which may work when the connectivity condition is not satisfied.

2. Belief

Let Θ be a finite set. Θ is intended to represent a set of mutually exclusive and exhaustive propositions. A mass function over Θ is a function m: 2^Θ → [0, 1] such that m(∅) = 0 and Σ_{A ∈ 2^Θ} m(A) = 1. A function Bel: 2^Θ → [0, 1] is said to be a belief function over Θ if there exists a mass function m over Θ with, for all X ∈ 2^Θ, Bel(X) = Σ_{A ⊆ X} m(A). Clearly, to every mass function over Θ there corresponds (with the above relationship) a unique belief function; conversely, to every belief function over Θ there corresponds a unique mass function (Shafer 1976).
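These definitions translate directly into code. The sketch below is our own illustration, representing subsets of Θ as frozensets and computing Bel from a mass function by the subset sum Bel(X) = Σ_{A ⊆ X} m(A).

```python
def bel_from_mass(m, X):
    """Bel(X) = sum of m(A) over focal elements A with A ⊆ X.
    `m` maps nonempty frozenset focal elements to their masses."""
    X = frozenset(X)
    return sum(mass for A, mass in m.items() if A <= X)

# Example mass function on Theta = {a, b, c}
m = {
    frozenset({"a"}): 0.3,
    frozenset({"a", "b"}): 0.5,
    frozenset({"a", "b", "c"}): 0.2,
}
```

Only the focal elements (sets of non-zero mass) need to be stored, which is what makes the sum tractable even though 2^Θ is exponentially large.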
Belief functions are intended as representations of subjective degrees of belief, as described in (Shafer 1976, 1981). Mathematically, they were derived from Dempster's lower probabilities induced by a multi-valued mapping (Dempster 1967), and Dempster's framework (using what we call source triples) turns out to be more convenient for our purposes.¹
From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.
A source triple over Θ is defined to be a triple (Ω, P, Γ) where Ω is a finite set, P is a probability function on Ω, and Γ is a function from Ω to 2^Θ, such that for all ω ∈ Ω, Γ(ω) ≠ ∅ and P(ω) ≠ 0. Associated with a source triple are a mass function, and hence a belief function, given respectively by m(X) = Σ_{ω : Γ(ω) = X} P(ω) and Bel(X) = Σ_{ω : Γ(ω) ⊆ X} P(ω). Conversely, any mass/belief function can be expressed in this way for some (non-unique) source triple. Each belief function (or source triple) is intended to represent a separate piece of evidence.
The impact of a set of independent evidences is calculated using Dempster's rule², which (in terms of source triples) is a mapping sending a finite set of source triples {(Ωᵢ, Pᵢ, Γᵢ), for i = 1, ..., m} to a triple (Ω, P_DS, Γ), defined as follows. Let Ω̄ = Ω₁ × ... × Ω_m. For ω ∈ Ω̄, ω(i) is defined to be its ith component (sometimes written ωᵢ), so that ω = (ω(1), ..., ω(m)). Define Γ′: Ω̄ → 2^Θ by Γ′(ω) = ∩_{i=1}^m Γᵢ(ω(i)), and the probability function P′ on Ω̄ by P′(ω) = ∏_{i=1}^m Pᵢ(ω(i)), for ω ∈ Ω̄. Let Ω be the set {ω ∈ Ω̄ : Γ′(ω) ≠ ∅}, let Γ be Γ′ restricted to Ω, and let the probability function P_DS on Ω be P′ conditioned on Ω, so that for ω ∈ Ω, P_DS(ω) = P′(ω)/P′(Ω). The factor 1/P′(Ω) can be viewed as a measure of the conflict between the evidences (Shafer 1976). The combined measure of belief Bel over Θ is thus given, for X ⊆ Θ, by Bel(X) = P_DS({ω ∈ Ω : Γ(ω) ⊆ X}), which we abbreviate to P_DS(Γ(ω) ⊆ X). Letting, for i = 1, ..., m, Belᵢ be the belief function corresponding to (Ωᵢ, Pᵢ, Γᵢ), we have Bel = Bel₁ ⊕ ... ⊕ Bel_m, where ⊕ is Dempster's rule for belief functions as defined in (Shafer 1976).
Since there are exponentially many subsets of Θ, we are never going to be able to calculate the belief in all of them for large Θ. Instead, it is assumed that, for a fairly small number of important sets X ⊆ Θ, we are interested in calculating Bel(X).

3. A Simple Monte-Carlo Algorithm

Since, for X ⊆ Θ, Bel(X) = P_DS(Γ(ω) ⊆ X), the obvious idea for a Monte-Carlo algorithm for calculating Bel(X) is to repeat a large number of trials, where for each trial we pick ω with chance P_DS(ω) and let the value of the trial be 1 if Γ(ω) ⊆ X, and 0 otherwise. Bel(X) is then estimated by the average value of the trials. We can pick ω with chance P_DS(ω) by repeatedly (if necessary) picking ω ∈ Ω̄ with chance P′(ω) until we get an ω in Ω. (Picking ω with chance P′(ω) is easy: for each i = 1, ..., m we pick ωᵢ ∈ Ωᵢ with chance Pᵢ(ωᵢ) and let ω = (ω₁, ..., ω_m).)
The time that the algorithm takes to achieve a given accuracy is roughly proportional to |Θ|m/P′(Ω), making it very efficient for problems where the evidences are not very conflicting (Wilson 1991).³ If, however, there is high conflict between the evidences, so that P′(Ω) is extremely small, then it will tend to take a very long time to find an ω in Ω.
Example. Let Θ = {x₁, x₂, ..., x_m}; for each i = 1, ..., m let Ωᵢ = {1, 2}, let Pᵢ(1) = Pᵢ(2) = ½, let Γᵢ(1) = {xᵢ} and let Γᵢ(2) = Θ. The triple (Ωᵢ, Pᵢ, Γᵢ) corresponds to a simple support function (see (Shafer 1976)) with mᵢ({xᵢ}) = ½ and mᵢ(Θ) = ½. The conflict between the evidences is very high for large m, since we have P′(Ω) = (m + 1)/2^m, so the simple Monte-Carlo algorithm is not practical.
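For concreteness, here is a sketch of the simple Monte-Carlo algorithm (our own encoding of source triples, not from the paper): each trial draws one coordinate per source and retries whenever the intersection Γ(ω) is empty. The small example combines two copies of a simple support function for the same singleton, for which the combined Bel({x1}) is exactly 3/4.

```python
import random

def simple_mc_bel(triples, X, trials, rng):
    """Simple Monte-Carlo estimate of Bel(X) for combined source triples.
    Each triple is (outcomes, probs, gamma), with gamma mapping an outcome
    to a nonempty frozenset subset of the frame."""
    X = frozenset(X)
    hits = 0
    for _ in range(trials):
        while True:
            inter = None
            for outcomes, probs, gamma in triples:
                w = rng.choices(outcomes, weights=probs)[0]
                F = gamma[w]
                inter = F if inter is None else inter & F
            if inter:          # Gamma(omega) nonempty: omega lies in Omega
                break          # otherwise retry, as the text prescribes
        hits += inter <= X     # trial value is 1 iff Gamma(omega) ⊆ X
    return hits / trials

# Two simple support functions with m_i({x1}) = m_i(Theta) = 1/2; there is
# no conflict here, and the combined Bel({x1}) is 3/4.
theta = frozenset({"x1", "x2"})
src = (["s", "t"], [0.5, 0.5], {"s": frozenset({"x1"}), "t": theta})
```

Under high conflict, as in the example above with large m, the inner while loop almost never exits; that is exactly the failure mode motivating the Markov Chain algorithms of the next section.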
4. Markov Chain Monte-Carlo Algorithms

Here we consider Monte-Carlo algorithms where the trials are not independent, but instead form a Markov Chain, so that the result of each trial is (probabilistically) dependent only on the result of the previous trial.

4.1 The Connected Components of Ω

The Markov Chain algorithms that we will consider require a particular condition on Ω to work, which we will call connectedness. This corresponds to the Markov Chain being irreducible (Feller 1950).
For i ∈ {1, ..., m} and ω, ω′ ∈ Ω, write ω ρᵢ ω′ if ω and ω′ differ at most on their ith co-ordinate, i.e., if for all j ∈ {1, ..., m} \ {i}, ω(j) = ω′(j). Let R be the union of the relations ρᵢ for i ∈ {1, ..., m}, so that ω R ω′ if and only if ω and ω′ differ at most on one co-ordinate; let the equivalence relation ≡ be the transitive closure of R. The equivalence classes of ≡ will be called the connected components of Ω, and Ω will be said to be connected if it has just one connected component, i.e., if ≡ is the relation Ω × Ω.

¹ It also seems to be more convenient for justification of Dempster's rule; see (Shafer 1981, Wilson 1993).
² There has been much discussion in the literature on the soundness of the rule, e.g., (Pearl 1990, IJAR 1992); justifications include (Shafer 1981, Ruspini 1987, Wilson 1989, 1993).
³ Of course, the constant of proportionality is higher if greater accuracy is required. If this algorithm is applied to calculate unnormalised belief (Smets 1988), which has been shown in (Orponen 1990) to be a #P-complete problem, then the 1/P′(Ω) factor in the complexity is omitted; this means that we have, given any degree of accuracy, a low-order polynomial efficiency algorithm for calculating an 'intractable' problem (up to that degree of accuracy).

4.2 The Basic Markov Chain Monte-Carlo Algorithm

The non-deterministic function PDS_N(ω₀) takes as input an initial state ω₀ ∈ Ω and a number of trials N, and returns a state ω. The intention is that when N is large, for any initial state ω₀, Pr(PDS_N(ω₀) = ω) ≈ P_DS(ω) for all ω ∈ Ω. The algorithm starts in state ω₀ and randomly moves between elements of Ω. The current state is labelled ω_c.

FUNCTION PDS_N(ω₀)
    ω_c := ω₀
    for n = 1 to N
        for i = 1 to m
            ω_c := operationᵢ(ω_c)
        next i
    next n
    return ω_c

The non-deterministic function operationᵢ changes at most the ith co-ordinate of its input ω_c: it changes it to y with chance proportional to Pᵢ(y). We therefore have, for ω, ω′ ∈ Ω,

Pr(operationᵢ(ω′) = ω) = Pᵢ(ω(i))/C_{ω′,i} if ω ρᵢ ω′, and 0 otherwise,

where the normalisation constant C_{ω′,i} is given by C_{ω′,i} = Σ_{ω ρᵢ ω′} Pᵢ(ω(i)).

4.3 The Calculation of Bel(X)

Now that we have a way of picking ω with chance approximately P_DS(ω), we can incorporate it, in the obvious way, in an algorithm for calculating Bel(X). This gives the function B^K_N(ω₀) with inputs ω₀, N and K, where ω₀ ∈ Ω is a starting value, N is the number of trials, and K is the number of trials used by the function PDS_K(·) used in the algorithm. The value B^K_N(ω₀) can be seen to be the proportion of the N trials in which Γ(ω_c) ⊆ X.

FUNCTION B^K_N(ω₀)
    ω_c := ω₀
    S := 0
    for n = 1 to N
        ω_c := PDS_K(ω_c)
        if Γ(ω_c) ⊆ X then S := S + 1
    next n
    return S/N

In the B^K_N(ω₀) algorithm, for each call of PDS_K(·), Km values of ω are generated, but only one, the last, is used to test if Γ(ω) ⊆ X. It may well be more efficient to use all of the values, which is what BEL_N(ω₀) does. The implementation is very similar to that for PDS_N(ω₀), the main difference being the extra line in the inside for loop. The value returned by BEL_N(ω₀) is the proportion of the time that Γ(ω_c) ⊆ X.

FUNCTION BEL_N(ω₀)
    ω_c := ω₀
    S := 0
    for n = 1 to N
        for i = 1 to m
            ω_c := operationᵢ(ω_c)
            if Γ(ω_c) ⊆ X then S := S + 1
        next i
    next n
    return S/(Nm)

The key result is the following.
Theorem. Suppose Ω is connected.
Then given ε > 0 there exists N′ such that for all N ≥ N′, any ω ∈ Ω and any starting value ω₀,
|Pr(PDS_N(ω₀) = ω) − P_DS(ω)| < ε;
given ε, δ > 0 there exist K′ and N′ such that for all K ≥ K′, N ≥ N′ and any ω₀,
Pr(|B^K_N(ω₀) − Bel(X)| < ε) ≥ 1 − δ;
and given ε, δ > 0 there exists N′ such that for all N ≥ N′ and any ω₀,
Pr(|BEL_N(ω₀) − Bel(X)| < ε) ≥ 1 − δ.

This shows that PDS_N approximates P_DS to arbitrary accuracy, and that B^K_N and BEL_N approximate Bel(X) to arbitrary accuracy. The proof of this theorem is a consequence of general convergence results for Markov Chain Monte-Carlo algorithms; a summary of these can be found in (Smith & Roberts 1993). However, the main problem with the convergence results is that, in general, it is very difficult to assess when we have reached the desired precision.
The reason that we require that Ω be connected is that, in the algorithms, the only values that ω_c can take are the members of the ≡-equivalence class of the starting position ω₀.

4.4 Speeding up Intersections

In the implementation of B^K_N and BEL_N we have to perform the operation operationᵢ(ω), which involves changing the ith co-ordinate of ω to y with a probability proportional to Pᵢ(y). The main difficulty lies in that the new ω′ has to belong to Ω, that is, Γ(ω′) ≠ ∅. In order to find the possible values of ω′, we have to calculate the intersection of all the sets Γⱼ(ωⱼ) for j ≠ i. This calculation is of order O(|Θ|(m − 1)).
However, this operation can be speeded up. Define, for each ω ∈ Ω, a function h_ω: Θ → ℕ given by h_ω(θ) = Σ_{i=1}^m 1_{Γᵢ(ωᵢ)}(θ), where 1_{Γᵢ(ωᵢ)}(θ) is equal to 1 if θ ∈ Γᵢ(ωᵢ) and 0 otherwise. Suppose we have stored the function h_ω. If we calculate h_ω − 1_{Γᵢ(ωᵢ)}, then the desired intersection is given by the elements of Θ with a maximum value of this difference. Suppose now that we have randomly picked a new ith co-ordinate, y. We can calculate the new h-function, h_ω′, by just adding 1_{Γᵢ(y)} to the above difference. This method allows us to calculate the intersection in O(|Θ|).

5. The Connectivity of Ω

For the above algorithms B^K_N and BEL_N to converge we require that Ω be connected. Many important cases lead to a connected Ω; for example, if the individual belief functions Belᵢ are simple support functions, consonant support functions, discounted Bayesian or any other discounted belief functions (i.e., with mᵢ(Θ) ≠ 0), then Ω will be connected. Other cases clearly lead to a non-connected Ω, for example if each Belᵢ is a Bayesian belief function. Unfortunately it will sometimes not be at all clear whether Ω is connected or not, and the obvious way of testing this requires a number of steps exponential in m. In 5.1 we construct a method for dealing with this problem.

5.1 Using Θ̄ to Find the Connected Components

Θ̄, the core of Bel, is defined to be ∪_{ω∈Ω} Γ(ω). For θ ∈ Θ̄, let θ* ⊆ Ω be the set {ω ∈ Ω : Γ(ω) ∋ θ} and, for i ∈ {1, ..., m}, let θᵢ* = {ωᵢ ∈ Ωᵢ : Γᵢ(ωᵢ) ∋ θ}. Define the relation R′ on Θ̄ by θ R′ ψ ⟺ ω R ω′ for some ω ∈ θ* and ω′ ∈ ψ* (the relation R was defined at the beginning of section 4). Let the equivalence relation ≡′ be the transitive closure of R′.

Proposition.
(i) Suppose θ, ψ ∈ Θ̄. Then θ R′ ψ ⟺ for at most one i, the set θᵢ* ∩ ψᵢ* is empty.
(ii) A one-to-one correspondence between the equivalence classes of ≡′ and ≡ is given by X → ∪_{θ∈X} θ*, for an ≡′-equivalence class X; the inverse of this mapping is given by W → ∪_{ω∈W} Γ(ω), for an ≡-equivalence class W.

Part (i) implies that R′ can be expressed easily in terms of commonality⁴ functions: θ R′ ψ ⟺ Qᵢ({θ, ψ}) = 0 for at most one i ∈ {1, ..., m}. The most important consequence of this proposition is that it gives an alternative method for testing if Ω is connected: we can use (i) to construct the equivalence classes of ≡′; by (ii), Ω is connected if and only if there is a single ≡′-equivalence class.

⁴ The commonality function Q corresponding to a mass function m is defined by Q(X) = Σ_{A⊇X} m(A) for X ⊆ Θ.
Often 8 will be very much smaller than 0, so this method will be much more efficient than a straightforward approach. 5.2 Finding a Starting Position w. E R The algorithms require as input a value 00 in a (and any element will do). It might seem hard to find such an element we if Q is very much smaller than 0. How- ever, we should have no problem in picking an ele- ment 8 in 8, the core of 0, since the core consists of possibilities not completely ruled out by the evi- dence. But then, for i = 1,. . . , m, we can pick wi such that l?i(wi) 3 8. Letting we = (WI,. . . ,w,,,), we have I’(wo) 3 8 SO w. E R as required. 5.3 Barely Connected R The convergence theorem guarantees that the algo- rithms will converge to the correct value for connected Q, but it does not say how quickly. The following ex- ample illustrates that the convergence rate will tend to be very slow if 52 is only barely connected (i.e., if it is very hard for the algorithm to move between some elements of Q). Example Let m = 2k - 1, for some k E AJ, and let 8 = {zi,zz}. For each i = 1,. . . , m let Q = {1,2}, let Pi(l) = Pi(2) = 3, let I’i(2) = 0 and, for i 5 k, let Fi(l) = {xl}, and, for i > k, let I’i(1) = (x2). Each triple (sli , Pi, Pi) corresponds to a simple support function. Q is very nearly not connected since it is the union of two sets (xl}* (which has 2k elements) and {x2}* (which has 2 ‘-l elements) which have just a singleton intersection ((2, . . . ,2)}. Suppose we want to use function Bg(wc) or function BELN(we) to estimate Bel((xl}) (which is just under 3). If we start with we such that F(we) = (21) then it will probably take of the order of 2” values of w to reach a member of {x2}*. Therefore if k is large, e.g. k = 30, and we do a million trials then our estimate of Bel((xl}) will al most certainly be 1. Other starting positions we fare no better. Since P’(Q) = 3/2’ the simple Monte-Carlo algo- rithm does not perform satisfactorily here either. 
Gen- erally, if 0 is barely connected, then it will usually be small in comparison to n, so the contradiction will tend to be high, and the simple Monte-Carlo algorithm will not work well either. 6. Experimental Testing The performance of the three Monte-Carlo algorithms for estimating Bel(X), MCBELN, BE and BELN, was 272 Uncertainty Management tested experimentally; MCBELN is the simple Monte- Carlo algorithm described in section 3, where N is the number of times an element w E a is picked with chance P’(w) (so the number of useful trials will be approximately NP’(SZ)). We considered randomly generated belief functions on a frame 8 with 30 elements. The number of focal el- ements (i.e, sets with non-zero mass) was chosen using a Poisson distribution with mean 8.0. Each focal ele- ment A was determined by first picking a random num- ber p in the interval [ 0, 11, and then, independently for each 8 E 0, including 8 in A with chance p. Two experiments were carried out. In the first one, six belief functions were combined and a set X C 0 was randomly generated. The exact belief of this event was calculated and 100 approximations were made for each of the three Monte-Carlo algorithms, with N = 5000, K = 6. This was repeated for ten different combina- tions and ten randomly selected sets X. The second experiment was very similar: the only difference being that 10 belief functions were combined and K = 10. The calculation times of the different algorithms were similar. In the light of the results we can con- elude: - The Markov Chain Monte-Carlo algorithms per- formed significantly better than the simple Monte-Carlo algorithm; e.g, in the first experi- ment, the mean errors of the Markov Chain al- gorithms were typically about 0.005, whereas in the simple algorithm, mean errors were typically about 0.01. - There was no significant difference between the performance of the two Markov Chain Monte- Carlo algorithms. 
In BELN we use more cases of the sample, but in BE there is a greater degree of independence between the cases. It also appears that when m is increased, the rel- ative precision of the Markov Chain algorithms with respect to the simple algorithm increases. This is due to the fact that the degree of conflict increases with m. Detailed results will appear in (Moral & Wilson 1994). 7, xtensions and Applications Calculation of Belief on Logics: Dempster-Shafer theory can easily be extended to logics (see also (Pearl 1990, Wilson 1991)). For example, let L be the lan- guage (i.e. the set of well-formed formulae) of a propo- sitional calculus. To extend the definitions of mass function, belief function, source triple and Dempster’s rule we can (literally) just replace the words ‘over 8’ by ‘on L’, replace 2e by fZ, replace 8 by I, E by b and intersection n by conjunction A.5 To adapt the algo- rithms we just need to change the condition I’(w,) E X to the condition I’(we) b X in the functions Bg(wo) and BELN(wo). nite frames: The Monte-Carlo algorithms open up the possibility of the computation of Dempster- Shafer belief on infinite frames 8, with perhaps also an infinite number of focal elements (so that 52 is in- finite). Clearly we will need some effective way of in- tersecting the focal elements; for example, this may be practical if 0 C L&Y for some n, and the focal elements are polytopes. The algorithms can also be used for calculating Dempster-Shafer belief in belief networks, and in decision-making, for calculating upper and lower ex- pected utility, see (Moral & Wilson 1994). iscussio Although the Markov Chain Monte-Carlo algorithms appear to often work well, there remains the problem of cases where Q is not (or is barely) connected. We will briefly discuss ways in which the algorithm could be improved for such cases. 
locking Components: A technique sometimes useful in Gibbs samplers is to block together compo- nents (see (Smith & Roberts 1993)); for our problem, this amounts to changing simultaneously more than one co-ordinate of w at a time; this can connect up components which were previously not connected (but will increase the time for each trial). Artificially Increasing s1: Recall that the state space Sz was defined to be {w E n : I”(w) # 0). If we use 52’ as the state space where 0 C 52’ C n, and define relations pi , R and c on 52’, then the functions B$(wo) and BELN(wo) will converge to Bel(X), given that 52’ is connected, so long as we don’t count trials where w E 0’ \ St ( i.e, we don’t increment S or the trial counter for such an w). This means that if St is not connected or is barely connected then we could improve the connectivity by judiciously adding extra points to Sz. Weighted Simulation: Another approach to solv- ing this type of problem, is to sample from a wrong but easier model and weighting to the distribution of interest (also known as ‘importance sampling’). This method has been used in Bayesian networks (Fung & 5Note that now a belief function does not determine a unique maus function. These definitions also work for many other logics, such as modal lo&s, or L could be the set of closed formulae in a first order predicate calculus. Uncertainty Management 273 Chang 1990). It could be used directly, or in conjunc- tion with the algorithms given here to improve the con- nectivity of 0. Several combinations of the above procedures could produce optimal results in diflicult situations. A sim- ple strategy is to combine first by an exact method the groups of belief functions with a high degree of conflict, and then to combine the results with a simulation pro- cedure. Acknowledgements The second author is supported by a SERC postdoc- toral fellowship. This work was also partially sup- ported by ESPRIT (I and II) basic research action 3085, DRUMS. 
We are also grateful for the use of the computing facilities of the School of CMS, Oxford Brookes University.

References

Dempster, A. P., 1967, Upper and Lower Probabilities Induced by a Multi-valued Mapping. Annals of Mathematical Statistics 38: 325-339.
Feller, W., 1950, An Introduction to Probability Theory and Its Applications, second edition, John Wiley and Sons, New York, London.
Fung, R., and Chang, K. C., 1990, Weighting and Integrating Evidence for a Stochastic Simulation in Bayesian Networks, Uncertainty in Artificial Intelligence 5, 209-220.
Geyer, C. J., 1992, Practical Markov Chain Monte-Carlo (with discussion), Statistical Science 7, 473-511.
Hrycej, T., 1990, Gibbs Sampling in Bayesian Networks, Artificial Intelligence 46, 351-363.
IJAR, 1992, International Journal of Approximate Reasoning, 6, No. 3 [special issue].
Kämpke, T., 1988, About Assessing and Evaluating Uncertain Inferences Within the Theory of Evidence, Decision Support Systems 4: 433-439.
Kennes, R., and Smets, Ph., 1990, Computational Aspects of the Möbius transform, in Proc. 6th Conference on Uncertainty in Artificial Intelligence, P. Bonissone and M. Henrion (eds.), MIT, Cambridge, Mass., USA, 344-351.
Laskey, K. B., and Lehner, P. E., 1989, Assumptions, Beliefs and Probabilities, Artificial Intelligence 41 (1989/90): 65-77.
Moral, S., and Wilson, N., 1994, Markov Chain Monte-Carlo Algorithms for the Calculation of Dempster-Shafer Belief, technical report, in preparation.
Orponen, P., 1990, Dempster's rule is #P-complete, Artificial Intelligence 44: 245-253.
Pearl, J., 1988, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann Publishers Inc.
Pearl, J., 1990, Reasoning with Belief Functions: An Analysis of Compatibility, International Journal of Approximate Reasoning, 4(5/6), 363-390.
Provan, G. M., 1990, A Logic-Based Analysis of Dempster-Shafer Theory, International Journal of Approximate Reasoning 4: 451-495.
Ruspini, E. H., 1987, Epistemic Logics, Probability and the Calculus of Evidence, Proc. 10th International Joint Conference on AI (IJCAI-87), Milan, 924-931.
Shafer, G., 1976, A Mathematical Theory of Evidence, Princeton University Press, Princeton, NJ.
Shafer, G., 1981, Constructive Probability, Synthese, 48: 1-60.
Shafer, G., 1990, Perspectives on the Theory and Practice of Belief Functions, International Journal of Approximate Reasoning 4: 323-362.
Shafer, G., 1992, Rejoinders to Comments on "Perspectives on the Theory and Practice of Belief Functions", International Journal of Approximate Reasoning, 6, No. 3, 445-480.
Smets, Ph., 1988, Belief Functions, in Non-standard Logics for Automated Reasoning, P. Smets, E. Mamdani, D. Dubois and H. Prade (eds.), Academic Press, London.
Smith, A. F. M., and Roberts, G. O., 1993, Bayesian Computation via the Gibbs Sampler and Related Markov Chain Monte-Carlo methods (with discussion), J. Royal Statistical Society B 55, 3-23.
Wilson, N., 1989, Justification, Computational Efficiency and Generalisation of the Dempster-Shafer Theory, Research Report no. 15, June 1989, Dept. of Computing and Mathematical Sciences, Oxford Polytechnic; to appear in Artificial Intelligence.
Wilson, N., 1991, A Monte-Carlo Algorithm for Dempster-Shafer Belief, Proc. 7th Conference on Uncertainty in Artificial Intelligence, B. D'Ambrosio, P. Smets and P. Bonissone (eds.), Morgan Kaufmann, 414-417.
Wilson, N., 1993, The Assumptions Behind Dempster's Rule, Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI93), David Heckerman and Abe Mamdani (eds.), Morgan Kaufmann Publishers, San Mateo, California, 527-534.
Paul O'Rorke
Department of Information and Computer Science
University of California, Irvine, CA 92717-4425
ororke@ics.uci.edu

Abstract

This paper describes a new method, called Decision-Theoretic Horn Abduction (DTHA), for generating and focusing on the most important explanations. A procedure is given that can be used iteratively to generate a sequence of explanations from the most to the least important. The new method considers both the likelihood and utility of partial explanations and is applicable to a wide range of tasks. This paper shows how it applies to an important engineering design task, namely Failure Modes and Effects Analysis (FMEA). A concrete example illustrates the advantages of the general approach in the context of FMEA.

Introduction

Abduction, the process of finding and evaluating explanations, is important in a number of areas of AI, including diagnosis and natural language understanding. One of the difficulties associated with abduction is that there are far too many explanations and it is difficult to focus on the best ones. The definition of "best" and of methods for comparing and evaluating explanations is also difficult. Many of the most advanced abduction methods address these problems by focusing on and preferring the most likely explanations (taking a Bayesian probabilistic approach to gauging likelihood). Poole (1992, 1993) describes a general approach called Probabilistic Horn Abduction (PHA) and shows how it can be applied to tasks such as diagnosis. Probabilistic approaches represent an advance extending and improving previous methods. However, further improvement is needed because sometimes the most likely explanations are not the most important ones. For an example, consider a diagnostic situation involving symptoms that usually indicate a benign condition. But assume that sometimes these symptoms indicate a malignant and frequently fatal disorder.
It may well be desirable to focus attention first on the diagnosis that corresponds to the potentially deadly problem even if it is far less likely. The method described in this paper, Decision-Theoretic Horn Abduction (DTHA), focuses on the most important explanations, not just the most likely ones. Importance is measured using decision theory, which extends probability theory by combining numerical measures of likelihood or probability with measures of value or utility. The DTHA method extends Poole's (1992) Probabilistic Horn Abduction (PHA) procedure for finding maximally likely explanations of individual observations. DTHA considers multiple observations (a finite number of "outcomes" in the terminology of decision theory) with differing importance (as indicated by given numerical "utility" scores). DTHA computes the importance of alternative explanations by multiplying the utility of the outcomes by the products of the prior probabilities of the assumptions underlying the corresponding explanations. Extreme values (whether maximal or minimal) are considered to be more important. DTHA focuses on the most important explanation: it will work on unlikely explanations if the outcomes are sufficiently valuable and it will pursue explanations of less valuable outcomes if they are sufficiently likely.

Decision-Theoretic Horn Abduction

A version of Decision-Theoretic Horn Abduction based closely on Poole's (1992) PHA procedure is given in table 1. The inputs are: a finite list of "assumables" a1, ..., am and corresponding probabilities p1, ..., pm; a finite list of inconsistent assumptions nogood(ai, aj), where i, j ∈ {1, ..., m}; a Probabilistic Horn Abduction theory; and a finite set of outcomes o1, ..., on and corresponding utilities u1, ..., un. The output is (one of) the most important explanation-outcome pair(s). The notation used in table 1 is to be understood as follows.
For each value of i from one to n, Di contains done and Pi contains partial explanations of outcome oi. Partial explanations have the form (A, p, C), where A contains the assumptions made so far, p is the probability of the partial explanation, and C is a conjunction of conditions that remain to be explained. The probability p of the partial explanation is the prior probability of the set of assumptions A. Note that it is important to distinguish the probabilities associated with sets of assumptions and partial explanations from the probabilities associated with individual assumptions, although they are related. The connection is that p = ∏_{aj ∈ A} pj under the independence assumptions associated with PHA. The meaning of the remaining notation is as follows. For i = 1, ..., n let p̂i = max{p : (A, p, C) ∈ Pi}. At the beginning of execution of the procedure, each p̂i represents the probability of one of the most likely partial explanations of the outcome oi. This explanation, denoted (A, p̂i, C), is at least as likely as any other explanation of the i-th outcome in Pi.

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Table 1: A Version of Decision-Theoretic Horn Abduction (DTHA)

Initialization: for i = 1, ..., n set Di := ∅ and set Pi := {(∅, 1.0, oi)}

Procedure: Do while ∃i such that 1 ≤ i ≤ n ∧ Pi ≠ ∅
1. let i ∈ {1, ..., n} and the corresponding (A, p̂i, C) ∈ Pi be such that p̂i × ui = max_{j=1,...,n} p̂j × uj
2. set Pi := Pi \ {(A, p̂i, C)}
3. if C = true
(a) then if ok(A, i) then set Di := Di ∪ {A}, output A, and halt
(b) else let C = a ∧ R
i. for each rule h ← B such that mgu(a, h) exists, set θ := mgu(a, h) and set Pi := Pi ∪ {(A, p̂i, (B ∧ R)θ)}
ii. if ∃j ∈ {1, ..., m} such that a = aj and ok(A ∪ {aj}) then set Pi := Pi ∪ {(A ∪ {aj}, p̂i × pj, R)}

The initialization step of the procedure sets all the groups of done explanations to the empty set ∅.
The sets of partial explanations Pi are initialized to singletons containing the explanations (∅, 1.0, oi). This is because the initial goal is to explain the outcome, the initial estimate bounding the probability of the goal from above is one, and no assumptions have been made yet in pursuit of this goal. The body of the procedure is as follows. Step 1 selects the most important partial explanation to work on (if there is a non-empty set of partial explanations). The importance of a partial explanation (A, p, C) of outcome oi is defined here as p × ui. This measure of importance is desirable because it is larger for more likely explanations and for outcomes with larger utilities. Step 1 finds a particular value of i and a partial explanation for outcome oi with maximal importance over all known partial explanations, including explanations of other outcomes. Keep in mind that the importance and probability associated with a partial explanation are an upper bound: additional assumptions are often needed to complete the explanation and these assumptions reduce the probability of the original partial explanation (see step 3(b)ii). For this reason, the "most important" partial explanation at one stage may produce explanations that are less important later on. Step 1 and the following steps ensure that DTHA "loses interest" in such explanations once they become less important than some alternative candidate. The remaining steps execute a round of Probabilistic Horn Abduction focused on (A, p̂i, C) and oi. Step 2 deletes the partial explanation that is about to be processed. If the explanation is complete, step 3a checks its assumptions to see whether they are acceptable. If so, it records the explanation, communicates it to the user, and halts processing (at least for now).
The acceptability test ok has two components: 1) a check new that makes certain that the same explanation of the same outcome has not been seen before, and 2) a limited consistency check consistent that ensures that no pair of assumptions in the explanation is an instance of a nogood. (To be specific, ok(A, i) ≡ new(A, i) ∧ consistent(A), where new(A, i) ≡ ¬∃D ∈ Di [D ⊆ A] and consistent(A) ≡ ∀{aj, ak} ⊆ A ∀x, y [nogood(x, y) → ¬∃θ [(aj, ak) = (x, y)θ]].) If the partial explanation is not complete, there is more work to be done. At least one condition (a) remains to be proven, so the following steps focus on it. Backward chaining occurs in step 3(b)i. If there is a rule (with head h and body B) that concludes the condition that is desired, that condition is deleted and the conditions of the rule are added in its place to the remaining conditions R. Assumptions are made in step 3(b)ii. If the condition to be explained is assumable, it is deleted from the conditions to be explained and added to the assumptions supporting the explanation. The probability of the partial explanation is reduced by multiplying it by the prior probability of the new assumption. The backward chaining and assumption steps are mutually exclusive under the assumptions made in Probabilistic Horn Abduction. Once one or the other step is executed, the procedure is repeated. Then, the selection step (1) may shift attention elsewhere. The procedure given in table 1 halts when it arrives at a single explanation of a single outcome. However, if the main body of the procedure is called again, it will deliver the next most important explanation, and so on, until the partial explanations are exhausted for all the outcomes. A variant of this procedure suitable for finding the most costly, most likely potential outcomes can be had by representing costs as negative utilities and taking min instead of max in step 1. This version will be discussed below.
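The loop of table 1 is essentially best-first search over partial explanations ranked by p × u. A minimal propositional sketch (so that mgu reduces to atom equality) is given below; the rules, assumables, and the dictionary encoding are my own toy instance, not the paper's PROLOG representation:

```python
import heapq
from itertools import count

# Toy propositional instance (assumed encoding, not the paper's syntax).
rules = {"fire": [["leak"]]}                # head -> list of rule bodies
assumables = {"leak": 0.01, "short": 0.05}  # prior probabilities
nogoods = set()                             # frozensets of incompatible pairs
outcomes = {"fire": 1e6, "short": 1e5}      # outcome -> utility (here a cost)

def dtha(rules, assumables, nogoods, outcomes):
    """Best-first generation of explanations ordered by importance p * u,
    a propositional sketch of table 1."""
    done = {o: [] for o in outcomes}
    tie = count()
    # Max-heap via negated keys; entries: (-p*u, tiebreak, A, p, conds, o).
    heap = [(-u, next(tie), frozenset(), 1.0, (o,), o)
            for o, u in outcomes.items()]
    heapq.heapify(heap)

    def consistent(A):
        return not any(ng <= A for ng in nogoods)

    while heap:
        _, _, A, p, conds, o = heapq.heappop(heap)
        if not conds:                        # step 3(a): complete explanation
            if consistent(A) and not any(D <= A for D in done[o]):
                done[o].append(A)
                yield A, p, o
            continue
        a, rest = conds[0], conds[1:]
        for body in rules.get(a, []):        # step 3(b)i: backward chaining
            heapq.heappush(heap, (-(p * outcomes[o]), next(tie),
                                  A, p, tuple(body) + rest, o))
        if a in assumables:                  # step 3(b)ii: assume a
            A2, p2 = A | {a}, p * assumables[a]
            if consistent(A2):
                heapq.heappush(heap, (-(p2 * outcomes[o]), next(tie),
                                      A2, p2, rest, o))

for A, p, o in dtha(rules, assumables, nogoods, outcomes):
    print(sorted(A), p, o)
```

With these numbers the "fire" explanation (probability 0.01, importance 0.01 × 10^6) is generated before the more probable "short" explanation (0.05, importance 0.05 × 10^5), mirroring the paper's point that likelihood alone does not determine priority.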
Equivalently, the absolute value can be taken before taking the max, or the negative utilities can all be made positive prior to input. The version of DTHA described in table 1 is suitable for abductive planning (Elkan, 1990; Krivičič & Bratko, 1993). In this application, the goal is to find a set of assumptions and a chain of inferences specifying a course of action leading to an outcome that maximizes expected utility. It is also possible to apply a variant of DTHA to consider potentially harmful outcomes. The goal then is ultimately to minimize distress. This is done by identifying the most likely and most harmful outcomes so that something can be done to avert or avoid them. Failure Modes and Effects Analysis (FMEA) is a good example of this type of application. FMEA is used to illustrate the method in the following section.

Method for Failure Modes and Effects Analysis

This section presents a novel abductive approach to Failure Modes and Effects Analysis (FMEA). This form of reliability analysis and risk assessment is often required by governments of their contractors and in turn it is often required by these companies of their subcontractors. FMEA is extensively used in the aerospace and automotive industries worldwide. The stated purposes of FMEA are to determine the consequences of all possible failure modes of all components of a system, to assess risks associated with failures, and to recommend actions intended to reduce risks (Henley & Kumamoto, 1991).

In FMEA, pairs of failure modes and outcomes are prioritized using so-called "risk priority numbers (RPNs)." These numbers take into account three things: the undesirability of the effects caused by failures, their likelihood, and their detectability. This paper ignores the issue of detectability and considers only how the likelihoods and costs of failures can be used to prioritize the automatic construction of FMEAs. The key ideas are: 1) to associate FMEAs with explanations of how undesirable effects can be caused by failures and 2) to order the generation of FMEAs so that the most likely and most undesirable outcomes are considered first. The first idea is implemented by specifying a model and/or theory of the system undergoing FMEA, including the following: 1) normal and failure modes and their associated prior probabilities; 2) outcomes and their utilities; and 3) rules governing causal connections between the failure modes and outcomes. The second idea is implemented by using DTHA and by considering a partial explanation P1 to be more important than a partial explanation P2 if the probability of P1 times the cost of the associated outcome is larger than the probability of P2 times the cost of its outcome.

A Model of an Automotive System

An automotive example is given in this section as an illustration of the ideas and as a demonstration of how to implement FMEA as a special case of DTHA. The example is loosely based on real events. The first event involved engine fires in General Motors cars. The fires were caused by a fuel hose coming loose and leaking. This prompted a recall. A similar event reported in the Los Angeles Times (Nauss, 1993) involved engine fires in GM's new Saturn line caused by a short circuit in the generator leading to electrical overloads in a wiring harness. All Saturn sedans, coupes, and station wagons built between 1991 and 1993 were recalled. Automotive industry analysts estimated the direct cost of the recall as $8-$20 million. This example indicates that the costs that can be incurred when potential faults are not anticipated can be substantial.

Another event involved solder joints in the circuit connecting an electronic controller to a sensor. Most modern cars have electronic controllers that regulate ignition, fuel injection, and so on. Sensors placed in strategic parts of the engine provide the controller with the information needed to optimize the performance of the engine.
(Schultz, Lees, & Heyn, 1990.) In pre-sales field testing by another manufacturer of an engine with an electronic controller and sensors, it was discovered that a wire connected to a sensor's housing could come loose due to faulty solder joints. Without the information provided by the sensor, the electronic controller cannot regulate the engine properly. This can result in an unacceptable increase in emissions (to a level above the maximum allowed by government regulations). The problem was caught before the cars were distributed to the public, so a costly recall was avoided. Computerized FMEA will enable us to anticipate similar problems prior to field testing during the final stages of design, thus avoiding the manufacture of trouble-prone systems.

The following example shows how DTHA can be applied to FMEA and illustrates the behavior of DTHA in the context of FMEA. Recall that this approach requires a model including normal and failure modes, probabilities, outcomes and their utilities, plus causal connections between failure modes and outcomes. The model provided in the example is sketched in figure 1. There are three subsystems modeled as chambers connected together by pipes. Fuel enters the fuel pump, then goes through the delivery pipe to the injection system. Next, fuel is injected into the combustion chamber through a nozzle. From the combustion chamber, the (burned) fuel goes through an exhaust pipe to the exhaust manifold. The model also has an electrical circuit. Two wires connect an electronic controller to a sensor (which might be in the injection system, the fuel pump, or elsewhere in the car, e.g., the tachometer). Connections are modeled explicitly as components: pipes are connected to chambers by pipe joints and wires are connected to other components by solder joints. The structural part of the model comprises the components and connections just described.
The model also provides information about normal and failure modes and how often they occur. All the basic components are either in a normal or a broken state.¹ Most of them are considered to be highly reliable: the prior probability of the broken state is only 0.0001, so the probability of normality is 0.9999.² The connections are less reliable: the prior probability of breakage for pipe joints and solder joints is 0.01. Sensors are the least reliable components: the prior probability that a sensor is broken is 0.05.

Figure 1: Sketch of an Automotive System (panels include the fuel injection system and the combustion chamber).

The outcomes of concern in this case are: 1) there might be a fire in the engine compartment, 2) the exhaust emissions might be too high, and 3) the engine might run inefficiently. These outcomes are assigned costs of 10^6, 10^5, and 10^4 respectively. The immediate causes of the outcomes are specified by rules. Two rules specify two independent causes of fires in the engine compartment due to fuel leaks in a pipe and a pipe joint near hot parts of the engine. Another rule states that the exhaust will be dirty if the electronic controller is uninformed. The final rule says that the engine will be less efficient if the electronic controller is uninformed. Another set of rules specifies different aspects of the behaviors and functions of the components in the system. These rules provide the remaining connections linking the outcomes to their possible causes in terms of failure modes of the components. Two rules specify normal and failure modes of pipes and pipe joints. Normally, pipes connecting chambers cause the propagation of their contents. (To avoid cycles, flows are considered to be unidirectional.) When pipes or joints are broken, they leak. A fact states that the fuel pump supplies the fuel injection system with fuel. Additional rules specify that the electronic controller will be uninformed if a sensor is broken, or if the circuit connecting the electronic controller and the sensor is broken. A rule states that a circuit is broken if there is a component in the circuit that is broken. The rules for failure modes of solder joints say that when a connection between two wires is broken, the voltage on the wires goes to zero.

¹Failure rates and states are assigned to basic but not to complex components.
²The states of components are considered to be mutually exclusive and exhaustive.

FMEA for the Automotive System

This section describes the behavior of the Decision-Theoretic Horn Abduction algorithm when it is invoked repeatedly given the model of the previous section as input. The following text summarizes a trace produced by an implementation of DTHA in PROLOG. The three outcomes are labelled in descending order of their costs as F, H, and L for engine fire, high emissions, and low efficiency respectively. Let these labels be variables that stand for the weighted cost of the most likely cause of the corresponding outcome. The variables are initialized with the outcomes' costs since they might be unconditionally true. The labels are intended to be mnemonic and their alphabetic order reflects their initial numerical order: F > H > L. As the analysis proceeds, the variables will be updated and their order will change.

F = 10^6 > H = 10^5 > L = 10^4 - First, an attempt is made to explain the "engine fire" outcome without making any assumptions. The attempt fails but two partial explanations involving assumptions are added to the set of partial explanations for this outcome: one corresponds to a broken pipe and the other to a broken pipe joint. The pipe joint is considered to be relatively unreliable (p_pj = 10^-2) while the pipe is considered to be one of the more reliable (p_p = 10^-4) components. So the leading possibility is that the pipe joint will leak and cause an engine fire. This has a weighted cost of 10^-2 × 10^6 = 10^4.
H = 10^5 > F = 10^4 = L - Now "high emissions" costs more than the most likely possible cause of engine fires so it is pursued. Hypotheses about a faulty sensor (p_s = 5 × 10^-2) and solder joints (p_sj = 10^-2) and wires (p_w = 10^-4) are added to the set of partial explanations for high emissions. The most likely hypothesis is that a sensor will be faulty. The corresponding weighted cost is 5 × 10^3.

F = 10^4 = L > H = 5 × 10^3 - The focus returns to engine fires.³ A possible cause, that the second pipe joint will break, is found and printed out. The next most likely explanations (involving wires) have weighted costs of 10^2.

L = 10^4 > H = 5 × 10^3 > F = 10^2 - Low efficiency becomes the most important outcome. The same conditions that can contribute to high emissions can contribute to low efficiency so they are added to the set of partial explanations for low efficiency as well. Again, the most likely is sensor failure (p = .05). So the weighted cost associated with the most likely cause of low efficiency is now 5 × 10^2.

H = 5 × 10^3 > L = 5 × 10^2 > F = 10^2 - Now the most important outcome is high emissions and the sensor is the most likely possible cause of this problem. This fact is reported to the user. This process will continue as long as the outputs are sufficiently important to the user. In this example, a complete list of cause-consequence pairs is possible and is shown in table 2. The behavior of DTHA on the example is also summarized in table 2. The cause-consequence pairs are shown in order of generation in the first column.

³Low efficiency could have been chosen at this point since it has the same weighted cost.

Table 2: DTHA-FMEA on the Automotive Example

Causes → Consequences (in order of generation):
pipe joint 2 → F
sensor 1 → H
solder joints 1, 2, 3, and 4 → H
sensor 1 → L
solder joints 1, 2, 3, and 4 → L
delivery pipe → F
wires 1 and 2 → H
wires 1 and 2 → L

Priorities at each step:
F > H > L; H > F = L; F = L > H; L > H > F; H > L > F; H > L > F; L > F > H; L = F > H; F > H > L; H > L
The second column shows the priorities of the system at each step. Failure Modes and Effects Analyses were done for the top priority outcome at each step, resulting in the corresponding cause-consequence pairs. Although the sensor is five times more likely to break than the pipe joint, the joint is chosen for the first FMEA because the outcome engine fire is ten times more costly than high emissions, which is the most costly consequence of the sensor failing. Next, the possible causes of failure of the sensor circuit are enumerated. All of these are considered before the remaining cause of the most costly outcome because that cause (failure of the delivery pipe) is so unlikely. Finally, the most reliable components whose failure could lead to the less important outcomes are considered in turn. Ignoring the initial consideration of each outcome, the pattern in the example was F then H then L followed by a return to F then H then L.

Relation to Previous Work

Work on abduction in AI dates back to Pople (1973). There are many different approaches to explanation construction and evaluation, including case-based (Leake, 1992) and connectionist (Thagard, 1989) methods that address many of the same issues. The abduction method described in the present paper is a descendent of logic-based methods for generating explanations embodied in Theorist (Poole, Goebel, & Aleliunas, 1987) and Tacitus (Hobbs, Stickel, Martin, & Edwards, 1988). Recently, logical and symbolic approaches to abduction have been extended by adding probability, and the most probable explanations have been considered to be the best ones. For example, Peng and Reggia (1990) adapt a probabilistic approach to diagnosis, viewing it as a special case of abduction. Charniak and Goldman (1993) take a probabilistic approach to plan recognition, a special case of abduction that occurs in natural language processing.
Poole (1992, 1993) describes a general integration of logical and probabilistic approaches to explanatory reasoning called Probabilistic Horn Abduction (PHA). A major weakness of these methods is that they do not take utilities or values into consideration when they evaluate and search for explanations. It is important to do so in many practical situations, for example in abductive planning and in considering failures that might cause undesirable outcomes (as in FMEA).

The abductive approach to FMEA differs from existing AI approaches to FMEA described in (Hunt, Price, & Lee, 1993; Price, Hunt, Lee, & Ormsby, 1992), largely due to the difference between postdiction and prediction. The abductive approach is an example of postdiction since it infers possible causes or reasons for given effects. Previous approaches use prediction to infer possible effects from given causes. Some use a simple forward-chaining simulator for prediction. More sophisticated qualitative and model-based reasoning is used for prediction in the approaches cited above. The main advantage of predictive (especially model-based) approaches to FMEA is that they can generate consequences that were not anticipated in advance. The main advantage of the DTHA approach is that it automatically focuses on the most important FMEAs first, although all FMEAs can be generated if required. Another advantage is that it works for multiple faults.

Limitations

Although it appears to be adequate for significant practical tasks such as FMEA, the method described here is limited due to the assumptions employed. For example, the model of outcomes and utilities is extremely simple: it is assumed that outcomes and utilities are given by the user. This is reasonable in the context of FMEA, since designers typically have a finite and small number of outcomes they are concerned about and they usually know the utilities of these undesirable effects of component failures.
But in more complex situations, methods for acquiring and calculating utilities from users and from more basic information will be needed. Work on these issues in the new field of Decision-Theoretic Planning seems relevant.

Conclusion

This paper provides a new method that generates explanations focusing on the most important ones. Importance is measured taking into account utilities or values in addition to probabilities, so the method is called Decision-Theoretic Horn Abduction (DTHA). The addition of utilities is an important improvement over existing probabilistic approaches because in many situations the most likely explanations are not the best ones to focus on or generate. For example, in diagnosis, the most dangerous disease that explains the symptoms should often be considered before more common but less dangerous disorders. In abductive planning, in determining the best explanations of how to achieve a goal one should often take into consideration the value of the goal relative to alternatives and the costs of the actions involved in the plans in addition to the likelihood of success.

The paper provides an example illustrating the general DTHA method in the context of a task, Failure Modes and Effects Analysis (FMEA), that involves utilities in addition to probabilities. In this context, the explanations correspond to assumptions about various components being in failure or normal modes, and these cause outcomes that correspond to costly effects that the designers wish to avoid. The example demonstrates that the method is capable of keeping priorities straight in deciding which explanation-outcome pairs to generate. The method shifts the focus of attention: starting on one outcome, moving to another, and returning to an earlier focus. When the more costly or valuable outcome is at least as probable, or not too improbable, it is pursued.
On the other hand, the method focuses attention on the most important explanations and outcomes even when this requires switching attention from more to less costly or from more to less probable outcomes.

Acknowledgments

The author is grateful to the reviewers; to David Poole for conversations and for Prolog code implementing PHA; and to Chris Price, Andy Ormsby, and David Pugh for conversations and for a demo of their FMEA system FLAME II. The author was a consultant with FAW (the Research Institute for Applied Knowledge Processing) during the research and the development of the prototype described here. Rüdiger Wirth, FAW FMEA group leader, provided assistance and support.

References

Charniak, E., & Goldman, R. P. (1993). A Bayesian model of plan recognition. Artificial Intelligence, 64, 53-79.
Elkan, C. (1990). Incremental, approximate planning. The Eighth National Conference on Artificial Intelligence (pp. 145-159). San Mateo, CA: AAAI Press/The MIT Press.
Henley, E. J., & Kumamoto, H. (1991). Probabilistic risk assessment: Reliability engineering, design, and analysis. Piscataway, NJ: IEEE Press.
Hobbs, J. R., Stickel, M., Martin, P., & Edwards, D. (1988). Interpretation as abduction. Proceedings of the Twenty-Sixth Annual Meeting of the Association for Computational Linguistics (pp. 95-103). Buffalo, NY: The Association for Computational Linguistics.
Hunt, J. E., Price, C. J., & Lee, M. H. (1993). Automating the FMEA process. Intelligent Systems Engineering, 2(2), 119-132.
Krivičič, K., & Bratko, I. (1993). Abductive planning with qualitative models of dynamic systems. In N. Piera-Carrete, & M. G. Singh (Eds.), Qualitative Reasoning and Decision Technologies (pp. 389-395). Barcelona: CIMNE.
Leake, D. B. (1992). Evaluating explanations: A content theory. Hillsdale, NJ: Lawrence Erlbaum Associates.
Nauss, D. W. (1993). Nearly all Saturns recalled over potential engine fires. Los Angeles Times (Orange County Edition). Los Angeles, CA: A1 and A19.
The Emergence of Ordered Belief from Initial Ignorance

Paul Snow
P.O. Box 6134
Concord, NH 03303-6134
paulsnow@delphi.com

Abstract

Some simple assumptions about prior ignorance, and the idea that a sufficiently arresting contrast in the likelihoods of evidence will elicit belief that one proposition is at least as belief-worthy as another, lead to a partial ordering of propositions without the use of any kind of prior probability. The partial ordering is not a posterior probability distribution, but does share some intuitively pleasing properties of a probability, such as complementarity. Deciding the order (if any) between two disjunctions depends only on the highest-likelihood disjunct in each, and so query handling in partitioned domains is efficient. In the event that an ordinary probability distribution is required for coherent decision making, one can be quickly calculated from the partial order.

Introduction

Ignorance is the unwillingness to order any of two or more sentences according to their belief-worthiness, unless one sentence implies the other. The unwillingness may be a matter of choice, as when a scientist wishes to interpret evidence about rival hypotheses without taking into account any personal views about the prior likeliness of the various rivals (Berger and Berry 1988). Other times, the unwillingness may be involuntary, as when there is simply no basis for holding any opinion about the relative likeliness of the sentences in question. However it arises, ignorance is not faithfully represented by any single probability distribution over the sentences. Whatever probabilities are assigned to the sentences, those probabilities are ordered with respect to one another, even though the sentences themselves generally are not. (For discussion of similar problems when representing ignorance in other uncertainty calculi, see Shenoy 1993.) The approach developed in this paper avoids the assessment of a prior probability distribution under ignorance.
Nevertheless, the emergence of ordered belief from prior ignorance retains a distinctly probabilistic flavor.

Notation and Assumptions about Ignorance

The notation S >e> T will denote the condition that the believer asserts that sentence S is, with a warrant satisfactory to the believer, at least as belief-worthy as sentence T in light of evidence e. If evidence e does not lead the believer to assert such an ordering of sentences S and T, then we write S ?e? T. Note that this is distinct from asserting the contrary of S >e> T, which would be holding that S is less belief-worthy than T. The condition of having no relevant evidence is indicated by the particle nil, as in S ?nil? T, which expression denotes that there is no ordering between some sentences S and T in the absence of evidence. We assume that the sentences of interest belong to a partitioned domain, which is defined as follows:

Definition. A partitioned domain is a set comprising: (i) the always-true sentence, denoted true; (ii) the always-false sentence, denoted false; (iii) two or more mutually exclusive sentences, called atoms; (iv) well-formed expressions involving atoms, or, and parentheses, called simple disjunctions; (v) well-formed expressions involving simple disjunctions, or, not, and parentheses.

We shall also assume throughout that the atoms in the domain are collectively exhaustive, that is, exactly one of the atoms is true. This additional assumption places little epistemological burden on the believer (at worst, it means that one of the atoms is "none of the other atoms are true"), and has the convenient effect that every sentence in the domain has an equivalent simple disjunction.

Uncertainty Management 281
From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Our first assumptions about ignorance, and the conquest of ignorance by evidence, express the following ideas.
If no evidence has yet been observed, and the question of relative belief-worthiness is not answerable on logical grounds, then there is no satisfactory warrant to order one sentence ahead of another. Even after evidence has been observed, the question may remain open. Once a commitment to an ordering is made, then other commitments may be inferred by conditional probability considerations, or by a fundamental belief-ordering consistency principle of the kind discussed by Sugeno (unpublished dissertation, cited in Prade 1985). The formal assumptions are:

A1. (Lack of explicit non-trivial prior orderings) For any sentences S and T, S >nil> T implies that T implies S.

A2. (Lack of implicit non-trivial prior orderings) Values for conditional probabilities and orderings among them are neither known nor assumed if those values or orderings imply non-trivial constraints on the prior probabilities.

A3. (Consistency) For all evidence e, including nil, and any sentences S, S', T and T': if S' implies S, then S >e> S'; if S' implies S and S' >e> T, then S >e> T; if T' implies T and S >e> T, then S >e> T'.

A4. (Impartiality) If S >e> T, and S' and T' are sentences, and S is exclusive of T, then if S' is exclusive of T and p(e | S') >= p(e | S), then S' >e> T, and if S is exclusive of T' and p(e | T) >= p(e | T'), then S >e> T'.

A5. (Recovery from ignorance about atoms) For exclusive atoms s and t, and non-nil evidence e where p(e | s) > 0, a necessary and sufficient condition for s >e> t is that

f(e, s, t) >= q    [A5.1]

where q is a real number chosen by the believer, and f( , , ) is a real-valued function chosen by the believer which is increasing in p(e | s) and decreasing in p(e | t), and such that a necessary condition for [A5.1] to hold is that p(e | s) is strictly greater than p(e | t), and such that p(e | t) = 0 is not a necessary condition for [A5.1] to hold.

A6.
(Quasi-additivity) For any sentences S, T, and U where (S and U) and (T and U) are both false, and for all evidence e, including nil, S or U >e> T or U if and only if S >e> T.

An Inference Rule for Overcoming Ignorance in Simple Disjunctions

Assumptions A3 and A4 have a strong consequence when the propositions of interest belong to a partitioned domain. It is easy to show that if D is a simple disjunction, then the conditional p(e | D) is a convex combination of the p(e | d)'s, the conditionals for the evidence given each of the atoms within D. Thus,

p(e | D) =< max{ p(e | d) : d an atom in D }    [1]

Theorem 1. Let S and T be simple disjunctions which are mutually exclusive, and let s and t be atoms where p(e | s) and p(e | t) are the greatest conditional probabilities for non-nil evidence e given atoms in S and T respectively. S >e> T if and only if s >e> t.

Proof. S >e> T implies s >e> T by A4 and [1], which implies s >e> t by A3. Conversely, s >e> t implies s >e> T by A4 and [1], which implies S >e> T by A3. //

The theorem and assumption A5 lead to the following rule for deciding whether observed evidence e bearing on the states supports the assertion of S >e> T under certain circumstances:

Inference Rule. If S and T are simple disjunctions with no states in common, and if s and t are such that p(e | s) and p(e | t) are the greatest conditional probabilities for the evidence e given any atom in S and T respectively, then a sufficient condition for S >e> T is that f(e, s, t) >= q, where f( , , ) and q are as described in assumption A5.

This inference rule is strong enough by itself to handle problems like statistical hypothesis testing, where typically, disjoint propositions are compared, and often only one pair of propositions in a domain is of interest at any one time. Some further development to be introduced later will use the rule in a decision procedure which is applicable to all non-trivial ordering questions in partitioned domains.

Definition.
A partial qualitative probability is a partial order of the sentences in a partitioned domain, such that, for all evidence e, including nil, and any sentences S, T, and U:

(i) (boundedness) true >e> S and S >e> false
(ii) (transitivity) (S >e> T) and (T >e> U) implies that S >e> U
(iii) (quasi-additivity) if (S and U) and (T and U) are both false, then (S or U) >e> (T or U) if and only if S >e> T.

This definition is designed to echo that of an ordinary qualitative probability (de Finetti, 1937), differing only in being a partial, rather than a complete, ordering. Within a partial qualitative probability, any ordering question involving simple disjunctions can be resolved by the theorem and the inference rule. To decide whether S >e> T:

(1) Eliminate from S and T all the atoms common to both, leaving S* and T*.
(2) If S* and T* are both empty, then S >e> T; if S* is empty and T* is not, then not (S >e> T); if T* is empty and S* is not, then S >e> T. Otherwise, apply the Inference Rule derived from Theorem 1 to S* and T*; S >e> T just in case S* >e> T*.

Partial qualitative probabilities also share an intuitively appealing property with ordinary probability distributions:

Theorem 2. (Complementarity) If S and T are simple disjunctions, and ">e>" is a partial qualitative probability, then S >e> T implies not(T) >e> not(S).

Proof. Let C be the disjunction of atoms common to S and T, S' be the atoms in S and not in T, and T' be the atoms in T and not in S. Then by quasi-additivity, S' >e> T'. Let Q be the disjunction of atoms not in S and not in T. So not(S) is T' or Q, and not(T) is S' or Q. Since S' >e> T', then by quasi-additivity, (S' or Q) >e> (T' or Q), or not(T) >e> not(S). //

An Ordering Satisfying A1-A6 is a Partial Qualitative Probability

Lemma. If A, B, C, and D are simple or empty (containing no atoms except those that are false given the evidence) disjunctions, and there is no atom in common between A and B, nor any atom in common between C and D, then

A >e> B and C >e> D implies (A or C) >e> (B or D)

Proof.
If (B or D) implies (A or C), then the required ordering holds. Suppose that is not the case. If B is empty or D is empty, then the lemma is trivial. If A is empty, then B is empty, and if C is empty, then D is empty [A3]. Suppose none of them are empty. For orderings to be asserted, evidence must be non-nil. Let a, b, c, and d be the atoms such that p(e | atom) is greatest among atoms in A, B, C, and D respectively. WLOG, suppose that p(e | a) >= p(e | c). Let AC and BD disjoin the atoms that are peculiar to (A or C) and (B or D) respectively. By A5 and theorem 3, f(e, c, d) >= q, and since the function is increasing in p(e | second argument), f(e, a, d) >= q. Since f(e, a, b) >= q as well, then a >e> [the atom in (B or D) with the greatest p(e | atom)]. By A5, a and d have different p(e | atom)'s, and so they must be distinct, and a must be distinct from all other atoms in D for the same reason; a is distinct from all atoms in B by hypothesis. So, a is in AC. BD is not empty, because we suppose no implication, so theorem 3 applies. The required ordering follows from A6. //

Note that the property proven in the lemma generally fails in conventional probabilistic reasoning systems. That it holds for systems satisfying A1-A6 is closely related to theorem 3 and the inference rule. In the absence of prior information or logical grounds to resolve the question, what matters in the comparison of sentences is the best-supported atom peculiar to each sentence. Thus, even though A or C may have atoms in common with B or D, this does not disrupt the ranking of their best-supported atoms (unless there are no atoms peculiar to each sentence, in which case, the order is logically determined).

Theorem 6. Any ordering satisfying assumptions A1-A6 is a partial qualitative probability.

Proof. Boundedness: Since false implies S, S >e> false by A3, and since S implies true, true >e> S by A3. Quasi-additivity: Assumption A6.
Transitivity: Let A be the disjunction of the atoms common to each of S, T, and U, B the atoms common to S and T alone, C those common to S and U alone, D those for T and U alone, and S*, T*, and U* those atoms unique to S, T, and U respectively. If e is nil, then T implies S and U implies T, so U implies S, and transitivity holds. Suppose, then, that e is not nil. Let a be the maximum conditional probability for e among the atoms of A, and b, c, d, s, t, and u be the corresponding quantities for B, C, D, S*, T*, and U*, respectively. By quasi-additivity, we have S >e> T implies C or S* >e> D or T*. Similarly, T >e> U implies B or T* >e> C or U*. We wish to show that B or S* >e> D or U*. Since C or S* and D or T* have no atom in common, and nor do B or T* and C or U*, we apply the lemma to get

C or S* or B or T* >e> D or T* or C or U*

which by quasi-additivity simplifies to

B or S* >e> D or U*

as required. //

A Note on A5

In the assumption, we required that p(e | s) be strictly greater than p(e | t) in order for s >e> t to hold when p(e | s) is positive. We now present an example where if A5 called for a weak inequality, then the resulting ordering would fail to be a partial qualitative probability. All letters are as in the transitivity portion of the proof of the theorem of the last section, and once again, we have S >e> T and T >e> U. An assignment of values for the atomic conditional probabilities consistent with this, and the modification of A5 to allow ordering assertions on weak inequalities, is:

d = .5, s = .4, u = .5, b = .4, t = .6, and c = .6

It is easy to confirm that under a weak inequality rule, C or S* >e> D or T*, the quasi-additive condition for S >e> T, and B or T* >e> C or U*, the condition for T >e> U. If the ordering is transitive, then S >e> U, and if it is quasi-additive, then B or S* >e> D or U*, so by theorem 3, it must be that either b or s is no smaller than both d and u. Neither is the case, since b is less than d or u, and so is s.
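As an illustration, the two-step decision procedure for simple disjunctions (drop the shared atoms, then compare the best-supported remaining atom on each side via the inference rule) can be sketched in Python. This is only a sketch: the likelihood table, the function f, and the threshold q are the believer's choices under A5, and the particular f and q used below are assumptions for illustration.

```python
def orders(S, T, likelihood, f, q):
    """Decide S >e> T for simple disjunctions S and T, given as sets of
    atoms. likelihood[atom] = p(e | atom); f and q are the believer's
    choices from assumption A5 (assumed here for illustration)."""
    S_star, T_star = S - T, T - S          # step (1): drop common atoms
    if not T_star:                         # T* empty: the ordering holds
        return True
    if not S_star:                         # S* empty, T* not: it does not
        return False
    s = max(S_star, key=likelihood.get)    # best-supported atom in S*
    t = max(T_star, key=likelihood.get)    # best-supported atom in T*
    return f(likelihood[s], likelihood[t]) >= q   # inference rule
```

With f taken as a likelihood ratio and q = 2, for example, {a, b} >e> {b, c} holds exactly when p(e | a) is at least twice p(e | c), since b is common to both sides and drops out.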
Extraction of Priors from Beliefs

A partial qualitative probability ordering possesses many of the intuitively appealing properties of a probability distribution. Nevertheless, it lacks the coherence of beliefs thought to be demanded in practical decision making problems, and provided by probability distributions (Lindley 1982), or in weaker form by set estimates (Kyburg and Pittarelli 1992). Faced with a similar conflict between the demands of modeling beliefs with Dempster-Shafer-style belief functions and the demands of coherence in action under risk, Smets (et al. 1991; Dubois et al. 1993) has proposed a two-tier system of belief representation, his "Transferable Belief Model". Up until action is required, beliefs are represented by the less-than-fully coherent D-S formalism (Smets' "credal" phase). Once action is called for, the original formalism is mapped onto a probability distribution, and that probability is used for decision making (Smets' "pignistic" phase). Once called into action, the probability distribution is also subject to revision in the face of further evidence using Bayesian methods. At some point, therefore, the user of the ignorance representation may find it expedient to convert the orderings revealed by the evidence into an ordinary probability estimate, to use that estimate for decision making, and to apply further evidence to it using Bayes' theorem in the usual way. Because of the restricted form of possible orderings consistent with theorem 1 and partial qualitative probability, it is quite tractable to use the asserted orderings to derive a useful "surrogate" probability distribution when the number of atoms in the domain is finite. It is generally impossible to have a truly agreeing single probability distribution, i.e., some distribution in which p(S | e) >= p(T | e) if and only if S >e> T. That's because any probability distribution is a complete ordering, rather than the partial ordering that arises from the assumptions.
But it is easy to compute a probability distribution where for every S >e> T, the probabilities are ordered p(S | e) >= p(T | e). The permissible orderings entail a single system of simultaneous linear constraints, each (apart from the total probability constraint) either of the form

p(s) >= c (for atoms s where there is no distinct atom t such that s >e> t)

where c is a non-negative constant which doesn't depend on the atom s, or else of the form

p(s) >= sum of the p(s') (for atoms s where there is one or more t such that s >e> t)

where the summation is over all atoms s' such that s >e> s'. Since any atom s is ordered ahead of the disjunction of all the atoms s' such that s >e> s', the system has exactly one more non-redundant constraint than the number of atoms in the domain (the single total probability constraint is the extra constraint). In order for the system to be consistent, that is, to have any solution, the constant c is bounded above by some positive quantity. It is easy to show that if c is chosen to equal that upper bound, then the system has a unique solution. The following algorithm computes the permissible upper bound on c and the associated unique solution to the system with effort that is linear in the number of atoms under discussion.

Algorithm for Computing Maximal c and Corresponding Solution

For N atoms, establish arrays:

Weight [1..N]: For each atom, the multiple of c that satisfies the order constraints
Runsum [1..N]: For atom indexed I, the sum of Weight[1] through Weight[I]
Prob [1..N]: The conditional probabilities for the evidence given each atom

and scalar quantities:

Index: As the name implies, an index
Cutoff: An index, the least value where f(Prob[Index], Prob[Cutoff]) < q
Last: The value of Runsum[Cutoff - 1], or 1 if Cutoff = 1

BEGIN
1. Sort Prob [] in ascending order.
2. Initialize Cutoff = Last = Weight[1] = Runsum[1] = 1.
3. for Index = 2..N
      while f(Prob[Index], Prob[Cutoff]) >= q
         Last := Runsum[Cutoff]
         Cutoff := Cutoff + 1
      end while
      Weight[Index] := Last
      Runsum[Index] := Weight[Index] + Runsum[Index - 1]
   end for
4. The maximum possible value of c is 1 / Runsum[N]; if c is set to this maximum, then the unique solution of the linear system is p(Index) = Weight[Index] / Runsum[N].
END

Since Cutoff always increases in value, and never exceeds Index, it is easy to confirm that the effort required by the above algorithm is linear in the number of atoms.

Choosing Other Values for c

The single-solution, maximum-c approach is computationally simple, and places the least possible burden on subsequent evidence to overcome the low probability value assigned to the least favored atoms should one of them turn out to be true. On the other hand, smaller values of c may be preferred. In that case, the constraints describe a convex set of probability distributions, a set which contains all probability distributions which display all of the orderings asserted by the partial qualitative probability. One reason for preferring a lower value of the constant c might be that the user prefers to use some particular other single probability distribution, for example, the maximum entropy distribution over all distributions consistent with the asserted ordering constraints. Such a distribution can be found using numerical or analytical optimization methods over the system with c = 0. Again, the simple form of the solution set, whether described in vertex or constraint form, should be an asset in searching for a congenial probability distribution. (There are exactly as many vertices as there are atoms, and the vertices are simple to enumerate using the information about Weight[] and Runsum[] produced by the algorithm of the last section.) Another occasion for choosing a smaller c is when the user is content to represent beliefs for decision and action in convex set form.
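A runnable rendering of the maximal-c algorithm may make the bookkeeping clearer. This is a sketch under assumptions: indices are zero-based, f is taken two-argument as in the pseudocode (the evidence is implicit), f and q remain the believer's choices, and the returned probabilities are listed in ascending order of p(e | atom) rather than in the original atom order.

```python
def maximal_c_solution(prob, f, q):
    """Compute the maximal constant c and the unique solution of the
    ordering constraints, in time linear in the number of atoms.
    prob[i] = p(e | atom i); returns (c, probabilities) with the
    probabilities in ascending order of prob."""
    p = sorted(prob)                        # step 1: ascending order
    n = len(p)
    weight, runsum = [0.0] * n, [0.0] * n
    cutoff, last = 0, 1.0                   # step 2 (zero-based)
    weight[0] = runsum[0] = 1.0
    for i in range(1, n):                   # step 3
        while f(p[i], p[cutoff]) >= q:      # atom i ordered above cutoff
            last = runsum[cutoff]
            cutoff += 1
        weight[i] = last                    # multiple of c for atom i
        runsum[i] = weight[i] + runsum[i - 1]
    c = 1.0 / runsum[-1]                    # step 4: maximal c
    return c, [w * c for w in weight]
```

For example, with four atoms whose evidence likelihoods are 0.1, 0.2, 0.4, and 0.8, a ratio f and q = 2 order each atom above every strictly less likely one; the maximal c is 1/8, and the surrogate probabilities come out 0.125, 0.125, 0.25, and 0.5, with each better-supported atom carrying exactly the weight of all the atoms ordered below it.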
Although the convex set formalism lacks the full coherence of a singleton distribution, there is a considerable and growing literature which suggests methods for using convex sets in decision (see, for example, Sterling and Morrell 1991 for a review). Because of the small number of vertices, revision of the convex set in the light of further evidence is tractable (Levi 1980), and as with any convex set, revision can also be performed by a transformation of the system's coefficients (Snow 1991). It can be shown that there are positive values of c such that the convex set represents only the orderings asserted by the partial qualitative probability. Among these, the largest such value will ordinarily be preferred since that choice places the least burden on subsequent evidence to reveal the truth of the least favored atoms should that happen to be necessary. Finding the largest such c requires about the same effort as enumerating the vertices with a known c, that is, order N^2. A full treatment, however, is beyond the scope of this paper.

Conclusions

Assumptions A1-A6 describe an intuitively appealing way that evidence can overcome initial ignorance. Although the mechanism is Bayesian, in that conditional probabilities are compared, there are no prior probabilities. Nevertheless, the inferences that arise from the assumptions retain some of the characteristics of probability distributions, including complementarity, and if normatively coherent behavior in gambling is required, then probabilities can be computed on demand. Query handling and the calculation of coherent probabilities are both computationally inexpensive.

References

Berger, J. O. and D. A. Berry, Statistical analysis and the illusion of objectivity, American Scientist 76, 159-165, 1988.

de Finetti, B., La prévision, ses lois logiques, ses sources subjectives, Annales de l'Institut Henri Poincaré 7, 1-68, 1937 (English translation by H. E. Kyburg, Jr. in H. E.
Smokler, eds., Studies in Subjective Probability, New York: Wiley, 1964).

Dubois, D., H. Prade, and P. Smets, Representing partial ignorance, Workshop on Higher Order Uncertainty, George Mason University, July 1993.

Kyburg, H. E., Jr. and M. Pittarelli, Some problems for convex Bayesians, in D. Dubois, M. P. Wellman, B. D'Ambrosio, and P. Smets (eds.), Uncertainty in Artificial Intelligence, San Mateo, CA: Morgan Kaufmann, 149-154, 1992.

Levi, I., The Enterprise of Knowledge, Cambridge, MA: MIT Press, 1980.

Lindley, D. V., Scoring rules and the inevitability of probability, International Statistical Review 50, 1-26 (with commentaries), 1982.

Prade, H., A computational approach to approximate and plausible reasoning with applications to expert systems, IEEE Transactions on Pattern Analysis and Machine Intelligence 7, 260-283, 1985.

Shenoy, P. P., Modeling ignorance in uncertainty theories, Workshop on Higher Order Uncertainty, 1993.

Smets, P., Y-T. Hsia, A. Saffiotti, R. Kennes, H. Xu, and E. Umkehrer, The transferable belief model, in R. Kruse and P. Siegel (eds.), Symbolic and Quantitative Approaches to Uncertainty, Berlin: Springer-Verlag, Lecture Notes in Computer Science 548, 91-96, 1991.

Snow, P., Improved posterior probability estimates from prior and conditional linear constraint systems, IEEE Transactions on Systems, Man, and Cybernetics 21, 464-469, 1991.

Sterling, W. C. and D. R. Morrell, Convex Bayes decision theory, IEEE Transactions on Systems, Man, and Cybernetics 21, 173-183, 1991.
The Hazards of Fancy Backtracking

Andrew B. Baker*
Computational Intelligence Research Laboratory
1269 University of Oregon
Eugene, Oregon 97403
baker@cs.uoregon.edu

Abstract

There has been some recent interest in intelligent backtracking procedures that can return to the source of a difficulty without erasing the intermediate work. In this paper, we show that for some problems it can be counterproductive to do this, and in fact that such "intelligence" can cause an exponential increase in the size of the ultimate search space. We discuss the reason for this phenomenon, and we present one way to deal with it.

1 Introduction

We are interested in systematic search techniques for solving constraint satisfaction problems. There has been some recent work on intelligent backtracking procedures that can return to the source of a difficulty without erasing the intermediate work. In this paper, we will argue that these procedures have a substantial drawback, but first let us see why they might make sense. Consider an example from (Ginsberg 1993). Suppose we are coloring a map of the United States (subject to the usual constraint that only some fixed set of colors may be used, and adjacent states cannot be the same color). Let us assume that we first color the states along the Mississippi, thus dividing the rest of the problem into two independent parts. We now color some of the western states, then we color some eastern states, and then we return to the west. Assume further that upon our return to the west we immediately get stuck: we find a western state that we cannot color. What do we do? Ordinary chronological backtracking (depth-first search) would backtrack to the most recent decision, but this would be a state east of the Mississippi and hence irrelevant; the search procedure would only address the real problem after trying every possible coloring for the previous eastern states.
*This work has been supported by the Air Force Office of Scientific Research under grant number 92-0693 and by ARPA/Rome Labs under grant numbers F30602-91-C-0036 and F30602-93-C-00031.

288 Constraint Satisfaction

Backjumping (Gaschnig 1979) is somewhat more intelligent; it would immediately jump back to some state adjacent to the one that we cannot color. In the process of doing this, however, it would erase all the intervening work, i.e., it would uncolor the whole eastern section of the country. This is unfortunate; it means that each time we backjump in this fashion, we will have to start solving the eastern subproblem all over again. Ginsberg has recently introduced dynamic backtracking (Ginsberg 1993) to address this difficulty. In dynamic backtracking, one moves to the source of the problem without erasing the intermediate work. Of course, simply retaining the values of the intervening variables is not enough; if these values turn out to be wrong, we will need to know where we were in the search space so that we can continue the search systematically. In order to do this, dynamic backtracking accumulates nogoods to keep track of portions of the space that have been ruled out. Taken to an extreme, this would end up being very similar to dependency-directed backtracking (Stallman & Sussman 1977). Although dependency-directed backtracking does not save intermediate values, it saves enough dependency information for it to quickly recover its position in the search space. Unfortunately, dependency-directed backtracking saves far too much information. Since it learns a new nogood from every backtrack point, it generally requires an exponential amount of memory, and for each move in the search space, it may have to wade through a great many of these nogoods. Dynamic backtracking, on the other hand, only keeps nogoods that are "relevant" to the current position in the search space.
It not only learns new nogoods; it also throws away those old nogoods that are no longer applicable. Dynamic backtracking, then, would seem to be a happy medium between backjumping and full dependency-directed backtracking. Furthermore, Ginsberg has presented empirical evidence that dynamic backtracking outperforms backjumping on the problem of solving crossword puzzles (Ginsberg 1993). Unfortunately, as we will soon see, dynamic backtracking has problems of its own. The plan of the paper is as follows. The next section reviews the details of dynamic backtracking. Section 3 describes an experiment comparing the performance of dynamic backtracking with that of depth-first search and backjumping on a problem class that has become somewhat of a standard benchmark. We will see that dynamic backtracking is worse by a factor exponential in the size of the problem. Note that this will not be simply the usual complaint that intelligent search schemes often have a lot of overhead. Rather, our complaint will be that the effective search space itself becomes larger; even if dynamic backtracking could be implemented without any additional overhead, it would still be far less efficient than the other algorithms. Section 4 contains both our analysis of what is going wrong with dynamic backtracking and an experiment consistent with our view. In Section 5, we describe a modification to dynamic backtracking that appears to fix the problem. Concluding remarks are in Section 6.

2 Dynamic backtracking

Let us begin by reviewing the definition of a constraint satisfaction problem, or CSP.

Definition 1 A constraint satisfaction problem (V, D, C) is defined by a finite set of variables V, a finite set of values D_v for each v ∈ V, and a finite set of constraints C, where each constraint (W, P) ∈ C consists of a list of variables W = (w1, . . .
, wk) ⊆ V and a predicate on these variables P ⊆ D_w1 × ... × D_wk. A solution to the problem is a total assignment f of values to variables, such that for each v ∈ V, f(v) ∈ D_v, and for each constraint ((w1, ..., wk), P), (f(w1), ..., f(wk)) ∈ P.

Like depth-first search, dynamic backtracking works with partial solutions; a partial solution to a CSP is an assignment of values to some subset of the variables, where the assignment satisfies all of the constraints that apply to this particular subset. The algorithm starts by initializing the partial solution to have an empty domain, and then it gradually extends this solution. As the algorithm proceeds, it will derive new constraints, or "nogoods," that rule out portions of the search space that contain no solutions. Eventually, the algorithm will either derive the empty nogood, proving that the problem is unsolvable, or it will succeed in constructing a total solution that satisfies all of the constraints. We will always write the nogoods in directed form; e.g.,

(v1 = q1) ∧ ... ∧ (vk-1 = qk-1) ⇒ vk ≠ qk

tells us that variables v1 through vk cannot simultaneously have the values q1 through qk respectively. The main innovation of dynamic backtracking (compared to dependency-directed backtracking) is that it only retains nogoods whose left-hand sides are currently true. That is to say that if the above nogood were stored, then v1 through vk-1 would have to have the indicated values (and since the current partial solution has to respect the nogoods as well as the original constraints, vk would either have some value other than qk or be unbound). If at some point, one of the left-hand variables were changed, then the nogood would have to be deleted since it would no longer be "relevant." Because of this relevance requirement, it is easy to compute the currently permissible values for any variable.
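The relevance bookkeeping just described can be sketched as follows. The nogood representation used here, a triple of an antecedent mapping, a right-hand variable, and its excluded value, is an assumption of this sketch, not Ginsberg's notation:

```python
def permissible_values(var, domain, nogoods, f):
    """Return the values of `var` not excluded by any relevant nogood.
    Each nogood is (lhs, v, val), read as: lhs => v != val, where lhs
    maps variables to values. A nogood is relevant when every left-hand
    assignment holds in the current partial assignment f. (Dynamic
    backtracking keeps only relevant nogoods, so here the relevance
    test simply filters out anything stale.)"""
    ruled_out = {val for lhs, v, val in nogoods
                 if v == var
                 and all(f.get(x) == q for x, q in lhs.items())}
    return [val for val in domain if val not in ruled_out]
```

For instance, a nogood whose left-hand side mentions an unbound variable is not relevant and does not restrict the domain.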
Furthermore, if all of the values for some variable are eliminated by nogoods, then one can resolve these nogoods together to generate a new nogood. For example, assuming that Dv4 = {1, 2}, we could resolve

(v1 = a) ∧ (v3 = c) ⇒ v4 ≠ 1
(v2 = b) ∧ (v3 = c) ⇒ v4 ≠ 2

to obtain

(v1 = a) ∧ (v2 = b) ⇒ v3 ≠ c

In order for our partial solution to remain consistent with the nogoods, we would have to simultaneously unbind v3. This corresponds to backjumping from v4 to v3, but without erasing any intermediate work. Note that we had to make a decision about which variable to put on the right-hand side of the new nogood. The rule of dynamic backtracking is that the right-hand variable must always be the one that was most recently assigned a value; this is absolutely crucial, as without this restriction, the algorithm would not be guaranteed to terminate.

The only thing left to mention is how nogoods get acquired in the first place. Before we try to bind a new variable, we will check the consistency of each possible value¹ for this variable with the values of all currently bound variables. If a constraint would be violated, we write the constraint as a directed nogood with the new variable on the right-hand side.

We have now reviewed all the major ideas of dynamic backtracking, so we will give the algorithm below in a somewhat informal style. For the precise mathematical definitions, see (Ginsberg 1993).

Procedure DYNAMIC-BACKTRACKING
1. Initialize the partial assignment f to have the empty domain, and the set of nogoods Γ to be the empty set. At all times, f will satisfy the nogoods in Γ as well as the original constraints.
2. If f is a total assignment, then return f as the answer. Otherwise, choose an unassigned variable v and for each possible value of this variable that would cause a constraint violation, add the appropriate nogood to Γ.
3. If variable v has some value x that is not ruled out by any nogood, then set f(v) = x, and return to step 2.
¹A value is possible if it is not eliminated by a nogood.

Advances in Backtracking 289

4. Each value of v violates a nogood. Resolve these nogoods together to generate a new nogood that does not mention v. If it is the empty nogood, then return "unsatisfiable" as the answer. Otherwise, write it with its chronologically most recent variable (say, w) on the right-hand side, add this directed nogood to Γ, and call ERASE-VARIABLE(w). If each value of w now violates a nogood, then set v = w and return to step 4; otherwise, return to step 2.

Procedure ERASE-VARIABLE(w)
1. Remove w from the domain of f.
2. For each nogood γ ∈ Γ whose left-hand side mentions w, call DELETE-NOGOOD(γ).

Procedure DELETE-NOGOOD(γ)
1. Remove γ from Γ.

Each variable-value pair can have at most one nogood at a given time, so it is easy to see that the algorithm only requires a polynomial amount of memory. In (Ginsberg 1993), it is proven that dynamic backtracking always terminates with a correct answer. This is the theory of dynamic backtracking. How well does it do in practice?

3 Experiments

To compare dynamic backtracking with depth-first search and backjumping, we will use randomly-generated propositional satisfiability problems, or to be more specific, random 3-SAT problems with n variables and m clauses.² Since a SAT problem is just a Boolean CSP, the above discussion applies directly. Each clause will be chosen independently using the uniform distribution over the (n choose 3) · 2³ non-redundant 3-literal clauses. It turns out that the hardest random 3-SAT problems appear to arise at the "crossover point" where the ratio of clauses to variables is such that about half the problems are satisfiable (Mitchell, Selman, & Levesque 1992); the best current estimate for the location of this crossover point is at m = 4.24n + 6.21 (Crawford & Auton 1993).
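For concreteness, random crossover-point 3-SAT instances of this kind can be generated as below (our sketch, not the paper's code; a clause is a frozenset of signed integers, with literal i > 0 for variable i and -i for its negation). The clause counts reproduce m = 4.24n + 6.21 rounded to the nearest integer.

```python
import random

# Sketch of random 3-SAT generation at the estimated crossover point.
# Clauses are drawn independently: three distinct variables, each with
# a random sign, which is uniform over the non-redundant 3-literal clauses.

def random_3sat(n, rng=random):
    m = round(4.24 * n + 6.21)
    clauses = []
    while len(clauses) < m:
        vars_ = rng.sample(range(1, n + 1), 3)   # three distinct variables
        clause = frozenset(v if rng.random() < 0.5 else -v for v in vars_)
        clauses.append(clause)
    return clauses
```

For n = 10 through 60 this yields 49, 91, 133, 176, 218, and 261 clauses respectively, matching the counts used in the experiments below.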
Several recent authors have used these crossover-point 3-SAT problems to measure the performance of their algorithms (Crawford & Auton 1993; Selman, Levesque, & Mitchell 1992).

In the dynamic backtracking algorithm, step 2 leaves open the choice of which variable to select next; backtracking and backjumping have similar indeterminacies. We used the following variable-selection heuristics:

1. If there is an unassigned variable with one of its two values currently eliminated by a nogood, then choose that variable.
2. Otherwise, if there is an unassigned variable that appears in a clause in which all the other literals have been assigned false, then choose that variable.
3. Otherwise, choose the unassigned variable that appears in the most binary clauses. A binary clause is a clause in which exactly two literals are unvalued, and all the rest are false.³

²Each clause in a 3-SAT problem is a disjunction of three literals. A literal is either a propositional variable or its negation.

290 Constraint Satisfaction

The first heuristic is just a typical backtracking convention, and in fact is intrinsically part of depth-first search and backjumping. The second heuristic is unit propagation, a standard part of the Davis-Putnam procedure for propositional satisfiability (Davis, Logemann, & Loveland 1962; Davis & Putnam 1960). The last heuristic is also a fairly common SAT heuristic; see for example (Crawford & Auton 1993; Zabih & McAllester 1988). These heuristics choose variables that are highly constrained and constraining in an attempt to make the ultimate search space as small as possible.

For our experiments, we varied the number of variables n from 10 to 60 in increments of 10. For each value of n we generated random crossover-point problems⁴ until we had accumulated 100 satisfiable and 100 unsatisfiable instances. We then ran each of the three algorithms on the 200 instances in each problem set.
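The three heuristics can be sketched as one selection function; the clause and assignment representations here are our own illustrative choices, and the random tie-breaking used in the experiments is omitted (ties fall to the first candidate).

```python
# Sketch of the three variable-selection heuristics. Clauses are lists of
# signed integers; `assignment` maps a variable to True/False; `eliminated`
# maps a variable to the set of its values already ruled out by nogoods.

def false_count(clause, assignment):
    """Number of literals in the clause currently assigned false."""
    return sum(1 for lit in clause
               if assignment.get(abs(lit)) == (lit < 0))

def select_variable(n, clauses, assignment, eliminated):
    unassigned = [v for v in range(1, n + 1) if v not in assignment]
    # 1. Prefer a variable with one of its values eliminated by a nogood.
    for v in unassigned:
        if eliminated.get(v):
            return v
    # 2. Otherwise unit propagation: all other literals assigned false.
    for c in clauses:
        unvalued = [abs(l) for l in c if abs(l) not in assignment]
        if len(unvalued) == 1 and false_count(c, assignment) == len(c) - 1:
            return unvalued[0]
    # 3. Otherwise the variable appearing in the most binary clauses
    #    (exactly two unvalued literals, all the rest false).
    def binary_clauses_with(v):
        count = 0
        for c in clauses:
            unvalued = {abs(l) for l in c if abs(l) not in assignment}
            if (v in unvalued and len(unvalued) == 2
                    and false_count(c, assignment) == len(c) - 2):
                count += 1
        return count
    return max(unassigned, key=binary_clauses_with)
```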
The mean number of times that a variable is assigned a value is displayed in Table 1. Dynamic backtracking appears to be worse than the other two algorithms by a factor exponential in the size of the problem; this is rather surprising. Because of the lack of structure in these randomly-generated problems, we might not expect dynamic backtracking to be significantly better than the other algorithms, but why would it be worse?

This question is of more than academic interest. Some real-world search problems may turn out to be similar in some respects to the crossword puzzles on which dynamic backtracking does well, while being similar in other respects to these random 3-SAT problems, and as we can see from Table 1, even a small "random 3-SAT component" will be enough to make dynamic backtracking virtually useless.

4 Analysis

To understand what is going wrong with dynamic backtracking, consider the following abstract SAT example:

a → x    (1)
a ⇒ ¬a   (2)
¬a ⇒ b   (3)
b ⇒ c    (4)
c ⇒ d    (5)
x ⇒ ¬d   (6)

³On the very first iteration in a 3-SAT problem, there will not yet be any binary clauses, so instead choose the variable that appears in the most clauses overall.
⁴The numbers of clauses that we used were 49, 91, 133, 176, 218, and 261 respectively.

                    Average Number of Assignments
Variables | Depth-First Search | Backjumping | Dynamic Backtracking
       10 |                 20 |          20 |                   22
       20 |                 54 |          54 |                   94
       30 |                120 |         120 |                  643
       40 |                217 |         216 |                4,532
       50 |                388 |         387 |               31,297
       60 |                709 |         705 |              212,596

Table 1: A comparison using randomly-generated 3-SAT problems.

Formula (1) represents the clause ¬a ∨ x; we have written it in the directed form above to suggest how it will be used in our example. The remaining formulas correspond to groups of clauses; to indicate this, we have written them using the double arrow (⇒). Formula (2) represents some number of clauses that can be used to prove that a is contradictory.
Formula (3) represents some set of clauses showing that if a is false, then b must be true; similar remarks apply to the remaining formulas. These formulas will also represent the nogoods that will eventually be learned.

Imagine dynamic backtracking exploring the search space in the order suggested above. First it sets a true, and then it concludes x using unit resolution (and adds a nogood corresponding to (1)). Then after some amount of further search, it finds that a has to be false. So it erases a, adds the nogood (2), and then deletes the nogood (1) since it is no longer "relevant." Note that it does not delete the proposition x; the whole point of dynamic backtracking is to preserve this intermediate work. It will then set a false, and after some more search will learn nogoods (3)-(5), and set b, c and d true. It will then go on to discover that x and d cannot both be true, so it will have to add a new nogood (6) and erase d. The rule, remember, is that the most recently valued variable goes on the right-hand side of the nogood. Nogoods (5) and (6) are resolved together to produce the nogood

x ⇒ ¬c   (7)

where once again, since c is the most recent variable, it must be the one that is retracted and placed on the right-hand side of the nogood; and when c is retracted, nogood (5) must be deleted also. Continuing in this fashion, dynamic backtracking will derive the nogoods

x ⇒ ¬b   (8)
x ⇒ a    (9)

The values of b and a will be erased, and nogoods (4) and (3) will be deleted. Finally, (2) and (9) will be resolved together producing

⇒ ¬x   (10)

The value of x will be erased, nogoods (6)-(9) will be deleted, and the search procedure will then go on to rediscover (3)-(5) all over again.

By contrast, backtracking and backjumping would erase x before (or at the same time as) erasing a. They could then proceed to solve the rest of the problem without being encumbered by this leftover inference.
It might help to think of this in terms of search trees even though dynamic backtracking is not really searching a tree. By failing to retract x, dynamic backtracking is in a sense choosing to "branch" on x before branching on a through d. This virtually doubles the size of the ultimate search space.

This example has been a bit involved, and so far it has only demonstrated that it is possible for dynamic backtracking to be worse than the simpler methods; why would it be worse in the average case? The answer lies in the heuristics that are being used to guide the search. At each stage, a good search algorithm will try to select the variable that will make the remaining search space as small as possible. The appropriate choice will depend heavily on the values of previous variables. Unit propagation, as in equation (1), is an obvious example: if a is true, then we should immediately set x true as well; but if a is false, then there is no longer any particular reason to branch on x. After a is unset, our variable-selection heuristic would most likely choose to branch on a variable other than a; branching on x anyway is tantamount to randomly corrupting this heuristic. Now, dynamic backtracking does not really "branch" on variables since it has the ability to jump around in the search space. As we have seen, however, the decision not to erase x amounts to the same thing. In short, the leftover work that dynamic backtracking tries so hard to preserve often does more harm than good because it perpetuates decisions whose heuristic justifications have expired.

This analysis suggests that if we were to eliminate the heuristics, then dynamic backtracking would no longer be defeated by the other search methods. Table 2 contains the results of such an experiment. It is important to note that all of the previously listed heuristics (including unit resolution!)
were disabled for the purpose of this experiment; at each stage, we simply chose the first unbound variable (using some fixed ordering). For each value of n listed, we used the same 200 random problems that were generated earlier.

                    Average Number of Assignments
Variables | Depth-First Search | Backjumping | Dynamic Backtracking
       10 |                 77 |          61 |                   51
       20 |              2,243 |         750 |                  478
       30 |             53,007 |       7,210 |                3,741

Table 2: The same comparison as Table 1, but with all variable-selection heuristics disabled.

                    Average Number of Assignments
Variables | Depth-First Search | Backjumping | Dynamic Backtracking
       10 |                 20 |          20 |                   20
       20 |                 54 |          54 |                   53
       30 |                120 |         120 |                  118
       40 |                217 |         216 |                  209
       50 |                388 |         387 |                  375
       60 |                709 |         705 |                  672

Table 3: The same comparison as Table 1, but with dynamic backtracking modified to undo unit propagation when it backtracks.

The results in Table 2 are as expected. All of the algorithms fare far worse than before, but at least dynamic backtracking is not worse than the others. In fact, it is a bit better than backjumping and substantially better than backtracking. So given that there is nothing intrinsically wrong with dynamic backtracking, the challenge is to modify it in order to reduce or eliminate its negative interaction with our search heuristics.

5 Solution

We have to balance two considerations. When backtracking, we would like to preserve as much nontrivial work as possible. On the other hand, we do not want to leave a lot of "junk" lying around whose main effect is to degrade the effectiveness of the heuristics. In general, it is not obvious how to strike the appropriate balance. For the propositional case, however, there is a simple modification that seems to help, namely, undoing unit propagation when backtracking. We will need the following definition:

Definition 2 Let v be a variable (in a Boolean CSP) that is currently assigned a value. A nogood whose conclusion eliminates the other value for v will be said to justify this assignment.
If a value is justified by a nogood, and this nogood is deleted at some point, then the value should be erased as well. Selecting the given value was once a good heuristic decision, but now that its justification has been deleted, the value would probably just get in the way. Therefore, we will rewrite DELETE-NOGOOD as follows, and leave the rest of dynamic backtracking intact:

Procedure DELETE-NOGOOD(γ)
1. Remove γ from Γ.
2. For each variable w justified by γ, call ERASE-VARIABLE(w).

Note that ERASE-VARIABLE calls DELETE-NOGOOD in turn; the two procedures are mutually recursive. This corresponds to the possibility of undoing a cascade of unit resolutions. Like Ginsberg's original algorithm, this modified version is sound and complete, uses only polynomial space, and can solve the union of several independent problems in time proportional to the sum of that required for the original problems.

We ran this modified procedure on the same experiments as before, and the results are in Table 3. Happily, dynamic backtracking no longer blows up the search space. It does not do much good either, but there may well be other examples for which this modified version of dynamic backtracking is the method of choice.

How will this apply to non-Boolean problems? First of all, for non-Boolean CSPs, the problem is not quite as dire. Suppose a variable has twenty possible values, all but two of which are eliminated by nogoods. Suppose further that on this basis, one of the remaining values is assigned to the variable. If one of the eighteen nogoods is later eliminated, then the variable will still have but three possibilities and will probably remain a good choice. It is only in the Boolean problems that an assignment can go all the way from being totally justified to totally unjustified with the deletion of a single nogood.
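The modified procedure is small enough to sketch in full. The data structures below (a nogood recording the variables on its left-hand side, plus a map from each nogood to the assignments it justifies) are our own illustration; the point is the mutual recursion, which lets one retraction unwind a whole cascade of unit propagations.

```python
# Sketch of the modified DELETE-NOGOOD and ERASE-VARIABLE pair.

class Nogood:
    def __init__(self, lhs_vars, rhs_var):
        self.lhs_vars = frozenset(lhs_vars)  # variables on the left-hand side
        self.rhs_var = rhs_var               # variable whose value it rules out

def erase_variable(w, assignment, nogoods, justifies):
    """Unbind w and delete every nogood whose left-hand side mentions w."""
    assignment.pop(w, None)
    for ng in [ng for ng in nogoods if w in ng.lhs_vars]:
        delete_nogood(ng, assignment, nogoods, justifies)

def delete_nogood(ng, assignment, nogoods, justifies):
    if ng in nogoods:
        nogoods.remove(ng)
        # New step 2: also erase each variable that this nogood justified,
        # which recursively undoes the unit propagations built on it.
        for v in justifies.get(ng, []):
            erase_variable(v, assignment, nogoods, justifies)
```

For example, if one nogood justifies b from a, and a second justifies c from b, then erasing a deletes the first nogood, which erases b, which deletes the second and erases c.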
Nonetheless, in experiments by Jonsson and Ginsberg it was found that dynamic backtracking often did worse than depth-first search when coloring random graphs (Jonsson & Ginsberg 1993). Perhaps some variant of our new method would help on these problems. One idea would be to delete a value if it loses a certain number (or percentage) of the nogoods that once supported it.

6 Conclusion

Although we have presented this research in terms of Ginsberg's dynamic backtracking algorithm, the implications are much broader. Any systematic search algorithm that learns and forgets nogoods as it moves laterally through a search space will have to address, in some way or another, the problem that we have discussed. The fundamental problem is that when a decision is retracted, there may be subsequent decisions whose justifications are thereby undercut. While there is no logical reason to retract these decisions as well, there may be good heuristic reasons for doing so.

On the other hand, the solution that we have presented is not the only one possible, and it is probably not the best one either. Instead of erasing a variable that has lost its heuristic justification, it would be better to keep the value around, but in the event of a contradiction remember to backtrack on this variable instead of a later one. With standard dynamic backtracking, however, we do not have this option; we always have to backtrack on the most recent variable in the new nogood. Ginsberg and McAllester have recently developed partial-order dynamic backtracking (Ginsberg & McAllester 1994), a variant of dynamic backtracking that relaxes this restriction to some extent, and it might be interesting to explore some of the possibilities that this more general method makes possible.

Perhaps the main purpose of this paper is to sound a note of caution with regard to the new search algorithms.
Ginsberg claims in one of his theorems that dynamic backtracking "can be expected to expand fewer nodes than backjumping provided that the goal nodes are distributed randomly in the search space" (Ginsberg 1993). In the presence of search heuristics, this is false. For example, the goal nodes in unsatisfiable 3-SAT problems are certainly randomly distributed (since there are not any goal nodes), and yet standard dynamic backtracking can take orders of magnitude longer to search the space. Therefore, while there are some obvious benefits to the new backtracking techniques, the reader should be aware that there are also some hazards.

Acknowledgments

I would like to thank all the members of CIRL, and especially Matthew Ginsberg and James Crawford, for many useful discussions.

References

Crawford, J. M., and Auton, L. D. 1993. Experimental results on the crossover point in satisfiability problems. In Proceedings of the Eleventh National Conference on Artificial Intelligence, 21-27.

Davis, M., and Putnam, H. 1960. A computing procedure for quantification theory. Journal of the Association for Computing Machinery 7:201-215.

Davis, M.; Logemann, G.; and Loveland, D. 1962. A machine program for theorem-proving. Communications of the ACM 5:394-397.

Gaschnig, J. 1979. Performance measurement and analysis of certain search algorithms. Technical Report CMU-CS-79-124, Carnegie-Mellon University.

Ginsberg, M. L., and McAllester, D. A. 1994. GSAT and dynamic backtracking. In Proceedings of the Fourth International Conference on Principles of Knowledge Representation and Reasoning.

Ginsberg, M. L. 1993. Dynamic backtracking. Journal of Artificial Intelligence Research 1:25-46.

Jonsson, A. K., and Ginsberg, M. L. 1993. Experimenting with new systematic and nonsystematic search procedures. In Proceedings of the AAAI Spring Symposium on AI and NP-Hard Problems.

Mitchell, D.; Selman, B.; and Levesque, H. 1992. Hard and easy distributions of SAT problems.
In Proceedings of the Tenth National Conference on Artificial Intelligence, 459-465.

Selman, B.; Levesque, H.; and Mitchell, D. 1992. A new method for solving hard satisfiability problems. In Proceedings of the Tenth National Conference on Artificial Intelligence, 440-446.

Stallman, R. M., and Sussman, G. J. 1977. Forward reasoning and dependency-directed backtracking in a system for computer-aided circuit analysis. Artificial Intelligence 9:135-196.

Zabih, R., and McAllester, D. 1988. A rearrangement search strategy for determining propositional satisfiability. In Proceedings of the Seventh National Conference on Artificial Intelligence, 155-160.
Dead-end driven learning*

Daniel Frost and Rina Dechter
Dept. of Information and Computer Science
University of California, Irvine, CA 92717
{dfrost,dechter}@ics.uci.edu

Abstract

The paper evaluates the effectiveness of learning for speeding up the solution of constraint satisfaction problems. It extends previous work (Dechter 1990) by introducing a new and powerful variant of learning and by presenting an extensive empirical study on much larger and more difficult problem instances. Our results show that learning can speed up backjumping when using either a fixed or dynamic variable ordering. However, the improvement with a dynamic variable ordering is not as great, and for some classes of problems learning is helpful only when a limit is placed on the size of new constraints learned.

1. Introduction

Our goal in this paper is to study the effect of learning in speeding up the solution of constraint problems. The function of learning in problem solving is to record in a useful way some information which is explicated during the search, so that it can be reused either later on the same problem instance, or on similar instances which arise subsequently. The approach we take involves a during-search transformation of the problem representation into one that may be searched more effectively. This is done by enriching the problem description by new constraints (sometimes called nogoods), which do not change the set of solutions, but make certain information explicit. The idea is to learn from dead-ends: whenever a dead-end is reached we record a constraint explicated by the dead-end.

This type of learning has been presented in dependency-directed backtracking strategies in the TMS community (Stallman & Sussman 1977), and within intelligent backtracking for Prolog (Bruynooghe & Pereira 1984). Recently, it was treated more systematically by Dechter (1990) within the constraint network framework.
Different variants of learning were examined there, while taking into account the trade-off between the overhead of learning and performance improvement. The results, although preliminary, indicated that learning could be cost-effective.

*This work was partially supported by NSF grant IRI-9157636, by Air Force Office of Scientific Research grant AFOSR 900136 and by grants from Toshiba of America and Xerox.

The present study extends (Dechter 1990) in several ways. First, a new variant of learning, called jump-back learning, is introduced and is shown empirically to be superior to other types of learning. Secondly, we experiment with and without restrictions on the size of the constraints learned. Thirdly, we use a highly efficient version of backjumping as a comparison reference. Finally, our experiments use larger and harder problem instances than previously studied.

2. Definitions and Preliminaries

A Constraint Network consists of a set of n variables, X1, . . ., Xn; their respective value domains, D1, . . ., Dn; and a set of constraints. A constraint Ci(Xi1, . . ., Xij) is a subset of the Cartesian product Di1 × . . . × Dij, consisting of all tuples of values for a subset (Xi1, . . ., Xij) of the variables which are compatible with each other. A solution is an assignment of values to all the variables such that all the constraints are satisfied. Sometimes the goal is to find all solutions; in this paper, however, we focus on the task of finding one solution, or proving that no solution exists. A constraint satisfaction problem (CSP) can be associated with a constraint graph consisting of a node for each variable and an arc connecting each pair of variables that are contained in a constraint. A binary CSP is one in which each of the constraints involves at most two variables.

Backjumping

Many algorithms have been proposed for solving CSPs. See (Dechter 1992; Mackworth 1992) for reviews.
One algorithm that was shown always to dominate naive backtracking is backjumping (Gaschnig 1979; Dechter 1990). Like backtracking, backjumping considers each variable in some order and assigns to each successive variable a value from its domain which is consistent with the values assigned to the preceding variables. When a variable is encountered such that none of its possible values is consistent with previous assignments (a situation referred to as a dead-end), a backjump takes place.

Figure 1: A small CSP. Note that the disallowed pairs are shown on each arc.

The idea is to jump back over several irrelevant variables to a variable which is more directly responsible for the current conflict. The backjumping algorithm identifies a jump-back set, that is, a subset of the variables preceding the dead-end variable which are inconsistent with all its values, and continues search from the last variable in this set. If that variable has no untried values left, then a pseudo dead-end arises and further backjumping occurs.

Consider, for instance, the CSP represented by the graph in Fig. 1. Each node represents a variable that can take on a value from within the oval, and the binary constraint between connected variables is specified along the arcs by the disallowed value pairs. If the variables are ordered (X1, X5, X2, X3, X4) and a dead-end is reached at X4, the backjumping algorithm will jump back to X5, since X4 is not connected to X3 or X2.

The version of backjumping we use here is a combination of Gaschnig's (1979) backjumping and Dechter's (1990) graph-based backjumping, as proposed by Prosser (1993). Prosser calls the algorithm conflict-directed backjumping.
In this version, the jump-back set is created by recording, for each value v of V, the variable to be instantiated next, the first past variable (relative to the ordering) whose assigned value conflicts with V = v. The algorithm will be combined with both fixed and dynamic variable orderings.

Variable Ordering Heuristics

It is well known that variable ordering affects tremendously the size of the search space. In previous studies it has been shown that the min-width ordering is a very effective fixed ordering (Dechter & Meiri 1989), while dynamic variable ordering (Haralick & Elliott 1980; Purdom 1983; Zabih & McAllester 1988) frequently yields best performance. We incorporate both strategies in our experiments.

The minimum width (MW or min-width) heuristic (Freuder 1982) orders the variables from last to first by selecting, at each stage, a variable in the constraint graph that connects to the minimal number of variables that have not yet been selected. For instance, the ordering X1, X5, X2, X3, X4 is a min-width ordering of the graph in Fig. 1.

Dynamic variable ordering (DVO) allows the order of variables to change during search. The version we use selects at each point the variable with the smallest remaining domain size, when only values that are consistent with all instantiated variables are considered. Ties are broken randomly. The variable that participates in the most constraints is selected to be first in the ordering. If any future variable has an empty domain, then it is moved to be the next in the ordering, and a dead-end will occur on that variable. Otherwise, a variable with the smallest domain size is selected (similar to unit propagation in Boolean satisfiability problems).

3. Learning Algorithms

In a dead-end at Xi, when the current instantiation S = (X1 = x1, . . ., Xi-1 = xi-1) cannot be extended by any value of Xi, we say that S is a conflict set.
An opportunity to learn new constraints is presented whenever backjumping encounters a dead-end, since had the problem included an explicit constraint prohibiting the dead-end's conflict set, the dead-end would have been avoided. To learn at a dead-end, we record a new constraint which makes explicit an incompatibility among variable assignments that already existed implicitly. The trade-off involved is in possibly finding out earlier in the remaining search that a given path cannot lead to a solution, versus the cost of having to process a more extensive database of constraints.

There is no point in recording S as a constraint at this stage, because this state will not recur. However, if S contains one or more subsets that are also in conflict with Xi, then recording these smaller conflict sets as constraints may prove useful in the continued exploration of the search space because future states may contain these subsets. Different types of learning differ in the way they identify smaller conflict sets.

In (Dechter 1990) learning is characterized as being either deep or shallow. Deep learning only records minimal conflict sets, that is, those that do not have subsets which are conflict sets. Shallow learning allows recording non-minimal conflict sets as well. Learning can also be characterized by order, the maximum constraint size that is recorded. In (Dechter 1990) experiments were limited to recording unary and binary constraints, since constraints involving more variables are applicable less frequently, require more space to store, and are more expensive to consult.

In this paper we experiment with four types of learning: graph-based shallow learning, value-based shallow learning, and deep learning, already presented in (Dechter 1990), as well as a new type, called jump-back learning.

In value-based learning all irrelevant variable-value pairs are removed from the initial conflict set S.
If a variable-value pair Xj = xj doesn't conflict with any value of the dead-end variable then it is redundant and can be eliminated. For instance, if we try to solve the problem in Fig. 1 with the ordering (X1, X2, X3, X4, X5), after instantiating X1 = a, X2 = b, X3 = b, X4 = c, the dead-end at X5 will cause value-based learning to record (X1 = a, X2 = b, X4 = c), since the pair X3 = b is compatible with all values of X5. Since we can pre-compute in O(n²k) time a table that will tell us whether Xj = xj conflicts with any value of each other variable, the complexity of value-based learning at each dead-end is O(n).

Graph-based shallow learning is a relaxed version of value-based learning, where information on conflicts is derived from the constraint graph alone. This may be particularly useful on sparse graphs. For instance, in Fig. 1 graph-based shallow learning will record (X1 = a, X2 = b, X3 = b, X4 = c) as a conflict set relative to X5, since all variables are connected to X5. The complexity of learning at each dead-end here is O(n), since each variable is connected to at most n - 1 other variables.

Jump-back learning uses as the conflict set the jump-back set that is explicated by the backjumping algorithm itself. Recall that conflict-directed backjumping examines, starting from the first variable, each instantiated variable and includes it in the jump-back set if it conflicts with a value of the current variable that previously did not conflict with any variable. For instance in Fig. 1, when using the same ordering and reaching the dead-end at X5, jump-back learning will record (X1 = a, X2 = b) as a new constraint. These two variables are selected because the algorithm first looks at X1 = a and notes that it conflicts with X5 = z and X5 = y. Proceeding to X2 = b, the conflict with X5 = x is noted. At this point all values of X5 have been ruled out, and the conflict set is complete.
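The jump-back set construction described above can be sketched as follows (our own representation; the `conflicts` predicate stands in for the binary constraint checks, and the variable names mirror the Fig. 1 example only loosely).

```python
# Sketch of conflict-directed backjumping's jump-back set: scan the past
# variables in instantiation order, adding one whenever it rules out a
# value of the dead-end variable that no earlier variable had ruled out.

def jump_back_set(past, dead_end_var, domains, assignment, conflicts):
    """`past` lists instantiated variables in order; `conflicts(u, x, v, y)`
    tests whether u = x is incompatible with v = y."""
    remaining = set(domains[dead_end_var])
    jb = []
    for u in past:
        killed = {y for y in remaining
                  if conflicts(u, assignment[u], dead_end_var, y)}
        if killed:
            jb.append(u)
            remaining -= killed
        if not remaining:
            break          # every value of the dead-end variable ruled out
    return jb
```

Since the set is a byproduct of the backjumping machinery itself, recording it as a nogood adds only constant work per dead-end, as the text notes.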
Since the conflict set is already assembled by the underlying backjumping algorithm, the added complexity of computing the conflict set is constant.

In deep learning all and only minimal conflict sets are recorded. In Fig. 1, deep learning will record two minimal conflict sets, (X1 = a, X2 = b) and (X1 = a, X4 = c). Although this form of learning is the most accurate, its cost is prohibitive and in the worst case is exponential in the size of the initial conflict set (Dechter 1990).

4. Complexity of backtracking with learning

We will now show that graph-based learning yields a useful complexity bound on the algorithm performance, relative to a graph parameter known as w*. Since graph-based learning is the most conservative learning algorithm (when no order restrictions are imposed), the bound is applicable to all the variants of learning we discuss.

Figure 2: Empirically determined values of C that generate 50% solvable CSP instances. K = 3 for this data.

Given a constraint graph and a fixed ordering of the nodes d, the width of a node is the number of arcs that connect that node to previous ones, called its parents. The width of the graph relative to d is the maximum width of all nodes in the graph. The induced graph is created by considering each node in the original graph in order from last to first, and adding arcs connecting each of its parents to each other parent. The induced width of an ordering, w*(d), is the width of its induced graph.

Theorem 1: Let d be an ordering and let w*(d) be its induced width. Any backtrack algorithm using ordering d and graph-based learning has a space complexity of O((nk)^w*(d)) and a time complexity of O((2nk)^w*(d)).
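The induced-width definition above translates directly into code; a minimal sketch, assuming the constraint graph is given as an adjacency dictionary:

```python
# Compute w*(d): process nodes from last to first in ordering d, connect
# each node's earlier neighbors (its parents) pairwise, and track the
# largest parent set seen.

def induced_width(adjacency, order):
    """adjacency maps each node to a set of neighboring nodes."""
    adj = {v: set(ns) for v, ns in adjacency.items()}
    pos = {v: i for i, v in enumerate(order)}
    w = 0
    for v in reversed(order):
        parents = {u for u in adj[v] if pos[u] < pos[v]}
        w = max(w, len(parents))
        # Added arcs may widen nodes processed later (i.e., earlier in d).
        for u in parents:
            adj[u] |= parents - {u}
    return w
```

For example, a chain has induced width 1 under its natural ordering, while a 4-cycle has induced width 2.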
Therefore the number of dead-ends is bounded by

sum_{i=1}^{w*(d)} C(n, i) * k^i = O((nk)^{w*(d)}),

where C(n, i) is the binomial coefficient. This gives the space complexity. Since deciding that a dead-end occurred requires testing all constraints containing the dead-end variable and at most w*(d) prior variables, at most O(2^{w*(d)}) constraints are checked per dead-end, yielding a time bound of O((2nk)^{w*(d)}).

5. Methodology and Results

The experiments reported in this paper were run on random instances generated using a four-parameter model: N, K, T and C. The problem instances have N variables, each having a domain of size K. The problems we experiment with always start off as binary CSPs, but can become non-binary as constraints involving more than two variables are added by learning.

296 Constraint Satisfaction

                         No        Graph-     Value-      Jump-
 N   K  Statistic     Learning     based      based       back         Deep
 25  3  CC              16,930     30,636     29,185     10,203      117,556
        DE                 156        178        181         82           67
        CPU secs         0.048      0.083      0.077      0.032        0.325
        NGs                           178        181         82          153
        Size                         11.6        6.8        3.5          3.4
 25  6  CC             274,133  1,340,512  1,428,109    330,672   55,771,462
        DE               2,777      2,833      2,932      1,276          832
        CPU secs         0.777      2.067      2.183      0.667       78.283
        NGs                         2,833      2,932      1,276        1,894
        Size                         11.2       10.4        5.2          4.4
 50  3  CC             303,668  8,051,435  7,111,384    119,642   27,134,341
        DE               2,205      5,107      4,512        437          333
        CPU secs         1.298     11.492      9.913      0.367       44.788
        NGs                         5,107      4,512        437          654
        Size                         20.6       13.6        4.6          4.2

Figure 3: Detailed results of comparing backjumping with no learning to backjumping with each of four kinds of learning. T = 1/9 and C is set to the cross-over point. See the text for discussion.

The parameter T (tightness) specifies a fraction of the K^2 value pairs in each constraint that are disallowed by the constraint. The incompatible pairs in a constraint are selected randomly from a uniform distribution, but each constraint will always have the same fraction T of such incompatible pairs. T ranges from 0 to 1, with a low value of T, such as 1/9, termed a loose or relaxed constraint.
The fourth parameter, C, specifies the number of constraints out of the N(N - 1)/2 possible. Constraints are chosen randomly from a uniform distribution.

As in previous studies (Cheeseman, Kanefsky, & Taylor 1991; Mitchell, Selman, & Levesque 1992), we observed that the hardest instances tend to be found where about half the problems are solvable and half are not (the "cross-over" point). Most of our experiments were conducted with instances drawn from this 50% range; the necessary parameter combinations were determined experimentally (Frost & Dechter 1994) and are given in Fig. 2.

Results

We first compared the effectiveness of the four learning schemes. Fig. 3 presents a summary of experiments with sets of problems of several sizes (N) and numbers of values (K). 100 problems in each class were generated and solved by five algorithms: backjumping without learning, and then backjumping with each of the four types of learning. In all cases a min-width variable ordering was applied and no bound was placed on the size of the constraints recorded. For each problem instance and for each algorithm we recorded the number of consistency checks (CC), the number of (non-pseudo) dead-ends (DE), the CPU time (CPU secs), the number of new nogoods recorded (NGs), and the average size of (number of variables in) the learned constraints. A consistency check is recorded each time the algorithm checks whether the values of two or more variables are consistent with respect to the constraint between them. The number of dead-ends is a measure of the size of the search space explicated. All experiments were run using a single program with as much shared code and data structures as possible. Therefore we believe CPU time is a meaningful comparative measure.

This experiment demonstrated that only the new jump-back type of learning was effective on these reasonably large problems.
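The four-parameter (N, K, T, C) instance generator described above can be sketched as follows. This is our own code, not the authors' generator; in particular, rounding T * K^2 to an integer nogood count per constraint is an assumption about how the fraction is realized.

```python
import itertools
import random

def random_csp(n, k, t, c, seed=0):
    """Random binary CSP in the (N, K, T, C) model: c constraints chosen
    uniformly from the n(n-1)/2 variable pairs, each forbidding the same
    fraction t of the k*k value pairs."""
    rng = random.Random(seed)
    nogoods_per_constraint = round(t * k * k)
    chosen = rng.sample(list(itertools.combinations(range(n), 2)), c)
    value_pairs = list(itertools.product(range(k), repeat=2))
    return {pair: set(rng.sample(value_pairs, nogoods_per_constraint))
            for pair in chosen}
```

With K = 3 and T = 1/9, each constraint forbids exactly one of its nine value pairs, matching the "loose constraint" setting used in the experiments.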
Once the superiority of jump-back learning was established we stopped experimenting with other types of learning. In the following discussion and figures, all references to learning should be taken to mean jump-back learning.

To determine whether learning would be effective for CSPs with many variables, in our next set of experiments we generated instances from parameters K = 3, T = 1/9, N = {25, 50, 75, 100}, and C set to the appropriate cross-over points. We used backjumping with a min-width ordering on 200 instances in each category, both with and without learning. (No limit was placed on the order of learning.) The mean numbers of consistency checks, dead-ends, and CPU seconds are reported in Fig. 4. The results were encouraging: by all measures learning provided a substantial improvement when added to backjumping. (Experiments with T = 2/9 and T = 3/9, not reported here due to space, show a similar pattern.) Our only reservation was that from other work we knew that a dynamic variable ordering can be a significant improvement over the static min-width. Would learning be able to improve backjumping with DVO?

To find out, we ran another set of experiments, using the same instances, plus some generated with higher values of N; this time backjumping used a dynamic variable ordering. We also experimented with various orders of learning.

Advances in Backtracking 297

Figure 4: Comparison of BJ+MW with and without learning (of unlimited order); K = 3.

Recall that in i-order learning, new constraints are recorded only if they include i or fewer variables. In (Dechter 1990) experiments were conducted with first- and second-order learning. Here, we tried second-, third-, and fourth-order learning, as well as learning without restriction on the size of new constraints.
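The i-order restriction reduces to a one-line check before a new constraint is stored. A minimal sketch (the function name is ours):

```python
def maybe_record(nogoods, conflict_set, i=None):
    """i-order learning: record a new constraint only if it mentions
    i or fewer variables; i=None means unlimited-order learning."""
    if i is None or len(conflict_set) <= i:
        nogoods.append(frozenset(conflict_set))
    return nogoods
```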
However, only third-order and unlimited-order learning are reported, due to space constraints, in Fig. 6.

We make several observations from these data. First, learning becomes more effective as the number of variables in the problem increases. With DVO, when N < 100, the absence or presence of learning, of whatever order, makes very little difference. With the powerful DVO ordering, there are too few dead-ends for learning to be useful, or for the overhead of learning to cause problems. As N increases from 100 on up, learning becomes more effective. For instance, looking at the data for T = 2/9, and comparing CPU time for no learning with CPU time for unlimited-order learning, we see improvements at N = 100 of 1.6 (0.815 / 0.499), at N = 150 of 6.9 (25.463 / 3.710), and at N = 200 of 7.8 (170.990 / 21.808).

A second observation is that when the individual constraints are loose (T = 1/9), learning is at best only slightly helpful and can sometimes degrade performance. The reason is that the conflict sets with loose constraints tend to be larger, since each variable in the conflict set can invalidate (in the case of T = 1/9) only one value from the dead-end variable's domain.

Thirdly, we note that as the order of learning becomes higher, the size of the search space decreases (as measured by the number of dead-ends), but the amount of work at each node increases, indicated by the larger count of consistency checks. For instance, the data for N = 200, T = 2/9, show that in going from third-order learning to unlimited-order learning, dead-ends go down slightly while consistency checks increase by a factor of five. The overall CPU time for the two versions is almost identical, because in our implementation consistency checking is implemented very efficiently.
If the cost to perform each consistency check were higher in relation to the cost to expand the search to a new node, unlimited learning might require more CPU time than restricted-order learning.

Figure 5: BJ+DVO without learning and with third-order learning, for N = 300, K = 3, T = 2/9, and non-50% values of C. All problems with C ≤ 740 were solvable; all with C ≥ 1080 had no solution.

As expected, learning is more effective on problem instances that have more dead-ends and larger search spaces, where there are more opportunities for each learned constraint to be useful.

Figure 6: Comparison of BJ+DVO, without learning, and with third-order and unlimited-order learning; K = 3.

Comparing the means of 200 problem instances solved both with and without learning can obscure the trend that the improvement from learning is generally much greater for the very hardest instances of the population. For instance, the data for N = 200, T = 2/9 show that the mean CPU time for 200 instances is 170.990 without learning and 21.808 with learning, improving by a factor of 7.8 (170.990 / 21.808). If we just consider the 20 problem instances out of the 200 which required the most CPU time to be solved without learning, the mean CPU time of those 20 instances without learning is 1107.18, and 72.48 with unlimited-order learning. The improvement for the hardest problems is a factor of 15, about twice that of the entire sample.

Fig. 5 shows that with large enough N, problems do not have to be drawn from the 50% satisfiable area in order to be hard enough for learning to help.
Learning was especially valuable on extremely hard solvable problems generated by slightly underconstrained values for C. For instance, at N = 300, K = 3, T = 2/9, C = 680, the hardest problem (out of 200 instances) took 47 CPU hours without learning, and under one CPU minute with learning. The next four hardest problems took 4% as much CPU time with learning as without.

Controlling the order of learning has a greater impact on the constraints recorded as N increases. We see this in Fig. 7 (drawn from the same set of experiments as Fig. 6), where the average constraint size increases for unlimited-order learning, but not for third-order. The primary cause of this effect is that learned non-binary constraints are becoming part of conflict sets. The first constraint learned with these parameters (particularly K = 3) can have at most three variables in it, one eliminating each value of the dead-end. Once a 3-variable constraint exists, it may contribute two variables to a conflict set, and thus a four-variable conflict set can arise. For N = 250, the largest conflict set we observed had 11 elements. Recording such a constraint is unlikely to be helpful later in the search.

It is worth noting that we did not find the space requirements of learning to be overwhelming, as has been reported by some researchers. For instance, the average problem at N = 250 and T = 2/9 took about 100 CPU seconds and recorded about 2,600 new constraints (with unlimited-order learning). Each constraint requires fewer than 25 bytes of memory, so the total added memory is well under one megabyte. We found that computer memory is not the limiting factor; time is.

Figure 7: Figures for T = 2/9; learning with BJ+DVO. "Learned" is the number of new constraints learned; "Avg. Size" is the average number of variables in the constraints.

6.
Conclusions

We have introduced a new variant of learning, called jump-back learning, which is more powerful than previous versions. Our experiments show that it is very effective when added on top of an efficient version of backjumping, resulting in at least an order of magnitude reduction in CPU time for some problems. Learning seems to be particularly effective when applied to instances that are large or hard, since it requires many dead-ends to be able to augment the initial problem in a significant way. However, on easy problems with few dead-ends, learning will add little if any cost, thus perhaps making it particularly suitable for situations in which there is a wide variation in the hardness of individual problems. In this way learning is superior to other CSP techniques which modify the initial problem, such as by enforcing a certain order of consistency, since the cost will not be incurred on very easy problems. Moreover, we have shown that the performance of any backtracking algorithm with learning, using a fixed ordering, is bounded by exp(w*).

An important parameter when applying learning is the order, or maximum size, of the constraints learned. With no restriction on the order, it is possible to learn very large constraints that will be unlikely to prune the remaining search space. We plan to study the relationship between K, the size of the domain, and the optimal order of learning. Clearly with higher values of K, the order may need to be higher, especially for loose constraints, possibly rendering learning less effective.

References

Bruynooghe, M., and Pereira, L. M. 1984. Deduction revision by intelligent backtracking. In Campbell, J. A., ed., Implementation of Prolog. Ellis Horwood. 194-215.

Cheeseman, P.; Kanefsky, B.; and Taylor, W. M. 1991. Where the really hard problems are. In Proceedings of the International Joint Conference on Artificial Intelligence, 331-337.

Dechter, R., and Meiri, I. 1989.
Experimental evaluation of preprocessing techniques in constraint satisfaction problems. In International Joint Conference on Artificial Intelligence, 271-277.

Dechter, R. 1990. Enhancement schemes for constraint processing: Backjumping, learning, and cutset decomposition. Artificial Intelligence 41:273-312.

Dechter, R. 1992. Constraint networks. In Encyclopedia of Artificial Intelligence. John Wiley & Sons, 2nd edition.

Freuder, E. C. 1982. A sufficient condition for backtrack-free search. JACM 29(1):24-32.

Frost, D., and Dechter, R. 1994. In search of the best constraint satisfaction search. In Proceedings of the Twelfth National Conference on Artificial Intelligence.

Gaschnig, J. 1979. Performance measurement and analysis of certain search algorithms. Technical Report CMU-CS-79-124, Carnegie Mellon University.

Haralick, R. M., and Elliott, G. L. 1980. Increasing tree search efficiency for constraint satisfaction problems. Artificial Intelligence 14:263-313.

Mackworth, A. K. 1992. Constraint satisfaction problems. In Encyclopedia of Artificial Intelligence. John Wiley & Sons, 2nd edition.

Mitchell, D.; Selman, B.; and Levesque, H. 1992. Hard and easy distributions of SAT problems. In Proceedings of the Tenth National Conference on Artificial Intelligence, 459-465.

Prosser, P. 1993. Hybrid algorithms for the constraint satisfaction problem. Computational Intelligence 9(3):268-299.

Purdom, P. W. 1983. Search rearrangement backtracking and polynomial average time. Artificial Intelligence 21:117-133.

Stallman, R. M., and Sussman, G. J. 1977. Forward reasoning and dependency-directed backtracking in a system for computer-aided circuit analysis. Artificial Intelligence 9:135-196.

Zabih, R., and McAllester, D. 1988. A rearrangement search strategy for determining propositional satisfiability. In Proceedings of the Seventh National Conference on Artificial Intelligence, 155-160.
Discovering Procedural Execution

David Gadbois and Daniel Miranker
University of Texas at Austin
Department of Computer Sciences
Taylor Hall 2.124
Austin, TX 78712-1188
gadbois@cs.utexas.edu, miranker@cs.utexas.edu

Abstract

Executing production system programs involves directly or indirectly executing an interpretive match/select/act cycle. An optimal compilation of a production system program would generate code that requires no appeal to an interpreter to mediate control. However, while a great deal of implicit control information is available in rule-based programs, it is generally impossible to avoid deferring some decisions to run-time.

We introduce an approach that may resolve this problem. We propose an execution model that permits the execution of the usual cycle when necessary and otherwise executes procedural functions. The system is best characterized as a rule rewrite system where rules are replaced with chunks, such that the chunks may have procedural components. Correctness for replacement of rules is derived by a constrained abstract evaluation of the entire program via a general-purpose theorem prover at compile time. The analysis gives a global dependency analysis of the interaction of each of the rules. For a group of popular benchmark programs, we show that there is ample opportunity to automatically substitute interpretive pattern matching with procedural elements and a concomitant improvement in performance.

1 Introduction

Naive interpretation of production system programs involves a match/select/act cycle. The interpreter checks each rule's antecedent conditions to see if it is satisfied by the database, picks one of the satisfied rules to fire, and performs its consequent actions. The cycle repeats until no more rules are satisfied.
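The cycle just described can be rendered as a tiny interpreter. The sketch below is purely illustrative and is ours, not the paper's OPS5 machinery: a rule is a (name, condition, action) triple over a working memory represented as a set of tuples.

```python
def run(rules, wm, select=lambda satisfied: satisfied[0], limit=1000):
    """Naive match/select/act loop over a working memory `wm` (a set)."""
    for _ in range(limit):
        satisfied = [r for r in rules if r[1](wm)]   # match
        if not satisfied:
            return wm                                # quiescence: halt
        _name, _cond, act = select(satisfied)        # select
        adds, dels = act(wm)                         # act
        wm = (wm - dels) | adds
    raise RuntimeError("cycle limit exceeded")
```

Each pass re-matches every rule against the whole memory; that repeated interpretive overhead is exactly what the compilation techniques below seek to remove.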
Approaches to improving the performance of programs and knowledge bases written in rule-based form encompass improved incremental match algorithms, such as TREAT and RETE, local optimizations such as sharing and join optimization, as well as parallelism and learning (Forgy 1982; Miranker & Lofaso 1991; Ishida 1991; Gupta 1988; Miranker et al. 1990; Laird, Newell, & Rosenbloom 1986).

The current implementations are adequate for small-scale main-memory based production systems. However, for very large systems or ones in which access to the working memory is mediated by a database manager, the current compilation technology falls short (Stonebraker 1992). Direct implementation in terms of the match/select/act cycle can lead to grossly inefficient executions. To be specific:

- The execution cycle imposes a considerable overhead in maintaining global run-time data structures that may not be necessary while executing a particular fragment of a program.
- Performing some actions incrementally over a number of cycles, such as maintaining various lists in sorted order, often imposes considerable algorithmic overhead.
- The cycle does not take advantage of statically determinable relations among rules; much control information implicit in a program is constantly recomputed at run-time.
- The execution cycle imposes strict synchronization constraints that inhibit distributed mappings of production system execution.

We claim that an optimal compilation of a production system program would generate code that requires no appeal to the interpreter to mediate control. While this ultimate goal is probably impossible in the general case, we have developed techniques that allow for a substantial reduction in the amount of control mediation required. This approach makes plausible the goal of obtaining execution performance for declarative programs within the range of equivalent procedural ones.
While there has been some work on explicitly representing procedural control knowledge at a linguistic level in production system programs (Georgeff & Bonollo 1983) as well as automatically determining control information (Stolfo & Harrison 1979), we assert that within every declarative program is a procedural one waiting to be discovered, and that it is the job of a sufficiently smart compiler to extract that procedural program.

Enabling Technologies 459

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

In this paper we introduce an approach that resolves this problem. We propose an execution model that permits the execution of the usual cycle when necessary, but otherwise executes specific, procedural control. The key is to arrange for both control styles to leave the system in a consistent state for the other to take over.

Our system is best contrasted with the SOAR system (Laird, Newell, & Rosenbloom 1986). SOAR is a learning system that augments a program with new rules, called chunks. Chunks are synthesized from the existing rules of the program and are expressed in the original source language of the rule program. The output of the SOAR system may then be fed back to itself for ongoing improvement. We loosen the requirement of systemic learning: our system replaces (rewrites) rules, drawing from a language that includes procedural constructs. Consequently, our transformations provide a single compile-time performance improvement. It is, nevertheless, the case that our analysis can and should be reapplied to already-transformed programs. The procedural constructs have explicit representations for binding points, search state, type information, and conditional and loop constructs, as well as various compiler bookkeeping data.
Our technique represents a two-fold expansion of earlier work on the decomposition of rule programs into collections of smaller concurrently executable rule programs (Kuo, Miranker, & Browne 1991; Schmolze 1991; Ishida 1991; Kuo, Moldovan, & Cha 1990). First, we go beyond identifying weak mutual exclusion relationships by formalizing and identifying control flow precedence relationships. This is necessary to determine when rules may be replaced instead of simply adding new rules. We determine the necessary data relationships using an automatic theorem prover to determine the strongest possible conditions on data values. Second, the resulting dataflow graph is expanded to include additional relationships and is used to determine possibilities of transformation.

Our results to date include:

- identifying opportunities for exiting the match/select/act cycle and replacing portions of the execution with procedural code.
- finding opportunities for "RETE-style sharing" that go beyond simple common subexpression elimination.
- detecting, at compile time, rule instantiations that can be fired concurrently, but in previous work eluded parallel execution.
- using the above techniques to find and obtain significant performance improvements, both in terms of decreased cycle count and lower running time, in the execution of a suite of benchmark programs.

Figure 1: STAR system architecture (preprocessor, theorem prover, rule analysis, type analysis, evaluation planner, transformer).

The remainder of this report is organized as follows. In Section 2, we describe the architecture of the STAR (STatic Analysis of Rules) system and the global analysis it performs. Section 3 covers three optimizing transformations for production system languages. Empirical evidence of the opportunities for transformation in standard OPS5 benchmark programs and the results of applying them are presented in Section 4.
Finally, in Section 5 we summarize our results and suggest directions for further research.

2 System Architecture

The global analysis approach presented here generalizes the one presented in (Ishida 1991). The STAR system parses a complete OPS5 (Forgy 1981) source program into an internal representation called the Abstract Rule Language (ARL). For each condition element and working memory modifying action pair in the program, the system computes the set-theoretic relationships between them. Given this information, the system can compute control relations between the program's rules. Using this global information, the system finds opportunities to replace groups of rules with procedural chunks. The process iterates until no more transformations apply. Figure 1 shows a block diagram of the system architecture.

Type relations

We consider the condition elements of the LHS of rules and the working memory modifying actions of their RHSs as first-order formulae, which we will call types, and define relations between them that can be seen as the set-theoretic relations of subsumption, intersection, and disjointness on their extensions. A working memory element belongs to a type (i.e., is a member of its extension) if the element satisfies the type formula.

While it is certainly possible to compute the type relations in an ad hoc manner, doing so is tricky, error-prone, and difficult to extend to new kinds of literals (for example, to those whose terms may contain references to function symbols, or even to completely different rule languages). We have instead implemented the type analysis computation using an automated theorem prover.

The precise type relations so obtained are of general utility. Computing the static clustering algorithm for parallel rule execution given in (Kuo, Miranker, & Browne 1991) using the subsumption relation instead
of the syntactic relations used in the report yields as good, or, in some cases, much better, results.

Propositions corresponding to assertions of the above relations may be constructed directly out of condition element and action pairs and can be passed to the prover for validation. The STAR system uses the Otter resolution-based prover (McCune 1990).

Rule relations

We wish to characterize the possible sequences of rule firings. Given that an instantiation for a particular rule has fired, we characterize the rules that may fire next with three overlapping relations. The relations are similar to those defined in (Kuo, Moldovan, & Cha 1990). Given the relations between the types of the working memory modifying actions of one rule and those in the LHS of another, we can define useful control relationships between the two rules. We are chiefly interested in whether the results of firing one rule may or may not lead to new instantiations of other rules. There are three possibilities:

Enables: One rule enables another if the type of its head literal is subsumed by that of a literal in the body of the other rule. Results from evaluating the first rule will definitely feed into an evaluation of the second.

Maybe enables: One rule might enable another if the type of its head literal intersects with that of a literal in the body of another rule. Results from evaluating the first rule may lead the second rule to produce new results.

Doesn't enable: If the type of the head literal of one rule is disjoint from that of the body of another rule, then evaluating the first rule will have no direct effect on the evaluation of the second.

In general, it is not possible to determine exactly which rule may follow another without particular knowledge of the specific instantiation and the contents of the database.
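With types modelled by finite extensions, the three relations reduce to set tests. This toy sketch is ours; STAR instead poses these questions as propositions over first-order type formulae to the theorem prover.

```python
def control_relation(action_extension, condition_extension):
    """Classify an action/condition pair per the three possibilities,
    using Python sets as stand-ins for type extensions."""
    if action_extension <= condition_extension:
        return "enables"          # subsumed: always feeds the other rule
    if action_extension & condition_extension:
        return "maybe-enables"    # intersects: may feed it
    return "doesn't-enable"       # disjoint: never feeds it
```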
In some cases, though, the set of possibilities can be sufficiently narrowed so that specific code may be generated for each case such that, with succinct dynamic condition analysis, we may decide at run-time which code segment to execute without having to incur the overhead of executing the full match/select/act cycle.

3 Optimizing Transformations

To date we have defined three transformations for production system programs that can be applied to reduce the overall number of cycles required to execute the program. These optimizations are far from exhaustive and are less general than they can be. Our goals include the development of a large number of special- and general-purpose transformations. The three, rule jamming, loop rolling, and branch factoring, capture several major idioms used to express procedural control in production system language programs. Each defines conditions sufficient to determine when a group of rules may be replaced with a procedural chunk.

In order to replace a set of rules with a single, procedural rule, it is necessary to determine that no execution path other than the intentionally transformed one can "pass through" the chunk of rules. More formally, for any initial execution state, a transformation must preserve a non-empty subset of the final states of the original program (if it had any) as well as preserve termination.

To satisfy these correctness requirements, the conditions for determining whether a particular transformation applies require information about all the rules in the system. It does not suffice to determine, say, that execution of one rule is always followed by another in order to replace the two with a single rule; it is also necessary to determine that no other rule can ever lead to the firing of the second.

The three optimizations mentioned here feed into each other. The most important one in terms of performance, loop rolling, is often enabled by performing the others.
For each transformation, we give an example of the applied transformation in an OPS5-like syntax. We must stress here, though, that the ARL target language into which the source rules are transformed is not OPS5. In contrast to chunking in SOAR, which transforms rules from the source language back into the same source language, ARL has explicit representations for binding points, search state, type information, and conditional and loop constructs, as well as various compiler bookkeeping data. As such, it is analogous to the register transfer language common in conventional compilers.

Using a different language in the compiler has several advantages. The chief among them is that it obviates the need to re-recover control information after each round of transformation. Having explicit representations for the procedural constructs makes it easier to generate code for them as well as to specify transformations that build on others. As an internal language, ARL can have more special-purpose rough edges and is more flexible than a general-purpose programming language.

(p make-big-block
    (goal ^state make-big-block)
    -->
    (make block ^size 10))

(p note-big-block
    (block ^size >= 10 ^qual-size nil)
    -->
    (modify 1 ^qual-size big))

Figure 2: Rules that may be jammed together.

(p make-big-block
    (goal ^state make-big-block)
    -->
    (make block ^size 10 ^qual-size big))

Figure 3: A jammed rule.

Rule Jamming

A common idiom in production system programming is to divide a series of operations upon memory tuples across several rules so that the result of firing one rule enables another to do its job. Rule jamming compresses the operations of the multiple rules into a single one. In Figure 2, the note-big-block rule triggers off the make-big-block rule to (eventually) modify every block. If there are no
other dependencies on use of the qual-size of the block or modification of its numerical size, then the computation of the qual-size may be added on to block creation and done in one fell swoop as in Figure 3. If, after jamming the two rules together, there are no other dependencies that could lead to the note-big-block rule firing, then it may be deleted entirely.

Rule jamming is a form of static chunking (Laird, Newell, & Rosenbloom 1986). As such, the transformation is vulnerable to the problem of expensive chunks (Tambe & Newell 1988). A naive implementation may compute large Cartesian products where a search involving multiple rules would have cut off before getting too deep. If the evaluation technique includes an intelligent backtracking scheme and forward information about possible join targets, however, a jammed rule can avoid this problem. In any case, the size of the products is the same, and so asymptotically no more work will be done.

Branch Factoring

The only form of conditional execution available to production system programs is through the use of multiple rules that are otherwise distinguished only by the branch conditions. Multiple rules, with the same join conditions but differing in tests upon constant values, can be merged into a single rule that tests for the constant in the execution phase of rule firing.

(p note-large-blocks
    (block ^size > 10 ^qual-size nil)
    -->
    (modify 2 ^qual-size big))

(p note-small-blocks
    (block ^size <= 10 ^qual-size nil)
    -->
    (modify 2 ^qual-size small))

Figure 4: Unfactored rules.

(p note-block-size
    (goal ^state note-size)
    (block ^size <n> ^qual-size nil)
    -->
    (if <n> <= 10
        then (modify 2 ^qual-size small)
        else (modify 2 ^qual-size big)))

Figure 5: A factored rule.

In Figure 4, the two rules note-large-blocks and note-small-blocks differ by only a constant test on the size of a block.
Since, in this case, all blocks must be examined, it is desirable to match against all the blocks and decide which size category a block belongs to on the right-hand side of a single rule as in Figure 5. For branch factoring, the goal is to find types in the antecedent conditions of each of several rules that can be merged into a single more general type such that the pairwise differences between the original types are unique. The rules can then be merged into a single rule that replaces the specific types with the general one and tests for exactly which one is satisfied in the action part of the rule. Branch factoring allows sharing of code for rules beyond simple common subexpressions. It differs from RETE-style sharing in that it effectively provides a two-level network where the output of the first network is fed back into the second. This kind of sharing is particularly important in active database settings, where a search for instantiations of a factored rule may require only a single scan of the database, whereas searching for instantiations for distinct rules may require two.

Loop Rolling

Production system languages typically have no iteration construct. Loop rolling involves detecting situations where multiple tuples are operated upon in a similar manner and arranging for the operations to occur outside the system cycle.

(p note-large-blocks
  (goal ^state note-large-blocks)
  (block ^size > 10 ^qual-size nil)
  -->
  (modify 2 ^qual-size big))

(p done-noting-large-blocks
  (goal ^state note-large-blocks)
  - (block ^size > 10 ^qual-size nil)
  -->
  (modify 1 ^state do-something-else))

Figure 6: An unrolled loop.

(p note-large-blocks
  (goal ^state note-large-blocks)
  all (block ^size > 10 ^qual-size nil)
  -->
  (modify 2 ^qual-size big)
  (modify 1 ^state do-something-else))

Figure 7: A rolled loop.

There has been some work on adding set-oriented constructs to rule system languages (Delcambre & Etheredge 1988; Widom & Finkelstein 1989).
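Branch factoring, described above, admits a similar sketch. Again the encoding is our own toy illustration, not the compiler's: rules identical except for a constant test are merged into one rule that matches the general condition and decides the branch in its action part, as in Figure 5.

```python
# Toy sketch of branch factoring (our encoding): merge rules that differ
# only in a constant test into one rule that branches at firing time.

def branch_factor(branches):
    """branches: list of (constant_test, action) pairs whose tests are
    mutually exclusive. Returns a single merged rule (a function)."""
    def merged_rule(block):
        for test, action in branches:
            if test(block["size"]):
                return action(block)
        return block                      # no branch applies
    return merged_rule

note_block_size = branch_factor([
    (lambda s: s <= 10, lambda b: {**b, "qual-size": "small"}),
    (lambda s: s > 10,  lambda b: {**b, "qual-size": "big"}),
])
```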
Our approach avoids the tricky problem of specifying a precise and general semantics for set-oriented constructs by leaving their definition within the compiler. We describe an optimization for a simple case of loop rolling that captures a fairly common iteration idiom. The conditions for this optimization are somewhat complex in presentation but fairly simple in concept. The trick is to find a type in a pair of rules that serves as the invariant condition for the loop. One of the rules is called the "body" of the loop, and the other is the "guard." It must be the case that no other rules reference the invariant condition, so that the rules are active exclusively. The body and guard rules must not interfere with each other. The body rule must "count down" the type being operated upon, and the guard rule must check to see when there are no more data elements left. The transformed rule performs all the actions of the body rule that would have been spread out over a number of cycles in a single one. In Figure 6, the rule note-large-blocks serves as the body rule, and done-noting-large-blocks as the guard. The body repeatedly removes blocks with a null qual-size attribute, and the guard checks to see if there are any such blocks left. The type of the goal element (assuming there are no other references to it in the program) serves as the invariant. The resulting rolled loop appears in Figure 7.

Program   Rules   Rule Jamming   Branch Factoring   Loop Rolling
Manners     8        0               0                 2
Waltz      33        0
ARP       111        4               2                 6
Weaver    637       18              56                11

Figure 8: Occurrences of transformations in benchmark programs.

In main-memory implementations, loop rolling can significantly reduce the overhead required to maintain run-time data structures. For example, most systems maintain lists of instantiations in sorted order. When one instantiation is produced at a time, maintaining the lists is an O(n²) insertion sort.
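The cost contrast for maintaining sorted instantiation lists can be made concrete. The encoding is ours, for illustration only: inserting one instantiation at a time amounts to an insertion sort, while a rolled loop that delivers a whole batch can sort once.

```python
# Small illustration (ours) of sorted-list maintenance: one-at-a-time
# insertion versus a single batch sort for a rolled loop's output.
import bisect

def insert_one_at_a_time(sorted_list, new_items):
    out = list(sorted_list)
    for item in new_items:        # each insort shifts O(n) elements
        bisect.insort(out, item)
    return out

def insert_batch(sorted_list, new_items):
    # one O(n log n) sort for the whole batch
    return sorted(list(sorted_list) + list(new_items))
```

Both paths produce the same list; only the amount of element movement differs.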
If a number of instantiations are produced at once, a much more efficient O(n log n) sorting algorithm may be used. In the contexts of database integration and parallel execution, loop rolling passes more information to the query processor at once, allowing it to do a better job of optimizing selection and join sequences. Set-oriented constructs may appear explicitly in the generated code for the rolled loops, and so fewer queries can be issued, and the ones remaining are more easily parallelized. In both cases, loop rolling reduces the activity of the interpreter by eliminating an invocation of it upon each loop iteration. In parallel implementations the interpreter calls can require a considerable amount of overhead. Loop rolling can reduce the asymptotic complexity of program execution. Using conventional matching algorithms, worst-case matching complexity is W^n, where W is the size of the working memory and n is the largest number of condition elements in a single rule. If it is possible to roll all firings of the body rule into a single cycle, matching complexity is reduced by an exponential factor.

4 Experimental Results

We have examined a number of commonly used OPS5 benchmark programs for the applicability of the transformations given in Section 3. The programs are extensively studied in (Brant et al. 1991). Figure 8 summarizes the programs and opportunities for transformations. Some small programs, such as Manners, do not exhibit many possibilities for the three transformations described in this report. Figure 9 shows the number of interpreter cycles needed to execute the benchmarks with and without applying the transformations. Figure 10 shows the wall-clock execution times of the programs.

Figure 9: Interpreter cycles needed to execute untransformed and transformed programs.

Figure 10: Run-time (seconds) performance of untransformed and transformed programs.

5 Conclusions

We have described an architecture for an optimizing compiler for production system languages and several optimizations it can perform. Besides the three mentioned in this report, there are a number of other possible transformations that can be developed using this framework. Due to the generality of using first-order representations and using a general-purpose theorem prover to compute their relationship, the framework can be tailored to deal with the peculiarities of particular production system languages and exploit additional opportunities for optimizations. The key to the approach is the wholesale replacement of rules whenever global static analysis can determine that the structure of the rules and the semantics of the program allow a single execution path. The resulting program then avoids the overhead of determining that path at run-time. We have demonstrated good execution improvements using only a few transformations. Armed with a sufficient number of these transformations, a production system compiler can arrange to significantly reduce the number of match/select/act cycles needed to execute a program.

6 Acknowledgments

This research was supported by the State of Texas Advanced Technology Program, the University of Texas Applied Research Laboratories Internal Research and Development Program, and ARPA, grant number DABT63-92-0042.

References

Brant, D.; Lofaso, B.; Gross, T.; and Miranker, D. P. 1991. Effects of Database Size on Rule System Performance: Five Case Studies. In Proc. 17th VLDB Conf.
Delcambre, L. M. L., and Etheredge, J. N. 1988. The Relational Production Language: A Production Language for Relational Databases. In Proc. 2nd Int'l Conf. on Expert Database Systems, 153-162.
Forgy, C. L. 1981. OPS5 User's Manual. Technical report, Department of Computer Science, Carnegie Mellon University.
Forgy, C. L. 1982. RETE: A Fast Match Algorithm for the Many Pattern/Many Object Pattern Match Problem. Artificial Intelligence 19:17-37.
Georgeff, M., and Bonollo, U. 1983. Procedural Expert Systems. In Proc. Int'l Joint Conf. on AI, 151-157. William Kaufmann, Inc.
Gupta, A. 1988. Parallelism in Production Systems. Pittman/Morgan-Kaufman.
Ishida, T. 1991. Parallel Rule Firings in Production Systems. IEEE Trans. on Knowledge and Data Engineering 3(1).
Kuo, C.-M.; Miranker, D. P.; and Browne, J. C. 1991. On the Performance of the CREL System. Journal of Parallel and Distributed Computing 13(4):424-441.
Kuo, S.; Moldovan, D.; and Cha, S. 1990. Control in Production Systems with Multiple Rule Firings. In Proc. IEEE Int'l Conf. on Parallel Processing, volume II, 243-246. IEEE.
Laird, J.; Newell, A.; and Rosenbloom, P. 1986. Soar: An Architecture for General Intelligence. Artificial Intelligence 33(1):1-64.
McCune, W. W. 1990. OTTER 2.0 Users Guide. Technical Report ANL-90/9, Argonne National Laboratory.
Miranker, D., and Lofaso, B. J. 1991. The Organization and Performance of a TREAT-Based Production System Compiler. IEEE Trans. on Knowledge and Data Engineering 3-10.
Miranker, D. P.; Brant, D.; Lofaso, B.; and Gadbois, D. 1990. On the Performance of Lazy Matching in Production Systems. In Proc. Nat. Conf. on Artificial Intelligence, 685-692. AAAI Press.
Piatetsky-Shapiro, G., and Frawley, W., eds. 1991. Knowledge Discovery in Databases. AAAI Press and MIT Press.
Schmolze, J. 1991. Guaranteeing Serializable Results in Synchronous Parallel Production Systems. J. of Parallel and Dist. Com. 13(4):348-364.
Stolfo, S. J., and Harrison, M. C. 1979. Automatic Discovery of Heuristics for Nondeterministic Programs. In 6th Int'l Joint Conf. on AI.
Stonebraker, M. 1992. The Integration of Rule Systems and Database Systems. IEEE Trans. on Knowledge and Data Engineering 415-423.
Tambe, M., and Newell, A. 1988. Some Chunks Are Expensive. In Proc. Int'l Conf. on Machine Learning.
Widom, J., and Finkelstein, S. 1989. A Syntax and Semantics for Set-Oriented Production Rules in Relational Database Systems. Technical Report RJ 6880 (65706), IBM Almaden Research Center. | 1994 | 14 |
1,475 | In search of the best constraint satisfaction search * Daniel Frost and Rina Dechter Dept. of Information and Computer Science University of California, Irvine, CA 92717 {dfrost,dechter}@ics.uci.edu

Abstract

We present the results of an empirical study of several constraint satisfaction search algorithms and heuristics. Using a random problem generator that allows us to create instances with given characteristics, we show how the relative performance of various search methods varies with the number of variables, the tightness of the constraints, and the sparseness of the constraint graph. A version of backjumping using a dynamic variable ordering heuristic is shown to be extremely effective on a wide range of problems. We conducted our experiments with problem instances drawn from the 50% satisfiable range.

1. Introduction

We are interested in studying the behavior of algorithms and heuristics that can solve large and hard constraint satisfaction problems via systematic search. Our approach is to focus on the average-case behavior of several search algorithms, all variations of backtracking search, by analyzing their performance over a large number of randomly generated problem instances. Experimental evaluation of search methods may allow us to identify properties that cannot yet be identified formally. Because CSPs are an NP-complete problem, the worst-case performance of any algorithm that solves them is exponential. Nevertheless, the average-case performance between different algorithms, determined experimentally, can vary by several orders of magnitude. An alternative to our approach is to do some form of average-case analysis. An average-case analysis requires, however, a precise characterization of the distribution of the input instances. Such a characterization is often not available. There are limitations to the approach we pursue here.
The most important is that the model we use to generate random problems may not correspond to the type of problems which a practitioner actually encounters, possibly rendering our results of little or no relevance. Another problem is that subtle biases, if not outright bugs, in our implementation may skew the results. The only safeguard against such bias is the repetition of our experiments, or similar ones, by others; to facilitate such repetition we have made our instance generating program available by FTP.¹

*This work was partially supported by NSF grant IRI-9157636, by Air Force Office of Scientific Research grant AFOSR 900136 and by grants from Toshiba of America and Xerox.

In the following section we define formally constraint satisfaction problems and describe briefly the algorithms and heuristics to be studied. We then show that the linear relationship between the number of constraints and the number of variables at the 50% solvable region, observed for 3-SAT problems by (Mitchell, Selman, & Levesque 1992; Crawford & Auton 1993), is observed only approximately for binary CSPs with more than two values per variable. We conducted our experiments with problems drawn from this region. Section 3 describes those studies, which involved backtracking, backjumping, backmarking, forward checking, two variable ordering heuristics, and a new value ordering heuristic called sticking values. The results of these experiments show that backjumping with a dynamic variable ordering is a very effective combination, and also that backmarking and the sticking values heuristic can significantly improve backjumping with a fixed variable ordering. The final section states our conclusions.

2. Definitions and Algorithms

A constraint satisfaction problem (CSP) is represented by a constraint network, consisting of a set of n variables, X1, ..., Xn; their respective value domains, D1, ..., Dn; and a set of constraints. A constraint Ci(Xi1, . . .
, Xij) is a subset of the Cartesian product Di1 × ... × Dij, consisting of all tuples of values for a subset (Xi1, ..., Xij) of the variables which are compatible with each other. A solution is an assignment of values to all the variables such that no constraint is violated; a problem with a solution is termed satisfiable. Sometimes it is desired to find all solutions; in this paper, however, we focus on the task of finding one solution, or proving that no solution exists. A binary CSP is one in which each of the constraints involves at most two variables. A constraint satisfaction problem can be represented by a constraint graph consisting of a node for each variable and an arc connecting each pair of variables that are contained in a constraint.

¹ftp to ics.uci.edu, login as "anonymous," give your e-mail address as password, enter "cd /pub/CSP-repository," and read the README file for further information.

Advances in Backtracking 301 From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Algorithms and Heuristics

Our experiments were conducted with backtracking (Bitner & Reingold 1985), backmarking (Gaschnig 1979; Haralick & Elliott 1980), forward checking (Haralick & Elliott 1980), and a version of backjumping (Gaschnig 1979; Dechter 1990) proposed in (Prosser 1993) and called there conflict-directed backjumping. Space does not permit more than a brief discussion of these algorithms. All are based on the idea of considering the variables one at a time, during a forward phase, and instantiating the current variable V with a value from its domain that does not violate any constraint either between V and all previously instantiated variables (backtracking, backmarking, and backjumping) or between V and the last remaining value of any future, uninstantiated variable (forward checking).
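The forward phase just described, together with the chronological backtracking it falls back on at a dead-end, can be sketched as follows. This is our own minimal encoding, not the authors' C implementation; consistency checks are counted, since the paper uses that count as its measure of work.

```python
# Minimal sketch (our encoding) of chronological backtracking for a
# binary CSP, counting consistency checks.

def backtrack_search(domains, conflicts):
    """domains: list of value lists, one per variable (fixed order).
    conflicts(u, a, v, b): True iff (X_u = a, X_v = b) violates a
    constraint. Returns (solution or None, consistency check count)."""
    n, assignment = len(domains), []
    checks = 0

    def consistent(v, value):
        nonlocal checks
        for u, other in enumerate(assignment):
            checks += 1                       # one consistency check
            if conflicts(u, other, v, value):
                return False
        return True

    def extend(v):
        if v == n:
            return True
        for value in domains[v]:
            if consistent(v, value):
                assignment.append(value)
                if extend(v + 1):
                    return True
                assignment.pop()              # undo and try next value
        return False                          # dead-end: back up one level

    found = extend(0)
    return (list(assignment) if found else None), checks
```

For example, two variables with domains {0, 1} and a not-equal constraint are solved with two consistency checks.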
If V has no such non-conflicting value, then a dead-end occurs, and in the backwards phase a previously instantiated variable is selected and re-instantiated with another value from its domain. With backtracking, the variable chosen to be re-instantiated after a dead-end is always the most recently instantiated variable; hence backtracking is often called chronological backtracking. Backjumping, in contrast, can in response to a dead-end identify a variable U, not necessarily the most recently instantiated, which is connected in some way to the dead-end. The algorithm then "jumps back" to U, uninstantiates all variables more recent than U, and tries to find a new value for U from its domain. The version of backjumping we use is very effective in choosing the best variable to jump back to. Determining whether a potential value for a variable violates a constraint with another variable is called a consistency check. Because consistency checking is performed so frequently, it constitutes a major part of the work performed by all of these algorithms. Hence a count of the number of consistency checks is a common measure of the overall work of an algorithm. Backmarking is a version of backtracking that can reduce the number of consistency checks required by backtracking without changing the search space that is explored. By recording, for each value of a variable, the shallowest variable-value pair with which it was inconsistent, if any, backmarking can eliminate the need to unnecessarily repeat checks which have been performed before and will again succeed or fail. Although backmarking per se is an algorithm based on backtracking, its consistency-check-avoiding techniques can be applied to backjumping (Nadel 1989; Prosser 1993). In our experiments we evaluate the success of integrating backjumping and backmarking.
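The jump-back idea can be illustrated in a simplified, Gaschnig-style form (the paper uses Prosser's conflict-directed variant, which jumps more aggressively). For each value of the dead-end variable we find the earliest past variable it conflicts with; jumping to the deepest of these levels is safe, because reassigning anything shallower cannot free any value.

```python
# Simplified sketch (ours) of picking the backjump variable U at a
# dead-end; Gaschnig-style, not the conflict-directed version used in
# the paper's experiments.

def backjump_target(v, domain_v, assignment, conflicts):
    """assignment: values of variables 0..v-1 in instantiation order.
    Returns the index to jump back to, or None if v is not a dead-end."""
    deepest = -1
    for value in domain_v:
        earliest = next((u for u, a in enumerate(assignment)
                         if conflicts(u, a, v, value)), None)
        if earliest is None:
            return None                   # some value is consistent
        deepest = max(deepest, earliest)  # culprit level for this value
    return deepest
```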
The forward checking algorithm uses a look-ahead approach: before a value is chosen for V, consistency checking is done with all future (uninstantiated) variables. Any conflicting value in a future variable W is removed temporarily from W's domain, and if this results in W having an empty domain then the value under consideration for V is rejected. We used two variable ordering heuristics, min-width and dynamic variable ordering, in our experiments. The minimum width (MW or min-width) heuristic (Freuder 1982) orders the variables from last to first by repeatedly selecting a variable in the constraint graph that connects to the minimal number of variables that have not yet been selected. Min-width is a static ordering that is computed once before the algorithm begins. In a dynamic variable ordering (DVO) scheme (Haralick & Elliott 1980; Purdom 1983; Zabih & McAllester 1988) the variable order can be different in different branches of the search tree. Our implementation selects at each step the variable with the smallest remaining domain size, when only values that are consistent with all instantiated variables are considered. Ties are broken randomly, and the variable participating in the most constraints is selected to be first. We also experimented with a new value ordering heuristic for backjumping called sticking value. The notion is to remember the value a variable is assigned during the forward phase, and then to select that value, if it is consistent, the next time the same variable needs to be instantiated during a forward phase. (If the "sticking value" is not consistent, then another value is chosen arbitrarily.) The intuition is that if the value was successful once, it may be useful to try it first later on in the search. This heuristic is inspired by local repair strategies (Minton et al.
1992; Selman, Levesque, & Mitchell 1992) in which all variables are instantiated, and then until a solution is found the values of individual variables are changed, but never uninstantiated. Before jumping to our empirical results, we want to mention that the backjumping algorithm when used with a fixed ordering has a nice graph-based complexity bound. Given a graph G, a dfs ordering of the nodes is an ordering generated by a depth-first search traversal on G, generating a DFS tree (Even 1979). We have shown elsewhere the following theorem:

Theorem 1 (Collin, Dechter, & Katz 1991): Let G be a constraint network and let d be a dfs ordering of G whose DFS tree has depth m. Backjumping on d is O(exp(m)).

3. Methodology and Results

Figure 1: The "C" columns show values of C which empirically produce 50% solvable problems, using the model described in the text and the given values of N, K, and T. The "C/N" column shows the value from the "C" column to its left, divided by the current value for N. "**" indicates that at this setting of N, K and T, even the maximum possible value of C produced only satisfiable instances. A blank entry signifies that problems generated with these parameters were too large to run.

The experiments reported in this paper were run on random instances generated using a model that takes
The specific constraints are cho- sen randomly from a uniform distribution. This model is the binary CSP analog of the Random KSAT model described in (Mitchell, Selman, & Levesque 1992). Although our random generator can create ex- tremely hard instances, they may not be typical of actual problems encountered in applications. There- fore, in order to capture a wider variety of instances we introduce another generator, the chain model, that creates problems with a specific structure. A chain problem instance is created by generating several dis- joint subproblems, called nodes, with our general gen- erator described above, ordering them arbitrarily, and then joining them sequentially so that a single con- straint connects one variable in one subproblem with one variable in the next. 58% Solvable Points for CSPs All experiments reported in this paper were run with combinations of N, K, T and C that produces prob- lem instances which are about 50% solvable (some- times called the “cross-over” point). These combina- tions were determined empirically, and are reported in Fig. 1. To find cross-over points we selected values of N, I< and T, and then varied C, generating 250 or more instances from each set of parameters until half of the problems had solutions. Sometimes no value of C resulted in exactly 50% satisfiable; for instance with N= 50, I( = 6,T = 12/36 we found with C = 194 that 46% of the instances had solutions, while with C = 193 54% did. In such cases we report the value of C that came closest to 50%. For some settings of N, Ii’ and T, all values of C pro- duce only satisfiable instances. Since generally there is an inverse relationship between T, the tightness of each Advances in Backtracking 303 constraint, and C, the number r of constraints, this sit- uation occurs when the constraints are so loose that even with C at its maximum value, N * (N - 1)/2, no unsatisfiable instances result. 
Our data indicate that this phenomenon only occurs at small values of N. I N 11 BT+MW 1 BJ+MW 1 BT+DVO I BJ+DVO 1 K=9 T=9/81 15 5,844 724 673 673 25 859,802 116,382 1,929 1,924 35 119,547,843 219,601 217,453 K=9 T=18/81 15 110,242 48,732 2,428 2,426 25 15,734,382 6,841,255 253,289 252,581 35 392.776.002 17.988.106 17.901.386 I I I I I I , I I I II K=9 T=27/81 1 7 15 106,762 73,541 10,660 10,648 25 1,099,838 583,038 55,402 54,885 35 4.868.528 201.658 189.634 Figure 2: Comparison of backjumping and backtrack- ing with min-width and dynamic variable ordering. Each number represents mean consistency checks over 1000 instances. The chart is blank where no experi- ments were conducted because the problems became too large for the algorithm. We often found that the peak of difficulty, as mea- sured by mean consistency checks or mean CPU time, is not exactly at the 50% point, but instead around the 10% to 30% solvable point, and the level of difficulty at this peak is about 5% to 10% higher than at the 50% point. We nevertheless decided to use the 50% satisfi- able point, since it is algorithm independent. The pre- cise value of C that produces the peak of difficulty can vary depending on algorithm, since some approaches handle satisfiable instances more efficiently. In contrast to the findings of (Mitchell, Selman, & Figure 3: Comparison of backjumping and backtrack- ing with min-width and dynamic variable ordering, us- ing “chain” problems with 15-variable nodes. K=3, T=1/9, and N = 15 * “Nodes”. Each number repre- sents mean consistency checks over 1000 instances. Levesque 1992; Crawford & Auton 1983) for S-SAT, we did not observe a precise linear relationship between the number of variables and the number of constraints (which are equivalent to clauses in CNF). The ratio of C to N appears to be asymptotically linear, but it is impossible to be certain of this from our data. 
Static and Dynamic Variable Orderings In our first set of experiments we wanted to assess the merits of static and dynamic variable orderings when used with backtracking and backjumping. As the data from Fig. 2 indicate, DVO prunes the search space so effectively that when using it the distinction between backtracking and backjumping is not significant until the number of variables becomes quite large. An excep- tion to this general trend occurs when using backtrack- ing with dynamic variable ordering on sparse graphs. For example, with N = 100, K = 3, and T= 3/9, C is set to 169, which creates a very sparse graph that oc- casionally consists of two or more disjoint sub-graphs. If one of the sub-graphs has no solution, backtracking will still explore its search space repeatedly while find- ing solutions to the other sub-graphs. Because back- jumping jumps between connected variables, in effect it solves the disconnected sub-graphs separately, and if one of them has no solution the backjumping algorithm will halt once that search space is explored. Thus the data in Fig. 2 show that backtracking, even with dy- namic variable ordering, can be extremely inefficient on large CSPs that may have disjoint sub-graphs. Figure 4: Data with N = 75, K = 3, drawn from the same experiments as in Fig. 2. The column “C/2775” indicates the ratio of constraints to the maximumpos- sible for N = 75. At large N, the combination of DVO and backjump- ing is particularly felicitous. Backjumping is more ef- fective on sparser constraint graphs, since the average 304 Constraint Satisfaction 6 35 4136 639,699 646,529 6 35 8136 78,217 79,527 6 35 12136 18.404 18.981 I I I 6 35 16j36 1 6;863 1 71125 9 25 9181 1,929 1,935 9 25 18/81 253,289 255,589 9 25 27181 55,402 56,006 9 25 36181 17.976 18,274 Figure 5: Comparison of backtracking and forward checking with DVO. Each number is the mean con- sistency checks over 1000 instances. size of each “jump” increases with increasing sparse- ness. 
DVO, in contrast, tends to function better when there are many constraints, since each constraint pro- vides information it can utilize in deciding on the next variable. We assessed this observation quantitatively by recording the frequency with which backjumping with DVO selected a variable that only had one re- maining compatible value. This is the situation where DVO can most effectively prune the search space, since it is acting exactly like unit-propagation in boolean sat- isfiability problems, and making the forced choice of variable instantiation as early as possible. See Fig. 4, where the column labelled “DVO single” shows how likely DVO was to find a variable with one remain- ing consistent value, for one setting of N and Ii’. The decreasing frequency of single-valued variables as the constraint graph becomes sparse indicates that DVO has to make a less-informed choice about the variable to choose next. For the backjumping algorithm with a MW ordering we recorded the average size of the jump at a dead-end, that is, how many variables were passed over between the dead-end variable and the variable jumped back to. With backtracking this statistic would always be 1. This statistic is reported in the “MW jmp size” column in Fig. 4, and shows how backjumping jumps further on sparser graphs. Dynamic variable order was somewhat less successful when applied to the chain type problems. With these structured problems we were able to experiment with much larger instances, up to 450 variables organized as thirty 15-variable nodes. The data in Fig. 3 show that backjumping was more effective on this type of problem than was DVO, and the combination of the two was over an order of magnitude better than either approach alone. Forward Checking A benefit of studying algorithms by observing their average-case behavior is that it is sometimes possible to determine which component of an algorithm is actually responsible for its performance. 
For instance, forward checking is often acclaimed as a particularly good algorithm (Nadel 1989). We note that it is possible to implement just part of forward checking as a variable ordering heuristic: if instantiating a variable with a certain value will cause a future variable to be a dead-end, then rearrange the variable ordering to make that future variable the next variable. The result is essentially backtracking with DVO. This method does not do all of forward checking, which would require rejecting the value that causes the future dead-end. In Fig. 5 we compare forward checking with backtracking, using DVO for both algorithms. The result is approximately equivalent performance. Thus we suggest that forward checking should be recognized more as a valuable variable ordering heuristic than as a powerful algorithm.

Backmarking and Sticking Values

The next set of experiments was designed to determine whether backmarking and sticking values, alone or in combination, could improve the performance of backjumping under a static min-width ordering. (We plan to report on backmarking and sticking values with dynamic variable ordering in future work.) Since backmarking and sticking values remember information about the history of the search in order to guide future search, we report on CPU time as well as consistency checks (see Fig. 6). Is the overhead of maintaining additional information less than the cost of the saved consistency checks? Only by examining CPU time can we really tell. We implemented all the algorithms and heuristics described in this paper in a single C program, with common data structures, subroutines, and programmer skill, so we believe comparing CPU times is meaningful, though not definitive. Our experiments as summarized in Fig. 6 show that both backmarking and sticking values offer significant improvement when integrated with backjumping, usually reducing CPU time by a half or a third. As expected, the improvement in consistency checks is much greater, but both enhancements seem to be cost effective. Backmarking offers more improvement than does sticking values. Both techniques are more effective on the problems with smaller domain sizes; at K = 9 the benefit of sticking values in terms of reduced CPU time has almost disappeared. Backmarking helps backjumping over all the problem types we studied. The results from chain problems did not vary significantly from those of the unstructured problems.

4. Conclusions

We have several results from experimenting with larger and harder CSPs than have been reported before. Backjumping with dynamic variable ordering seems in general to be a powerful complete search algorithm. The two components complement each other, with backjumping stronger on sparser, more structured, and possibly disjoint graphs.

Figure 6: Results from experiments with backjumping, backmarking and sticking values. Each number is the mean of 1000 instances, and a min-width ordering was used throughout.

We have shown that the power of forward checking is mostly subsumed by a dynamic variable ordering heuristic. We have introduced a new value ordering heuristic called sticking values and shown that it can significantly improve backjumping when the variables' domains are relatively small. We have also shown that the backmarking technique can be applied to backjumping with good results over a wide range of problems. One result visible in all our experiments is that among problems with a given number of variables, and drawn from the 50% satisfiable region, those with many loose constraints are much harder than those with fewer and tighter constraints. This is consistent with tightness properties shown in (van Beek & Dechter 1994). The pattern is not always observed for low values of N and T, since there may be no 50% region at all.
We have also shown that the linear relationship between variables and clauses observed with boolean satisfiability problems at the cross-over point is not found with CSPs generated by our model.

References

Bitner, J. R., and Reingold, E. 1975. Backtrack programming techniques. Communications of the ACM 18:651-656.
Collin, Z.; Dechter, R.; and Katz, S. 1991. On the Feasibility of Distributed Constraint Satisfaction. In Proceedings of the International Joint Conference on Artificial Intelligence, 318-324.
Crawford, J. M., and Auton, L. D. 1993. Experimental results on the crossover point in satisfiability problems. In Proceedings of the Eleventh National Conference on Artificial Intelligence, 21-27.
Dechter, R. 1990. Enhancement Schemes for Constraint Processing: Backjumping, Learning, and Cutset Decomposition. Artificial Intelligence 41:273-312.
Even, S. 1979. Graph Algorithms. Maryland: Computer Science Press.
Freuder, E. C. 1982. A sufficient condition for backtrack-free search. JACM 29(1):24-32.
Gaschnig, J. 1979. Performance measurement and analysis of certain search algorithms. Technical Report CMU-CS-79-124, Carnegie Mellon University.
Haralick, R. M., and Elliott, G. L. 1980. Increasing Tree Search Efficiency for Constraint Satisfaction Problems. Artificial Intelligence 14:263-313.
Minton, S.; Johnson, M. D.; Phillips, A. B.; and Laird, P. 1992. Minimizing conflicts: a heuristic repair method for constraint satisfaction and scheduling problems. Artificial Intelligence 58(1-3):161-205.
Mitchell, D.; Selman, B.; and Levesque, H. 1992. Hard and Easy Distributions of SAT Problems. In Proceedings of the Tenth National Conference on Artificial Intelligence, 459-465.
Nadel, B. A. 1989. Constraint satisfaction algorithms. Computational Intelligence 5:188-224.
Prosser, P. 1993. BM + BJ = BMJ. In Proceedings of the Ninth Conference on Artificial Intelligence for Applications, 257-262.
Purdom, P. W. 1983. Search Rearrangement Backtracking and Polynomial Average Time. Artificial Intelligence 21:117-133.
Prosser, P. 1993. Hybrid Algorithms for the Constraint Satisfaction Problem. Computational Intelligence 9(3):268-299.
Selman, B.; Levesque, H.; and Mitchell, D. 1992. A New Method for Solving Hard Satisfiability Problems. In Proceedings of the Tenth National Conference on Artificial Intelligence, 440-446.
van Beek, P., and Dechter, R. 1994. Constraint tightness versus global consistency. In Proc. of KR-94.
Zabih, R., and McAllester, D. 1988. A Rearrangement Search Strategy for Determining Propositional Satisfiability. In Proceedings of the Seventh National Conference on Artificial Intelligence, 155-160.
Solution Reuse in Dynamic Constraint Satisfaction Problems

Gérard Verfaillie and Thomas Schiex
ONERA-CERT
2 avenue Edouard Belin, BP 4025
31055 Toulouse Cedex, France
{verfail,schiex}@cert.fr

Abstract

Many AI problems can be modeled as constraint satisfaction problems (CSP), but many of them are actually dynamic: the set of constraints to consider evolves because of the environment, the user or other agents in the framework of a distributed system. In this context, computing a new solution from scratch after each problem change is possible, but has two important drawbacks: inefficiency and instability of the successive solutions. In this paper, we propose a method for reusing any previous solution and producing a new one by local changes on the previous one. First we give the key idea and the corresponding algorithm. Then we establish its properties: termination, correctness and completeness. We show how it can be used to produce a solution, either from an empty assignment, or from any previous assignment, and how it can be improved using filtering or learning methods, such as forward-checking or nogood-recording. Experimental results related to efficiency and stability are given, with comparisons with well known algorithms such as backtrack, heuristic repair or dynamic backtracking.

Problem description

Recently, much effort has been spent to increase the efficiency of the constraint satisfaction algorithms: filtering, learning and decomposition techniques, improved backtracking, use of efficient representations and heuristics... This effort resulted in the design of constraint reasoning tools which were used to solve numerous real problems. However all these techniques assume that the set of variables and constraints which compose the CSP is completely known and fixed.
This is a strong limitation when dealing with real situations where the CSP under consideration may evolve because of:
- the environment: evolution of the set of tasks to be performed and/or of their execution conditions in scheduling applications;
- the user: evolution of the user requirements in the framework of an interactive design;
- other agents in the framework of a distributed system.

The notion of dynamic CSP (DCSP) (Dechter & Dechter 1988) has been introduced to represent such situations. A DCSP is a sequence of CSPs, where each one differs from the previous one by the addition or removal of some constraints. It is indeed easy to see that all the possible changes to a CSP (constraint or domain modifications, variable additions or removals) can be expressed in terms of constraint additions or removals.

To solve such a sequence of CSPs, it is always possible to solve each one from scratch, as it has been done for the first one. But this naive method, which remembers nothing from the previous reasoning, has two important drawbacks:
- inefficiency, which may be unacceptable in the framework of real time applications (planning, scheduling, etc.), where the time allowed for replanning is limited;
- instability of the successive solutions, which may be unpleasant in the framework of an interactive design or a planning activity, if some work has been started on the basis of the previous solution.

The existing methods can be classified in three groups:
- heuristic methods, which consist of using any previous consistent assignment (complete or not) as a heuristic in the framework of the current CSP (Hentenryck & Provost 1991);
- local repair methods, which consist of starting from any previous consistent assignment (complete or not) and of repairing it, using a sequence of local modifications (modifications of only one variable assignment) (Minton et al.
1992; Selman, Levesque, & Mitchell 1992; Ghedira 1993);
- constraint recording methods, which consist of recording any kind of constraint which can be deduced in the framework of a CSP and its justification, in order to reuse it in the framework of any new CSP which includes this justification (de Kleer 1989; Hentenryck & Provost 1991; Schiex & Verfaillie 1993).

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

The methods of the first two groups aim at improving both efficiency and stability, whereas those of the last group only aim at improving efficiency. A little apart from the previous ones, a fourth group gathers methods which aim at minimizing the distance between successive solutions (Bellicha 1993).

Key idea

The proposed method originated in previous studies for the French Space Agency (CNES) (Badie & Verfaillie 1989) which aimed at designing a scheduling system for a remote sensing satellite (SPOT). In this problem, the set of tasks to be performed evolved each day because of the arrival of new tasks and the achievement of previous ones. One of the requirements was to disturb as little as possible the previous scheduling when entering a new task. For solving such a problem, the following idea was used: it is possible to enter a new task t iff there exists for t a location such that all the tasks whose location is incompatible with t's location can be removed and entered again one after another, without modifying t's location.
In terms of CSP, the same idea can be expressed as follows: let us consider a binary CSP; let A be a consistent assignment of a subset V of the variables;¹ let v be a variable which does not belong to V; we can assign v, i.e., obtain a consistent assignment of V ∪ {v}, iff there exists a value val of v such that we can assign val to v, remove all the assignments (v', val') which are inconsistent with (v, val) and assign these unassigned variables again one after another, without modifying v's assignment. If the assignment A ∪ {(v, val)} is consistent, there is no variable to unassign and the solution is immediate. Note that it is only for the sake of simplicity that we consider here a binary CSP. As we will see afterwards, the proposed method deals with general n-ary CSPs.

With such a method, for which we use the name local changes (lc) and which clearly belongs to the second group (local repair methods), solving a CSP looks like solving a fifteen puzzle problem: a sequence of variable assignment changes which allows any consistent assignment to be extended to a larger consistent one.

Algorithm

The corresponding algorithm can be described as follows:

lc(csp)
  return lc-variables(∅, ∅, variables(csp))

¹An assignment A of a subset of the CSP variables is consistent iff all the constraints assigned by A are satisfied; a constraint c is assigned by an assignment A iff all its variables are assigned by A.
lc-variables(V1, V2, V3)
  ; V1 is a set of assigned and fixed variables
  ; V2 is a set of assigned and not fixed variables
  ; V3 is a set of unassigned variables
  if V3 = ∅
  then return success
  else let v be a variable chosen in V3
       let d be its domain
       if lc-variable(V1, V2, v, d) = failure
       then return failure
       else return lc-variables(V1, V2 ∪ {v}, V3 - {v})

lc-variable(V1, V2, v, d)
  if d = ∅
  then return failure
  else let val be a value chosen in d
       save-assignments(V2)
       assign-variable(v, val)
       if lc-value(V1, V2, v, val) = success
       then return success
       else unassign-variable(v)
            restore-assignments(V2)
            return lc-variable(V1, V2, v, d - {val})

lc-value(V1, V2, v, val)
  let A1 = assignment(V1)
  let A12 = assignment(V1 ∪ V2)
  if A1 ∪ {(v, val)} is inconsistent
  then return failure
  else if A12 ∪ {(v, val)} is consistent
  then return success
  else let V3 be a non-empty subset of V2 such that,
           with A123 = assignment(V1 ∪ V2 - V3),
           A123 ∪ {(v, val)} is consistent
       unassign-variables(V3)
       return lc-variables(V1 ∪ {v}, V2 - V3, V3)

Properties

Let us consider the following theorems:

Theorem 1 If the CSP csp is consistent (resp. inconsistent), the procedure call lc(csp) returns success (resp. failure); in case of success, the result is a consistent assignment of csp's variables.

Theorem 2 Let V1 and V2 be two disjoint sets of assigned variables and let V3 be a set of unassigned variables; let V = V1 ∪ V2 ∪ V3; let A1 = assignment(V1); if there exists (resp. does not exist) a consistent assignment A of V, such that A↓V1 = A1, the procedure call lc-variables(V1, V2, V3) returns success (resp. failure);² in case of success, the result is a consistent assignment of V.

Theorem 3 Let V1 and V2 be two disjoint sets of assigned variables; let v be an unassigned variable; let d be its domain; let V = V1 ∪ V2 ∪ {v}; let

²Let A be an assignment of a subset V of the CSP variables and V' be a subset of V; the notation A↓V' designates the restriction of A to V'.
A1 = assignment(V1); if there exists (resp. does not exist) a consistent assignment A of V, such that A↓V1 = A1, the procedure call lc-variable(V1, V2, v, d) returns success (resp. failure); in case of success, the result is a consistent assignment of V.

Theorem 4 Let V1 and V2 be two disjoint sets of variables; let v be an unassigned variable; let val be one of its possible values; let V = V1 ∪ V2 ∪ {v}; let A1 = assignment(V1); if there exists (resp. does not exist) a consistent assignment A of V, such that A↓V1∪{v} = A1 ∪ {(v, val)}, the procedure call lc-value(V1, V2, v, val) returns success (resp. failure); in case of success, the result is a consistent assignment of V.

Theorem 1 expresses the termination, correctness and completeness properties of the algorithm. Theorems 2, 3 and 4 express the same properties for the procedures lc-variables, lc-variable and lc-value. It is easy to show that Theorem 1 (resp. 2 and 3) is a straightforward consequence of Theorem 2 (resp. 3 and 4). Let us consider the set V23 of the not fixed variables (V23 = V2 ∪ V3 for the procedure lc-variables, V23 = V2 ∪ {v} for the procedures lc-variable and lc-value). It is just as easy to show that, if Theorem 3 (resp. Theorem 4) holds when |V23| < k, then Theorem 2 (resp. Theorem 3) holds under the same condition. Let us now use an induction on the cardinal of V23 to prove Theorems 2, 3 and 4.

Let us assume that |V23| = 1 and let us prove Theorem 4 in this case. Let us consider a procedure call lc-value(V1, ∅, v, val):
- let us assume that there exists a consistent assignment A of V, such that A↓V1∪{v} = A1 ∪ {(v, val)}; since V = V1 ∪ {v}, A1 ∪ {(v, val)} and A12 ∪ {(v, val)} are equal and consistent and the procedure returns success; the resulting assignment A1 ∪ {(v, val)} of V is consistent;
- let us now assume that there exists no consistent assignment A of V, such that A↓V1∪{v} = A1 ∪ {(v, val)}; since V = V1 ∪ {v}, A1 ∪ {(v, val)} is inconsistent and the procedure returns failure.

Theorem 4, and consequently Theorems 3 and 2, are proven in this particular case.

Let us assume that Theorems 2, 3 and 4 hold when |V23| < k and let us prove that they hold when |V23| = k. Let us first consider Theorem 4 and a procedure call lc-value(V1, V2, v, val), with |V2| = k - 1. Let us note that, when the procedure lc-variables is recursively called, its arguments satisfy the following relations: V2' ∪ V3' = V2 (|V23'| = k - 1) and V1' ∪ V2' ∪ V3' = V1 ∪ V2 ∪ {v} = V. This allows us to use the induction assumption:
- let us assume that there exists a consistent assignment A of V, such that A↓V1∪{v} = A1 ∪ {(v, val)}; since A1 ∪ {(v, val)} is consistent, the procedure does not immediately return failure; either A12 ∪ {(v, val)} is consistent and the procedure immediately returns success, with a consistent assignment of V, or it is not and:
  - there exists a non-empty subset V3 of V2 such that A123 ∪ {(v, val)} is consistent: for example, V2;
  - whatever the set chosen for V3, the call to lc-variables returns success with a consistent assignment of V, according to the induction assumption;
- let us now assume that there exists no consistent assignment A of V, such that A↓V1∪{v} = A1 ∪ {(v, val)}; since A12 ∪ {(v, val)} is inconsistent, the procedure does not immediately return success; either A1 ∪ {(v, val)} is inconsistent and the procedure immediately returns failure, or it is not and:
  - there exists a non-empty subset V3 of V2 such that A123 ∪ {(v, val)} is consistent: for example, V2;
  - whatever the set chosen for V3, the call to lc-variables returns failure, according to the induction assumption.

Theorem 4, and consequently Theorems 3 and 2, are proven when |V23| = k. They are therefore proven whatever the cardinal of V23. That allows us to conclude that Theorem 1 is proven, i.e., that the algorithm described above terminates, is correct and complete.

Practical use

From a practical point of view, the problem is now to choose a set V3 that is as small as possible, in order to reduce the number of variables that need to be unassigned and subsequently reassigned. In the general case of n-ary CSPs, a simple method consists of choosing one variable to be unassigned for each constraint which is unsatisfied by the assignment A12 ∪ {(v, val)}. The resulting assignment A123 ∪ {(v, val)} is consistent, since all the previously unsatisfied constraints are no longer assigned, but we have no guarantee that the resulting set V3 is one of the smallest ones. Note that this does not modify the termination, correctness and completeness properties of the algorithm. It may only alter its results in terms of efficiency and stability. We did not compare the cost of searching for one of the smallest sets of variables to be unassigned with the resulting saving.

In the particular case of binary CSPs, a simpler method consists of unassigning each variable whose assignment is inconsistent with (v, val). The resulting set V3 is the smallest one.

It is important to note that this algorithm is able to solve any CSP, either starting from an empty assignment (from scratch), or starting from any previous assignment. The description above (see Algorithm) corresponds to the first situation. In the second one, if A is the starting assignment, then the first step consists of producing a consistent assignment A' that is included in A and as large as possible. The method presented above can be used. If V2 (resp. V3) is the resulting set of assigned (resp. unassigned) variables, the CSP can be solved using the procedure call lc-variables(∅, V2, V3) (no fixed variable).
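The algorithm and the reuse step just described can be condensed into a runnable sketch for binary CSPs. This is our own illustrative reconstruction, not the authors' code: `conflict(x, a, y, b)` is an assumed interface telling whether x = a together with y = b violates some constraint; the sets V1, V2 and V3 become `fixed`, `flexible` and `todo`; V3 is chosen as in the binary case (the flexible variables in conflict with (v, val)); and a simple greedy pass keeps a consistent subset of any starting assignment, which is one way, not necessarily the largest, of producing A'.

```python
def lc_solve(variables, domains, conflict, start=None):
    """Solve a binary CSP by local changes, optionally reusing `start`."""
    A = {}
    for var, val in (start or {}).items():   # greedy consistent subset A'
        if all(not conflict(var, val, u, b) for u, b in A.items()):
            A[var] = val
    todo = {v for v in variables if v not in A}
    ok = lc_vars(set(), set(A), todo, A, domains, conflict)
    return dict(A) if ok else None

def lc_vars(fixed, flexible, todo, A, domains, conflict):
    """Extend the consistent assignment of fixed | flexible to todo."""
    if not todo:
        return True
    v = min(todo)                            # any variable choice works here
    if not lc_var(fixed, flexible, v, list(domains[v]), A, domains, conflict):
        return False
    return lc_vars(fixed, flexible | {v}, todo - {v}, A, domains, conflict)

def lc_var(fixed, flexible, v, d, A, domains, conflict):
    """Try the values of d for v, saving/restoring flexible assignments."""
    for val in d:
        saved = {u: A[u] for u in flexible}  # save-assignments(V2)
        A[v] = val
        if lc_val(fixed, flexible, v, val, A, domains, conflict):
            return True
        del A[v]                             # unassign-variable(v)
        A.update(saved)                      # restore-assignments(V2)
    return False

def lc_val(fixed, flexible, v, val, A, domains, conflict):
    """Keep (v, val): unassign conflicting flexible variables, reassign them."""
    if any(conflict(v, val, u, A[u]) for u in fixed):
        return False                         # A1 with (v, val) is inconsistent
    v3 = {u for u in flexible if conflict(v, val, u, A[u])}
    if not v3:
        return True                          # A12 with (v, val) is consistent
    for u in v3:
        del A[u]                             # unassign-variables(V3)
    return lc_vars(fixed | {v}, flexible - v3, v3, A, domains, conflict)
```

Termination mirrors the proof above: each nested `lc_val` call strictly grows the fixed set, so the nesting depth is bounded by the number of variables.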
Comparisons and improvements

The resulting algorithm is related to the backjumping (Dechter 1990; Prosser 1993), intelligent backtracking (Bruynooghe 1981), dynamic backtracking (Ginsberg 1993) and heuristic repair (Minton et al. 1992) algorithms, but is nevertheless different from each of them. Like the first one, it avoids useless backtracking on choices which are not involved in the current conflict. Like the following two, it avoids, when backtracking, undoing choices which are not involved in the current conflict. Like the last one, it allows the search to be started from any previous assignment. But backjumping, intelligent and dynamic backtracking are not built for dealing with dynamic CSPs, and heuristic repair uses the usual backtracking mechanism. Finally, local changes combines the advantages of an efficient backtracking mechanism with an ability to start the search from any previous assignment.

Moreover, it can be improved, without any problem, by using any filtering or learning method, such as forward-checking or nogood-recording (Schiex & Verfaillie 1993). The only difference is the following: for backtrack, forward-checking and nogood-recording are applied from the assigned variables; for local changes, as for heuristic repair, they are applied from the assigned and fixed variables. Note that the combination of local changes and nogood-recording is an example of solution and reasoning reuse.

Experiments

In order to provide useful comparisons, eight algorithms have been implemented on the basis of the following four basic algorithms: backtrack (bt), dynamic backtracking (dbt), heuristic repair (hrp) and local changes (lc), using conflict directed backjumping (cbj) and backward (bc) or forward-checking (fc): bt-cbj-bc, bt-cbj-fc, dbt-bc, dbt-fc, hrp-cbj-bc, hrp-cbj-fc, lc-bc and lc-fc. Each time there is no ambiguity, we will use the abbreviations bt, dbt, hrp and lc to designate these algorithms.
Note that dbt and lc can not be improved by cbj, because they already use a more powerful backtracking mechanism.

Heuristics

For each algorithm, we used the following simple yet efficient heuristics:
- choice of the variable to be assigned, unassigned or reassigned: choose the variable whose domain is the smallest one;
- choice of the value:
  - for bt and dbt: first use the value the variable had in the previous solution, if it exists;
  - for hrp and lc: choose the value which minimizes the number of unsatisfied constraints.

In the case of bt, dbt and hrp, the previous solution is recorded, if it exists. In the case of bt and dbt, it is used in the framework of the choice of the value. In the case of hrp, it is used as a starting assignment. In the case of lc, the greatest consistent assignment previously found (a solution if the previous problem is consistent) is also recorded and used as a starting assignment. For the four algorithms, two trivial cases are solved without any search: the previous CSP is consistent (resp. inconsistent) and there is no added (resp. removed) constraint.

CSP generation

Following (Hubbe & Freuder 1992), we randomly generated a set of problems where:
- the number nv of variables is equal to 15;
- for each variable, the cardinality of its domain is randomly generated between 6 and 16;
- all the constraints are binary;
- the connectivity con of the constraint graph, i.e., the ratio between the number of constraints and the number of possible constraints, takes five possible values: 0.2, 0.4, 0.6, 0.8 and 1;
- the mean tightness mt of the constraints, i.e., the mean ratio between the number of forbidden pairs of values and the number of possible pairs of values, takes five possible values: 0.1, 0.3, 0.5, 0.7 and 0.9;
- for a given value of mt, the tightness of each constraint is randomly generated between mt - 0.1 and mt + 0.1;
- the size ch of the changes, i.e., the ratio between the number of additions or removals and the number of constraints, takes six possible values: 0.01, 0.02, 0.04, 0.08, 0.16 and 0.32.

For each of the 25 possible pairs (con, mt), 5 problems were generated. For each of the 125 resulting initial problems and for each of the 6 possible values of ch, a sequence of 10 changes was generated, with the same probability for additions and removals.

Measures

In terms of efficiency, the three usual measures were performed: number of nodes, number of constraint checks and cpu time. In terms of stability, the distance between two successive solutions, i.e., the number of variables which are differently assigned in both solutions, was measured each time both exist.

[Two tables of results omitted: mean number of constraint checks for bt, hrp, dbt and lc, with backward checking (first table) and forward checking (second table); nv = 15, 6 <= dom <= 16, ch = 0.04; rows con in {0.2, 0.4, 0.6, 0.8, 1}, columns mt in {0.1, 0.3, 0.5, 0.7, 0.9}.]

Results

The two tables above show the mean number of constraint checks, when solving dynamic problems with changes of intermediate size (ch = 0.04). The first one shows the results obtained when using backward-checking: bt-cbj-bc, dbt-bc, hrp-cbj-bc and lc-bc. The second one shows the same results obtained when using forward-checking: bt-cbj-fc, dbt-fc, hrp-cbj-fc and lc-fc. In the top left corner of each cell, a letter c (resp. i) means that all the problems are consistent (resp. inconsistent). Two letters (ci) mean that some of them are consistent and the others inconsistent. The less (resp. more) constrained problems, with small (resp. large) values for con and mt, i.e., with few loose (resp. many tight) constraints, are in the top left (resp. bottom right) of each table. Each number is the mean value of a set of 50 results (5 * 10 dynamic problems). In each cell, the algorithm(s) which provides the best result is (are) pointed out in bold.

Analysis

As it has been previously observed (Cheeseman, Kanefsky, & Taylor 1991), the hardest problems are neither the least constrained (solution quickly found), nor the most constrained (inconsistency quickly established), but the intermediate ones, for which it is difficult to establish the consistency or the inconsistency.
If we consider the first table, with backward-checking, we can see that:
- hrp is efficient on the least constrained problems, but inefficient and sometimes very inefficient on the others;
- dbt is always better than bt and the best one on the intermediate problems;
- lc is almost always better than hrp and the best one, both on the least constrained problems and the most constrained ones; it is better on loosely connected problems than on the others; it is nevertheless inefficient on intermediate strongly connected problems.

If we consider the second table, with forward-checking, the previous lessons must be modified, because forward-checking benefits bt and hrp more than dbt and lc (the number of constraint checks is roughly divided by 12 for bt and hrp, by 3 for dbt and lc):
- hrp becomes the best one on the least constrained problems;
- bt and dbt are the best ones on the intermediate ones;
- lc remains the best one on the most constrained ones.

Note that these results might be different in case of n-ary constraints, on which forward-checking is less efficient. We do not show any results related to the cpu time because number of constraint checks and cpu time are strongly correlated, in spite of a little overhead for hrp and lc (around 850 constraint checks per second for bt and dbt, around 650 for hrp and lc; this aspect depends widely on the implementation). More surprising, the four algorithms provide very similar results in terms of distance between successive solutions. It seems to be the result of the mechanisms of each algorithm and of the heuristics used to choose the value. Note finally that, although hrp and lc provide better results with small changes than with large ones, the results obtained with other change sizes do not modify the previous lessons.
Conclusion

Although other experiments are needed to confirm it, we believe that the proposed method may be very convenient for solving large problems, involving binary and n-ary constraints, often globally underconstrained and subject to frequent and relatively small changes, such as many real scheduling problems.

Acknowledgments

This work was supported both by the French Space Agency (CNES) and the French Ministry of Defence (DGA-DRET) and was done both at ONERA-CERT (France) and at the University of New Hampshire (USA). We are indebted to P. Hubbe, R. Turner and J. Weiner for helping us to improve this paper and to E. Freuder and R. Wallace for useful discussions.

References

Badie, C., and Verfaillie, G. 1989. OSCAR ou Comment Planifier Intelligemment des Missions Spatiales. In Proc. of the 9th International Avignon Workshop.
Bellicha, A. 1993. Maintenance of Solution in a Dynamic Constraint Satisfaction Problem. In Proc. of Applications of Artificial Intelligence in Engineering VIII, 261-274.
Bruynooghe, M. 1981. Solving Combinatorial Search Problems by Intelligent Backtracking. Information Processing Letters 12(1):36-39.
Cheeseman, P.; Kanefsky, B.; and Taylor, W. 1991. Where the Really Hard Problems Are. In Proc. of the 12th IJCAI, 294-299.
de Kleer, J. 1989. A Comparison of ATMS and CSP Techniques. In Proc. of the 11th IJCAI, 290-296.
Dechter, R., and Dechter, A. 1988. Belief Maintenance in Dynamic Constraint Networks. In Proc. of AAAI-88, 37-42.
Dechter, R. 1990. Enhancement Schemes for Constraint Processing: Backjumping, Learning and Cutset Decomposition. Artificial Intelligence 41(3):273-312.
Ghedira, K. 1993. MASC : une Approche Multi-Agent des Problèmes de Satisfaction de Contraintes. Thèse de doctorat, ENSAE, Toulouse, France.
Ginsberg, M. 1993. Dynamic Backtracking. Journal of Artificial Intelligence Research 1:25-46.
Hentenryck, P. V., and Provost, T. L. 1991. Incremental Search in Constraint Logic Programming.
New Generation Computing 9:257-275.
Hubbe, P., and Freuder, E. 1992. An Efficient Cross-Product Representation of the Constraint Satisfaction Problem Search Space. In Proc. of AAAI-92, 421-427.
Minton, S.; Johnston, M.; Philips, A.; and Laird, P. 1992. Minimizing Conflicts: a Heuristic Repair Method for Constraint Satisfaction and Scheduling Problems. Artificial Intelligence 58:161-205.
Prosser, P. 1993. Hybrid Algorithms for the Constraint Satisfaction Problem. Computational Intelligence 9(3):268-299.
Schiex, T., and Verfaillie, G. 1993. Nogood Recording for Static and Dynamic CSP. In Proc. of the 5th IEEE International Conference on Tools with Artificial Intelligence.
Selman, B.; Levesque, H.; and Mitchell, D. 1992. A New Method for Solving Hard Satisfiability Problems. In Proc. of AAAI-92, 440-446.
Weak-commitment Search for Solving Constraint Satisfaction Problems

Makoto Yokoo
NTT Communication Science Laboratories
2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02 Japan
yokoo@cslab.kecl.ntt.jp

Abstract

The min-conflict heuristic (Minton et al. 1992) has been introduced into backtracking algorithms and iterative improvement algorithms as a powerful heuristic for solving constraint satisfaction problems. Backtracking algorithms become inefficient when a bad partial solution is constructed, since an exhaustive search is required for revising the bad decision. On the other hand, iterative improvement algorithms do not construct a consistent partial solution and can revise a bad decision without exhaustive search. However, most of the powerful heuristics obtained through the long history of constraint satisfaction studies (e.g., forward checking (Haralick & Elliott 1980)) presuppose the existence of a consistent partial solution. Therefore, these heuristics can not be applied to iterative improvement algorithms. Furthermore, these algorithms are not theoretically complete. In this paper, a new algorithm called weak-commitment search which utilizes the min-conflict heuristic is developed. This algorithm removes the drawbacks of backtracking algorithms and iterative improvement algorithms, i.e., the algorithm can revise bad decisions without exhaustive search, the completeness of the algorithm is guaranteed, and various heuristics can be introduced since a consistent partial solution is constructed. The experimental results on various example problems show that this algorithm is 3 to 10 times more efficient than other algorithms.

Introduction

A Constraint Satisfaction Problem (CSP) is a general framework that can formalize various problems in AI, and many theoretical and experimental studies have been performed (Mackworth 1992). Recently, the min-conflict heuristic (Minton et al.
1992) has been identified as a powerful heuristic for finding one solution of a CSP. This heuristic can be described as follows: when deciding a variable value, choose the value that minimizes the number of constraint violations with other variables. This heuristic has been applied to backtracking algorithms (Minton et al. 1992) and iterative improvement algorithms (Minton et al. 1992; Morris 1993).

In backtracking algorithms, a consistent partial solution is constructed for a subset of variables, and this partial solution is extended by adding variables one by one until a complete solution is found. In the backtracking algorithm that incorporates the min-conflict heuristic (min-conflict backtracking), all variables are given tentative initial values. When a variable is added to the partial solution, its tentative initial value is revised so that the new value satisfies all constraints with the partial solution, and satisfies as many constraints as possible with variables that are not included in the partial solution.

The drawback of backtracking algorithms is as follows.
- One mistake in the value selection is fatal. In min-conflict backtracking, the partial solution constructed during the search process will not be revised unless it is proven that there exists no complete solution subsuming the partial solution. If the algorithm makes a bad selection of a variable value, the algorithm must perform an exhaustive search for the partial solution in order to revise the bad decision. When the problem becomes very large, doing such an exhaustive search is virtually impossible.

On the other hand, iterative improvement algorithms (Minton et al. 1992; Morris 1993; Selman, Levesque, & Mitchell 1992) do not construct a consistent partial solution. In these algorithms, a flawed solution containing some constraint violations is revised by local changes until all constraints are satisfied.
The min-conflict heuristic is used as the basis for the local changes. In these algorithms, the value of one variable can be changed repeatedly without any exhaustive search. Therefore, one mistake in the value selection is not fatal and can be revised easily. However, the iterative improvement algorithms have the following drawbacks. The completeness of the algorithms can not be guaranteed. We say that an algorithm is complete if the algorithm is guaranteed to find one solution eventually when solutions exist; and when there exists no solution, the algorithm is guaranteed to find out that fact and terminate. (Advances in Backtracking 313. From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.) One exception is the fill algorithm (Morris 1993), which is guaranteed to find a solution if there exists one. However, this algorithm is far less efficient than a similar incomplete algorithm called the breakout algorithm (Morris 1993). Also, the fill algorithm will not terminate when there exists no solution. Introducing other heuristics is difficult. The completeness of the algorithms may have only theoretical importance when solving large-scale problems. A more practical drawback is that we can not apply most of the powerful heuristics obtained through the long history of constraint satisfaction studies (e.g., forward checking (Haralick & Elliot 1980)) to iterative improvement algorithms, since these heuristics presuppose the existence of a consistent partial solution. In this paper, a new algorithm called weak-commitment search which utilizes the min-conflict heuristic is developed. In this algorithm, all variables are given tentative initial values, and variables are added one by one to the consistent partial solution as in the min-conflict backtracking.
This algorithm constructs a consistent partial solution, but commits to the partial solution weakly, in contrast to backtracking algorithms which never abandon a partial solution unless it turns out to be hopeless. Namely, this algorithm commits to the partial solution as long as it can be extended. However, when there exists no value for one variable that satisfies all constraints with the partial solution, this algorithm abandons the whole partial solution, and starts constructing a new partial solution from scratch, using the current value assignment as new tentative initial values. This algorithm removes the drawbacks of backtracking algorithms and iterative improvement algorithms, i.e., the algorithm can revise bad decisions without exhaustive search, the completeness of the algorithm is guaranteed, and various heuristics can be introduced since a consistent partial solution is constructed. In the following, we give a brief description of the CSP, and describe the weak-commitment search algorithm. Then, we show several empirical results indicating the efficiency of this algorithm. Furthermore, we examine the algorithm complexity and show a probabilistic model of the min-conflict backtracking and the weak-commitment search. Constraint Satisfaction A constraint satisfaction problem can be described as follows. There exist n variables x1, x2, ..., xn, each of which takes its value from a finite, discrete domain D1, D2, ..., Dn, respectively. There also exists a set of constraints. In this paper, we assume that a constraint is represented as a nogood, i.e., a combination of variable values that is prohibited. We represent the fact that variable xi's value is di as a tuple (xi, di). A constraint {(xi, di), (xj, dj)} means that the combination of xi = di and xj = dj is prohibited.
A solution of a CSP is the value assignment of all variables that satisfies all constraints, i.e., the assignment that is not a superset of any nogood. Weak-commitment Search Algorithm The weak-commitment search algorithm is illustrated in Figure 1. Initially, vars-left is set to {(x1, d1), (x2, d2), ..., (xn, dn)}, where di is the tentative initial value of xi. Also, partial-solution is assigned an empty set. This algorithm moves variables from vars-left to partial-solution one by one. The essential difference between this algorithm and the min-conflict backtracking is the shaded part in Figure 1. In min-conflict backtracking, backtracking is performed at this part and the most-recently added variable is removed from the partial solution. In weak-commitment search, the whole partial solution is abandoned, i.e., all elements of partial-solution are moved to vars-left. Then, the search process is restarted using the current value assignment as new tentative initial values. It must be noted that not all variable values in the partial solution will be revised again. Since the algorithm revises only the constraint-violating variables, the variables that already satisfy all constraints will not be revised again. This algorithm records the abandoned partial solutions in nogoods as new constraints, and avoids creating the same partial solution that has been created and abandoned before. Therefore, the completeness of the algorithm (always finds a solution if one exists, and terminates if no solution exists) is guaranteed. In Ginsberg & Harvey (1990), a backtrack-based algorithm called the iterative broadening algorithm is presented. This algorithm avoids strong commitments and revises bad decisions without exhaustive search. The weak-commitment algorithm is similar to the iterative broadening algorithm where the search width is set to 1.
However, the iterative broadening algorithm iteratively widens the search width if the trial employing the initial width fails. On the other hand, the weak-commitment search algorithm restarts the search process without changing the search width. We show an example of algorithm execution using the well-known n-queens problem, placing n queens on an n x n chess board so that these queens will not threaten each other. In this example, we use the 4-queens problem. This problem can be formalized as a CSP where each variable represents the position of a queen in each row, and the domain of a variable is {1,2,3,4}. (Footnote: In general, a constraint is represented as an allowed combination of variable values. In this paper, we use the opposite representation since this representation is convenient for treating an abandoned partial solution as a new constraint. This choice of representation is inessential and does not affect the evaluation results in this paper.) Figure 1: Weak-commitment search algorithm. vars-left <- {(x1,d1), (x2,d2), ..., (xn,dn)}; partial-solution <- {}; nogoods <- set of all constraints; repeat: (x,d) <- a variable and value pair in vars-left that does not satisfy some constraint; values <- the set of x's values that satisfy all constraints with partial-solution; remove (x,d) from vars-left; v <- the value within values that minimizes the number of constraint violations with vars-left; add (x,v) to partial-solution. Figure 2: Example of algorithm execution. The initial state is illustrated in Figure 2(a). The algorithm first revises the position of the first queen (Figure 2(b)), then revises the position of the fourth queen (Figure 2(c)). A queen whose position is revised is added to partial-solution. We represent a queen in partial-solution as a filled circle. In Figure 2(c), there exists no consistent position with partial-solution for the third queen. Therefore, the whole partial-solution is abandoned (Figure 2(d)).
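The Figure 1 pseudocode can be fleshed out as a runnable Python sketch. This is an assumption-laden reconstruction, not the authors' code: the abandon-and-restart step (the shaded part of Figure 1) is restored from the text, constraints are given as a binary violation test, and nogoods are stored as frozensets of variable-value pairs:

```python
# Runnable sketch of Figure 1 (an illustrative reconstruction, not the
# authors' code). Constraints are a binary violation test; nogoods
# (abandoned partial solutions) are frozensets of (variable, value) pairs.

def weak_commitment(domains, init, violates, max_steps=10000):
    """violates(v1, a1, v2, a2) -> True iff that pair of assignments is prohibited."""
    vars_left = dict(init)          # variables still carrying tentative values
    partial = {}                    # the consistent partial solution
    nogoods = set()

    def conflicts(var, val, against):
        return sum(1 for o, ov in against.items()
                   if o != var and violates(var, val, o, ov))

    for _ in range(max_steps):
        full = {**vars_left, **partial}
        bad = [v for v in vars_left if conflicts(v, full[v], full)]
        if not bad:
            return full             # no constraint violated: a solution
        x = bad[0]
        ok = [v for v in domains[x]
              if conflicts(x, v, partial) == 0
              and not any(ng <= set(partial.items()) | {(x, v)}
                          for ng in nogoods)]
        if not ok:                  # abandon the WHOLE partial solution
            nogoods.add(frozenset(partial.items()))
            vars_left.update(partial)
            partial.clear()
            continue
        del vars_left[x]            # min-conflict choice against vars-left
        partial[x] = min(ok, key=lambda v: conflicts(x, v, vars_left))
    return None

# 4-queens: rows as variables, columns 0..3 as values, all queens starting
# in column 0 (a different initial state than Figure 2's).
def queens_violates(r1, c1, r2, c2):
    return c1 == c2 or abs(c1 - c2) == abs(r1 - r2)

sol = weak_commitment({r: range(4) for r in range(4)},
                      {r: 0 for r in range(4)}, queens_violates)
```

With this particular initial state the sketch happens to build the partial solution row by row and reach a solution without any restart; the paper's example in Figure 2, starting from different initial values, abandons the partial solution once.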
The algorithm revises the position of the first queen again (Figure 2(e)). Consequently, all constraints are satisfied. Evaluations In this section, we compare the weak-commitment search, the min-conflict backtracking, and an iterative improvement algorithm by experiments on typical examples of CSPs (the n-queens, graph-coloring, and 3-SAT problems). We use the breakout algorithm (Morris 1993) as the representative for iterative improvement algorithms. This algorithm has a notable feature in that it does not stop at local minima, and has been shown to be more efficient than other iterative improvement algorithms (Morris 1993). However, this algorithm is not complete, i.e., the algorithm can fall into an infinite processing loop. We measure the number of required steps and the number of consistency checks. Each change of one variable value, each backtracking, and each restarting is counted as one step. Also, one consistency check represents the check of one combination of variable values among which a constraint exists. The number of consistency checks is widely used as a machine-independent measure for constraint satisfaction algorithms. For all three algorithms, in order to reduce unnecessary consistency checks, the result of consistency checks at the previous step is recorded and only the difference is calculated in each step. In order to terminate the experiments in a reasonable amount of time, the maximum number of steps is limited to 5000, and we interrupt any trial that exceeds this limit. For an interrupted trial, we count the number of required steps as 5000, and use the number of consistency checks performed until the interruption for the evaluation. n-queens The first example problem is the n-queens problem described in the previous section. We show the ratio of successful trials (trials finished within the limit), the number of required steps, and the number of consistency checks for n = 10, 50, 100 in Table 1.
We run 100 trials with different initial values and show the average. (Footnote: The number of checks for newly added constraints (abandoned partial solutions) is also included.) Table 1: Comparison on n-queens. Table 2: Required steps for trials with backtracking/restarting. These initial values are generated by the greedy method described in Minton et al. (1992). As shown in Table 1, for all cases, the weak-commitment search is more efficient than the min-conflict backtracking and the breakout algorithm. We can see that the breakout algorithm does a lot more consistency checks for each step compared with the weak-commitment search. This fact can be explained as follows. When choosing a variable to change its value, the weak-commitment search (and the min-conflict backtracking) can choose any of the constraint-violating variables. On the other hand, the breakout algorithm must choose a variable so that the number of constraint violations can be reduced by changing its value. Therefore, in the worst case (when the current assignment is a local minimum), the breakout algorithm has to check all the values for all constraint-violating variables. For the trials without backtracking/restarting, the behaviors of the weak-commitment search and the min-conflict backtracking are exactly the same. We show the ratio of trials with backtracking/restarting, and the average number of steps for these trials in Table 2. We can see that the numbers of required steps for the min-conflict backtracking in trials with backtracking are very large and dominate the average of all trials. (Footnote: In Minton et al. (1992), it is reported that the min-conflict backtracking can solve the 100-queens problem in around 25 steps. In our experiment, there exists one trial that exceeds 5000 steps and the result of this trial becomes the dominant factor in the average. The average except this trial is almost identical to the result reported in Minton et al. (1992).)
For n = 1000, the results for the min-conflict backtracking and weak-commitment search are exactly the same, and almost identical to the result reported in Minton et al. (1992). (Footnote: If the current assignment is not a local minimum, the breakout algorithm does not have to check all variables.) Graph-coloring The graph-coloring problem involves painting nodes in a graph with k different colors so that any two nodes connected by an arc do not have the same color. We randomly generate a problem with n nodes and m arcs by the method described in Minton et al. (1992), so that the graph is connected and the problem has a solution. We evaluate the problem for n = 120, 180, 240, where m = 2n and k = 3. This parameter setting corresponds to the "sparse" problems for which Minton et al. (1992) report poor performance of the min-conflict heuristic. We generate 10 different problems, and for each problem, 10 trials with different initial values are performed (100 trials in all). As in the n-queens problem, the initial values are set by the greedy method. We introduce two kinds of heuristics (forward checking and the first-fail principle (Haralick & Elliot 1980)) into the min-conflict backtracking and the weak-commitment search, i.e., for each variable, we keep the list of values consistent with the partial solution, and when selecting a variable to be added to the partial solution, we select the variable that has the least number of consistent values. Also, the variable that has only one consistent value is included in the partial solution immediately. Furthermore, before including a variable into the partial solution, we check whether each of the remaining variables (variables in vars-left) has at least one consistent value with the partial solution, and avoid selecting a value that causes immediate failure. Table 3 shows evaluation results for the three algorithms. Although Minton et al.
(1992) report poor performance for the min-conflict backtracking for these problems, by introducing the two heuristics (forward checking and the first-fail principle), the performance of the min-conflict backtracking becomes relatively good in our evaluation. However, there exist several trials in which a mistake in the value selection becomes fatal, and the min-conflict backtracking fails to find a solution within the limit. Therefore, the weak-commitment search is more efficient than the min-conflict backtracking. For these problems, the forward checking and first-fail principle are very efficient and the weak-commitment search is about 10 times more efficient than the breakout algorithm. We can see that the capability of accommodating such powerful heuristics is a great advantage of the weak-commitment search over iterative improvement algorithms. Table 3: Comparison on graph-coloring. Table 4: Comparison on 3-SAT problem. 3-SAT Problem For the third example problem, we use the 3-SAT problem, which is commonly used as a benchmark for iterative improvement algorithms. This problem is to assign truth values to n boolean variables that satisfy constraints represented as clauses. Each clause consists of three variables. The number of clauses divided by the number of variables is called the clause density, and the value 4.3 has been identified as the critical value that produces particularly difficult problems (Mitchell, Selman, & Levesque 1992). Table 4 shows results of changing the number of variables n by setting the clause density to 4.3. We use the same method described in Morris (1993) to generate a random problem that has a solution. We generate 10 different problems, and for each problem, 10 trials with different initial values are performed (100 trials in all).
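The paper relies on Morris (1993) for instance generation; the sketch below shows one common way to build a solvable random 3-SAT instance by planting a hidden satisfying assignment and keeping only clauses it satisfies. This generic method is an assumption for illustration, not necessarily Morris's exact procedure:

```python
# Sketch of generating a solvable random 3-SAT instance by planting a hidden
# satisfying assignment (an assumed generic method, not necessarily the
# procedure of Morris (1993)).
import random

def random_3sat_with_solution(n, density=4.3, seed=0):
    """Return (clauses, hidden); clauses are tuples of signed 1-based literals."""
    rng = random.Random(seed)
    hidden = [rng.choice([True, False]) for _ in range(n)]
    m = int(round(density * n))
    clauses = []
    while len(clauses) < m:
        vs = rng.sample(range(n), 3)             # three distinct variables
        clause = tuple(v + 1 if rng.random() < 0.5 else -(v + 1) for v in vs)
        # keep the clause only if the hidden assignment satisfies it
        if any((lit > 0) == hidden[abs(lit) - 1] for lit in clause):
            clauses.append(clause)
    return clauses, hidden

clauses, hidden = random_3sat_with_solution(20)  # density 4.3 -> 86 clauses
```

Rejection sampling of clauses slightly biases the distribution toward clauses with satisfied literals, which is the usual caveat of planted-solution generators.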
As in the case of the graph-coloring problems, we introduce the two heuristics into the min-conflict backtracking and the weak-commitment search. As shown in Table 4, the min-conflict backtracking is very inefficient for this problem. On the other hand, the weak-commitment search is around 10 times more efficient than the breakout algorithm for larger n. For this problem, the effect of the heuristics is not powerful enough to completely avoid bad decisions. Such less powerful heuristics are of little use to the min-conflict backtracking. On the other hand, they are useful for the weak-commitment search algorithm since the variable values are iteratively revised by restarting. (Footnote: The initial values are set by the greedy method as in the case of the other problems.) Discussions Algorithm Complexity The worst-case time complexity of the weak-commitment search becomes exponential in the number of variables n. This result seems inevitable since constraint satisfaction is NP-complete in general. The space complexity of the weak-commitment search is determined by the number of nogoods (constraints) newly added to nogoods, i.e., the number of restartings. In the worst case, this is also exponential in n. On the other hand, the space complexity of the backtracking algorithm is linear in n. This result seems inevitable since the weak-commitment search changes the search order flexibly while guaranteeing the completeness of the algorithm. This is also the case for the fill algorithm described in Morris (1993), whose worst-case space complexity becomes exponential in n. However, the number of restartings will never exceed the number of required steps. Therefore, we can assume that the space complexity would never become a problem in practice as long as the problem can be solved within a reasonable amount of time. Also, a nogood that is a superset of another nogood is redundant and can be removed from nogoods.
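The redundancy check just mentioned reduces to a subset test once nogoods are represented as sets of variable-value pairs; a minimal sketch (names are illustrative):

```python
# Sketch: with nogoods as frozensets of (variable, value) pairs, a nogood
# containing another nogood is redundant and the check is a subset test.

def prune_redundant(nogoods):
    """Drop every nogood that strictly contains some other recorded nogood."""
    return [ng for ng in nogoods
            if not any(other < ng for other in nogoods)]   # strict subset

nogoods = [frozenset({('x1', 1)}),
           frozenset({('x1', 1), ('x2', 2)}),   # superset of the first: redundant
           frozenset({('x2', 3), ('x3', 1)})]
kept = prune_redundant(nogoods)                 # the redundant nogood is dropped
```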
Furthermore, we can restrict the number of recorded nogoods. In that case, the theoretical completeness can not be guaranteed. However, in practice, the weak-commitment search algorithm can still find a solution for all example problems when the number of recorded nogoods is restricted so that only the 10 most recently found nogoods are recorded. Probabilistic Model In order to show theoretical evidence that the weak-commitment search is more efficient than the min-conflict backtracking, we use a simple probabilistic model as follows. Let us assume that the probability of finding a solution without any backtracking in the min-conflict backtracking is given by the constant value p, regardless of the initial values. Also, let us assume that the expected number of steps for trials without backtracking is given by n_s (n_s <= n), the expected number of steps for trials with backtracking is given by B, and the expected number of steps until the occurrence of the first backtracking is given by n_b (n_b <= n). Then, the expected number of steps for the min-conflict backtracking can be represented as n_s p + B q, where q = 1 - p. On the other hand, in the weak-commitment search, a solution can be found without any restarting with probability p, and the expected number of steps in this case is given by n_s. Also, the probability that a solution can be found after one restarting is given by pq, and the expected number of steps is given by n_s + n_b. In the same way, the probability that a solution can be found after k restartings is given by p q^k, and the expected number of steps is given by n_s + k n_b. This probability distribution of the number of restartings is identical to the well-known geometric distribution, and the expected number of restartings is given by q/p. Therefore, the expected number of steps can be given by n_s + n_b q/p.
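Plugging illustrative numbers into the two expectations shows the gap. The parameter values below are assumptions for the sketch, loosely based on the 50-queens figures discussed next (p = 0.83, B = 1473.4) with the crude bounds n_s = n_b = n = 50:

```python
# Expected steps under the simple probabilistic model; the parameter
# values are illustrative assumptions, not measurements.

def expected_min_conflict(n_s, B, p):
    q = 1 - p
    return n_s * p + B * q              # n_s p + B q

def expected_weak_commitment(n_s, n_b, p):
    q = 1 - p
    return n_s + n_b * q / p            # n_s + n_b q/p (geometric restarts)

mc = expected_min_conflict(50, 1473.4, 0.83)    # ~292 steps
wc = expected_weak_commitment(50, 50, 0.83)     # ~60 steps
```

With these numbers the weak-commitment expectation is roughly a fifth of the min-conflict one, and the sufficient condition p > n/B (0.83 > 0.034) holds comfortably.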
The condition that the weak-commitment search is more efficient than the min-conflict backtracking is n_s p + B q > n_s + n_b q/p. By transforming this formula, we obtain p > n_b/(B - n_s). Since we can assume that B >> n_s, and n_b <= n, we obtain the sufficient condition p > n/B, i.e., if the probability of finding a solution without backtracking is larger than the ratio of the number of variables to the number of steps for the trials with backtracking, the weak-commitment search is more efficient than the min-conflict backtracking. The experimental results in the previous section show that the min-conflict backtracking can find a solution efficiently without backtracking in many trials, but the number of required steps for trials with backtracking becomes very large. Therefore, we can assume that the condition p > n/B is satisfied in many cases. For example, from the experimental results in Table 2, we can assume that B for the 50-queens problem is around 1473.4, and p is around 0.83 (so q is 0.17). Then, n/B is around 0.034. This value is much smaller than the expected value of p (0.83). In reality, the probability of finding a solution without backtracking is affected by the initial values. In the weak-commitment search, when restarting, the current value assignment is used as the new tentative initial values. Therefore, by repeating the restartings, we can expect the value assignment to become close to the final solution; thus, the probability of finding a solution without restartings increases. In such a case, even if the average probability of finding a solution without backtracking is very small and the condition p > n/B is not satisfied, the weak-commitment search can be more efficient than the min-conflict backtracking. For example, although the min-conflict backtracking never finds a solution without backtracking in the 3-SAT problem (Table 4), the weak-commitment search can find a solution.
This is because the value assignment is iteratively improved in the weak-commitment search. Conclusions We have presented the weak-commitment search algorithm for solving CSPs. This algorithm can revise bad decisions without exhaustive search, and the completeness of the algorithm is guaranteed. By experimental results, we have shown that this algorithm is 3 to 10 times more efficient than the breakout algorithm and the min-conflict backtracking. Our future work includes showing the effectiveness of this algorithm in practical application problems, developing a theoretical model to compare the performance of the weak-commitment search and iterative improvement algorithms, and applying the weak-commitment search algorithm to distributed constraint satisfaction problems, in which variables and constraints are distributed among multiple problem-solving agents (Yokoo et al. 1992). Acknowledgments The author wishes to thank Tsukasa Kawaoka, Ryohei Nakano, and Nobuyasu Osato for their support in this work at NTT Laboratories; and Toru Ishida, Kazuhiro Kuwabara, Shigeo Matsubara, and Jun-ichi Akahani for their helpful comments. References Ginsberg, M. L., and Harvey, W. D. 1990. Iterative broadening. In Proceedings of the Eighth National Conference on Artificial Intelligence, 216-220. Haralick, R., and Elliot, G. L. 1980. Increasing tree search efficiency for constraint satisfaction problems. Artificial Intelligence 14:263-313. Mackworth, A. K. 1992. Constraint satisfaction. In S. C. Shapiro, ed., Encyclopedia of Artificial Intelligence. Wiley-Interscience Publication. 285-293. Minton, S.; Johnston, M. D.; Philips, A. B.; and Laird, P. 1992. Minimizing conflicts: a heuristic repair method for constraint satisfaction and scheduling problems. Artificial Intelligence 58:161-205. Mitchell, D.; Selman, B.; and Levesque, H. 1992. Hard and easy distributions of SAT problems. In Proceedings of the Tenth National Conference on Artificial Intelligence, 459-465.
Morris, P. 1993. The breakout method for escaping from local minima. In Proceedings of the Eleventh National Conference on Artificial Intelligence, 40-45. Selman, B.; Levesque, H.; and Mitchell, D. 1992. A new method for solving hard satisfiability problems. In Proceedings of the Tenth National Conference on Artificial Intelligence, 440-446. Yokoo, M.; Durfee, E. H.; Ishida, T.; and Kuwabara, K. 1992. Distributed constraint satisfaction for formalizing distributed problem solving. In Proceedings of the Twelfth IEEE International Conference on Distributed Computing Systems, 614-621. | 1994 | 142
1,478 | Expected Gains from Parallelizing Constraint Solving for Hard Problems Tad Hogg and Colin P. Williams Xerox Palo Alto Research Center 3333 Coyote Hill Road Palo Alto, CA 94304, U.S.A. Hogg@parc.xerox.com, CWilliams@parc.xerox.com Abstract A number of recent studies have examined how the difficulty of various NP-hard problems varies with simple parameters describing their structure. In particular, they have identified parameter values that distinguish regions with many hard problem instances from relatively easier ones. In this paper we continue this work by examining independent parallel search. Specifically, we evaluate the speedup as a function of connectivity and search difficulty for the particular case of graph coloring with a standard heuristic search method. This requires examining the full search cost distribution rather than just the more commonly reported mean and variance. We also show similar behavior for a single-agent search strategy in which the search is restarted whenever it fails to complete within a specified cost bound. Introduction Several recent studies have related the structure of constraint satisfaction problems to the difficulty of solving them with search [Cheeseman et al., 1991, Mitchell et al., 1992, Williams and Hogg, 1992, Crawford and Auton, 1993, Gent and Walsh, 1993, Williams and Hogg, 1993]. Particular values of simple parameters, describing the problem structure, that lead to hard problem instances, on average, were identified. These values are also associated with high variance in the solution cost for different problem instances, and for a single instance with respect to different search methods or a single nondeterministic method with, e.g., different initial conditions or different tie-breaking choices made when the search heuristic ranks some choices equally. Structurally, these hard problems are characterized by many large partial solutions, which prevent early pruning by many types of heuristics.
Can these observations be exploited in practical search algorithms? One possibility is to run several searches independently in parallel, stopping when any process first finds a solution (or determines there are none) [Fishburn, 1984, Helmbold and McDowell, 1989, Pramanick and Kuhl, 1991, Kornfeld, 1981, Imai et al., 1979, Rao and Kumar, 1992, Mehrotra and Gehringer, 1985, Ertel, 1992]. Since the benefit of this approach relies on variation in the individual methods employed, the high variance seen for the hard problems suggests it should be particularly applicable for them [Cheeseman et al., 1991, Rao and Kumar, 1992]. In some cases, this approach could be useful even if multiplexed on a serial machine [Janakiram et al., 1987]. This method is particularly appealing since it requires no communication between the different processes and is very easy to implement. However, the precise benefit to be obtained from independent parallel searches is determined by the nature of the full distribution of search cost, not just by the variance. In this paper, we present experimental results on the expected speedup from such parallel searches for a particular well-studied example, graph coloring. By evaluating the speedup obtainable from graphs of different connectivities we investigate consequences of different structural properties for the benefit of parallelization, even when such problems are equally hard for a serial search. In the remainder of the paper we describe the graph coloring search problem and show the types of cost distributions that arise for a heuristic search method. We then show how the speedup from parallelization is determined from the distribution and present empirically observed speedups for a variety of graphs. We also present a similar analysis and experiments for a single-agent search strategy.
Graph Coloring Problems The graph coloring problem consists of a graph, a specified number of colors, and the requirement to find a color for each node in the graph such that no pair of adjacent nodes (i.e., nodes linked by an edge in the graph) have the same color. Graph coloring has received considerable attention and a number of search methods have been developed [Minton et al., 1990, Johnson et al., 1991, Selman et al., 1992]. This is a well-known NP-complete problem whose solution cost grows exponentially in the worst case as the number of nodes in the graph increases. For this problem, the average degree of the graph γ (i.e., the average number of edges coming from a node in the graph) distinguishes relatively easy from harder problems, on average. (Techniques 331. From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.) In this paper, we focus on the case of 3-coloring (i.e., when 3 different colors are available). In our experiments we used a complete, depth-first backtracking search based on the Brelaz heuristic [Johnson et al., 1991] which assigns the most constrained nodes first (i.e., those with the most distinctly colored neighbors), breaking ties by choosing nodes with the most uncolored neighbors (with any remaining ties broken randomly). For each node, the smallest color consistent with the previous assignments is chosen first, with successive choices made when the search is forced to backtrack. This complete search method is guaranteed to eventually terminate and produce correct results. The Cost Distribution For our experiments we used randomly generated graphs with 100 nodes and with a specified average connectivity γ. For these problems, there are two distinct regions with hard problems. The first, near γ = 4.5, has a fairly high density of hard instances. The second, at somewhat lower connectivities, has mostly easy problems but occasionally instances with extremely high search costs.
These extreme cases have such large cost that they dominate the mean cost in this region. These observations are summarized in Fig. 1. Fig. 1. Search cost as a function of average connectivity γ of the graphs with 100 nodes. The black curve shows the mean cost and the gray one is the median cost. Values are shown in increments of 0.1 for γ and are based on 50,000 samples at each point. The large variance in search cost for the lower connectivities produces the sharp peaks in the mean cost, an indication that many more samples are required to determine it accurately. For the behavior of parallel search we need to consider the distribution of search costs. For the cases in which a single search solves the problem rapidly there is not much to be gained from parallel methods. Thus we focus on the behavior of problems in the hard regions of intermediate connectivity. Fig. 2 gives examples of the types of cost distribution we encountered. Specifically, there were two distinct shapes. On one hand, we found multimodal distributions with a wide range of individual costs. In these cases, the search often completes rapidly but because of the occasional high-cost instance, the mean search cost is relatively large. On the other hand, we found more tightly clustered distributions in which all the searches required about the same, large cost. As described below, these distinct distribution shapes give rise to very different potential for parallel speedup through multiple searches. Fig. 2. The search cost distribution for two 100-node graphs with γ = 3.5. In each case, the graph was searched 10,000 times. The plots give the number of times each cost was found. The first case is a multimodal distribution with average cost T1 = 15122 and a large possible speedup: S(10) = 76, S_opt = 37 with τ* = 412. The second case is a fairly unimodal distribution (note the expanded cost scale on the horizontal axis). It has average cost T1 = 55302 and very limited speedup: S(10) = 1.1 and S_opt = 1 with no cutoff (i.e., τ* = ∞). These quantities are defined in Eqs. 1 and 3. Speedup with Multiple Agents When a heuristic search involves some random choices, repeated runs on the same problem can give different search costs. This is especially true when incorrect choices made early in the search are undone only after a large number of steps. In such cases, one could benefit from running multiple, independent versions of the search, stopping when the first one completes. In this section, we show how the expected speedup for this method is determined by the full cost distribution of the search method. Specifically, for a given problem instance, let p(i) be the probability that an individual search run terminates after exactly i search steps. We also define the cumulative distribution, i.e., the probability that the search requires
The second case is a fairly “rf: unimodal distribution (note the ex the orizontal axis). It has average cost f anded cost scale on 1 = 55302 and very limited speedup: S( 10) = 1.1 and Sopt = 1 with no cutoff (i.e., r* = 00). These quantities are defined in Eqs. 1 and 3. Speedup with When a heuristic search involves some random choices, re- peated runs on the same problem can give different search costs. This is especially true when incorrect choices made early in the search are undone only after a large number of steps. In such cases, one could benefit from running mul- tiple, independent versions of the search, stopping when the first one completes. In this section, we show how the expected speedup for this method is determined by the full cost distribution of the search method. Specifically, for a given problem instance, let p(i) be the probability that an individual search run terminates after exactly i search steps. We also define the cumulative distribution, i.e., the probability that the search requires 332 Constraint Satisfaction at least i steps, as q(i) = &iip(j). For a group of k independent searches the se&h cost for the group is defined as the cost for the first agent that terminates, i.e., the minimum of the individual finishing times. The probability that the group as a whole requires at least i steps, qk (i), is just the probability that all of the individual searches require at least i steps. Because the searches run independently, we have simply q&) = q(i)“. Thus the probability that the group finishes in exactly i steps is &( a) = q(i ) k - q(i + 1) l With these expressions we can now determine the aver- age speedup. Speci&ally, again for a given problem in- stance, let !& be the average cost for a group of k agents to solve the problem. Then we define the speedup as S(k) = 2 (1) This can be expressed in terms of the cost distribution by noting that Tk = c ipl, (i). 
Whilst this measure of speedup is a commonly used measure of parallel performance, we should note that for the extended distributions seen in our studies, it can differ from "typical" behavior when the mean search times are greatly influenced by rare, high-cost events. Even in these cases, however, this measure does give some insight into the nature of the distribution and allows for a simple comparison among different classes of graphs.

In our experimental studies, we estimate the cost distribution p(i) by repeating the individual search N times (for most of our experiments we used N = 100 to allow for testing many graphs, but saw similar behaviors in a few graphs with more samples, such as N = 10^4 for the examples of Fig. 2). Let n(i) be the number of searches that finished in exactly i steps. Then we used p(i) ≈ n(i)/N to obtain an estimate for the speedup for a given problem. By repeating this procedure over many different graphs, all having the same connectivity, we can obtain an estimate of the average speedup, <S(k)>, as a function of average connectivity, γ. Alternatively we can also ask how large k needs to be in order to solve a problem, having a particular connectivity, within a prespecified cost limit.

Experimental Results: Speedup

We searched a number of graphs at various connectivities and recorded the distribution of search times. From this we obtained estimates of the expected search cost for a single search and the speedup when several searches are run independently in parallel. To study the two regions of hard problems we selected graphs with γ = 3.5 and γ = 4.5. More limited experiments with γ = 3.0 were qualitatively similar to the γ = 3.5 case. The speedups for γ = 3.5 are shown in Fig. 3 as a function of single-agent search cost. Whilst we see many samples with fairly limited speedup, this class of graphs generally shows an increased speedup with single search cost. A similar story is seen in Fig.
4 for the average speedup among these samples as a function of number of parallel searches. This shows an increasing speedup, especially for the harder cases. Together with Fig. 1 we conclude that at this connectivity we have mostly easy cases, but many of the harder cases can often benefit increasingly from parallelization.

Fig. 3. Speedup S(10) for 10 agents vs. average single-agent cost T_1 for 100-node graphs at γ = 3.5. Each graph was searched 100 times to estimate the single-agent cost distribution. There were also a few samples with larger speedups than shown in the plot, as well as samples with substantially higher costs which continued the trend toward increasing speedups.

Fig. 4. Average speedup <S(k)> (black) and median speedup (gray) vs. number of agents k for 100-node graphs at γ = 3.5. The solid curves include all samples while the dashed ones include only those whose average single-agent cost was at least 1000.

The behavior is different for γ = 4.5 as shown in Fig. 5. In this case most samples exhibit limited speedup. In particular, there is no increase in speedup with single search cost. In Fig. 6 we see the limited benefit of additional parallel searches, both for all samples and those with high cost.

Speeding Up a Single Agent

Another way to use the cost distribution of an individual search method for a given problem is to devise a better

Techniques 333

Fig. 5. Speedup S(10) for 10 agents vs. average single-agent cost T_1 for 100-node graphs at γ = 4.5. Each graph was searched 100 times to estimate the single-agent cost distribution.

Fig. 6. Average speedup <S(k)> (black) and median speedup (gray) vs. number of agents k for 100-node graphs at γ = 4.5.
The solid curves include all samples while the dashed ones include only those whose average single-agent cost was at least 1000.

strategy for use by a single agent. For example, if the agent finds it is taking an inordinately long time in the search, it can abandon the search and start over. With a prudent choice of when to quit, this strategy can lead to improved performance.

Specifically, suppose the agent restarts its search after τ steps. Then the time to solve the problem will be given by T = τ(m − 1) + t, where m is the number of search repetitions required to find a case that finishes within the time bound and t is the time spent on this last search case. The expected overall search cost for this strategy when applied to a given problem is just l(τ) = τ(m̄ − 1) + t̄, with the overbars denoting averages over the individual search distribution for this problem. Let Q(i) = 1 − q(i + 1) be the probability the individual search completes in at most i steps. Then Q(τ) is the probability that a search trial will succeed before being terminated by the cost bound. Thus the average number of repetitions required for success is m̄ = 1/Q(τ). The expected cost within a repetition that does in fact succeed is given by t̄ = sum_{i <= τ} i p(i|τ), where p(i|τ) = p(i)/Q(τ) is the conditional probability the search succeeds after exactly i steps given it succeeds in at most τ steps. Finally, using the identity sum_{i <= τ} i p(i) = τ Q(τ) − sum_{i < τ} Q(i), we obtain the expression for l(τ), the expected overall search cost for the "restart after τ steps" strategy:

    l(τ) = (τ − sum_{i < τ} Q(i)) / Q(τ)                (2)

In particular, when the individual search is never aborted, corresponding to τ = ∞, we recover the average individual search cost, i.e., l(∞) = T_1. Among all possible choices of the cutoff time, even allowing it to vary from one repetition of the search to the next, the optimal strategy [Luby et al., 1993] is obtained by selecting a single termination time τ* which minimizes l(τ).
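The restart cost and the optimal cutoff can likewise be computed from a cost distribution. Below is a minimal sketch (hypothetical helper names; p[i-1] is assumed to hold the probability that a single run finishes in exactly i steps):

```python
def restart_cost(p, tau):
    """Expected overall cost of the 'restart after tau steps' strategy:
    l(tau) = (tau - sum_{i<tau} Q(i)) / Q(tau), where Q(i) is the
    probability that a single run finishes within i steps."""
    # cumulative distribution Q(0..tau)
    Q = [0.0] * (tau + 1)
    for i in range(1, tau + 1):
        Q[i] = Q[i - 1] + (p[i - 1] if i <= len(p) else 0.0)
    if Q[tau] == 0.0:
        return float('inf')  # no run ever finishes within the cutoff
    return (tau - sum(Q[:tau])) / Q[tau]

def optimal_cutoff(p, max_tau):
    """The cutoff tau* in 1..max_tau minimizing the expected cost."""
    return min(range(1, max_tau + 1), key=lambda t: restart_cost(p, t))

# Bimodal example: half the runs finish in 1 step, half in 100.
p = [0.5] + [0.0] * 98 + [0.5]
```

Here restarting after a single step gives expected cost 2, while never restarting costs T_1 = 50.5, so τ* = 1; for a tightly clustered distribution the best choice is effectively no cutoff, as in the second case of Fig. 2.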
The speedup of this search method over the original one is then given by

    S_opt = T_1 / l(τ*)                                 (3)

Finally we should note that this strategy of restarting searches if they don't finish within a prespecified time can be combined with independent parallel agents to give even greater potential speedups [Luby and Ertel, 1993].

Experimental Results: Optimal Strategy

Unfortunately, in practice one never knows the cost distribution for a new problem before starting the search. However, we can evaluate it for a range of graph coloring problems as an additional indication of the benefit of independent search for graphs of different connectivities. This can also give some indication of the appropriate termination time to use.

Fig. 7. Speedup S_opt for optimal single-agent search vs. average single-agent cost T_1 for 100-node graphs at γ = 3.5. Each graph was searched 100 times to estimate the single-agent cost distribution. We also found samples with substantially higher costs than shown in the plot, which continued the trend toward increasing speedups.

In Figs. 7 and 8 we show the speedup of this "optimal restart" single search strategy compared to the average "never restart" single search time. We see the same

Fig. 8. Speedup S_opt for optimal single-agent search vs. average single-agent cost T_1 for 100-node graphs at γ = 4.5. Each graph was searched 100 times to estimate the single-agent cost distribution.

qualitative behavior as with our previous results on independent parallel search: no significant speedup for many of the harder problems but with somewhat more speedup for the cases at the lower connectivity. As with the independent parallel search, an extended multimodal distribution allows this strategy to greatly outperform a single search, on average. Thus these results again demonstrate the existence of the two types of cost distributions.
Discussion

We have examined the benefit from independent parallel search for a particular constraint satisfaction problem, as a function of a parameter characterizing the structure of the problem. This extends previous work on the existence of hard and easy regions of problems to the question of what kinds of search methods are suitable for the hard instances. With our samples we saw that, for the most part, independent parallelization gives fairly limited improvement for the hard cases. This is consistent with previous observations that the hard problems persist for a range of common heuristic search algorithms, and the resulting conjecture that these problems are intrinsically hard because of their structure. Specifically, this may be due to their having a large number of partial solutions, few of which can be extended to full solutions [Cheeseman et al., 1991; Williams and Hogg, 1992]. These large partial solutions in turn make it difficult for heuristics, based on a local evaluation of the search space, to rapidly prune unproductive search paths. Our results for the graphs with limited speedup lend support to this conjecture.

There were also a number of cases with significant speedup, especially for the lower connectivity graphs. Since this is due to a very extended, multimodal distribution, an interpretation of these cases is that the overall cost is very sensitive to the early decisions made by the heuristic. While the correct choice leads to relatively rapid search, the structure of the problem does not readily provide an indication of incorrect choices. Thus in the latter case the search continues a long time before finally revising the early choices.

As a caveat, we should point out that our observations are based on a limited sampling of the individual search cost distribution (i.e., 100 trials per graph). Because of the extended nature of these distributions, additional samples may reveal some rare but extremely high cost search runs.
These runs could significantly increase the mean individual search time while having only a modest effect on the parallel speedups. In such cases, the estimate of the potential speedup based on our limited sampling would be too low. To partially address this issue we searched a few graphs with significantly more trials (up to 10^4 per graph), finding the same qualitative behaviors.

There are a number of future directions for this work. One important question is the extent to which our results apply to other constraint satisfaction problems which also exhibit this behavior of easy and hard regions, as well as to other types of search methods such as genetic algorithms [Goldberg, 1989] or simulated annealing [Johnson et al., 1991]. For instance, when applied to optimization problems, one would need to consider the quality of solutions obtained as well as the search time required. Another issue concerns how the behaviors reported here may change as larger problems are considered.

Curiously, independent studies of other NP-hard problems, using very different search algorithms, have discovered qualitatively similar kinds of cost distributions [Ertel, 1992]. It is possible, therefore, that such distributions are quite generic across many different problems and algorithms. If so, our observation that the potential gains from independent parallel search vary with some parameter characterizing the problem structure might be useful in designing an overall better parallel search algorithm. Specifically, as hard problems in the "hard" region do not appear to benefit much from independent parallelization, we might instead prefer to attempt to solve problems in this region by some more sophisticated parallel search method. We have experimented with one such method, which we call "cooperative problem solving", in which the different computational agents exchange and reuse information found during the search, rather than executing independently.
If the search methods are sufficiently diverse but nevertheless occasionally able to utilize information found in other parts of the search space, greater performance improvements are possible [Hogg and Williams, 1993] including, in some cases, the possibility of superlinear speedups [Clearwater et al., 1991].

References

Cheeseman, P., Kanefsky, B., and Taylor, W. M. (1991). Where the really hard problems are. In Mylopoulos, J. and Reiter, R., editors, Proceedings of IJCAI-91, pages 331-337, San Mateo, CA. Morgan Kaufmann.

Clearwater, S. H., Huberman, B. A., and Hogg, T. (1991). Cooperative solution of constraint satisfaction problems. Science, 254:1181-1183.

Kornfeld, W. A. (1981). The use of parallelism to implement a heuristic search. In Proc. of IJCAI-81, pages 575-580.

Crawford, J. M. and Auton, L. D. (1993). Experimental results on the cross-over point in satisfiability problems. In Proc. of the Eleventh Natl. Conf. on AI (AAAI-93), pages 21-27, Menlo Park, CA. AAAI Press.

Luby, M. and Ertel, W. (1993). Optimal parallelization of Las Vegas algorithms. Technical report, Intl. Comp. Sci. Inst., Berkeley, CA.

Ertel, W. (1992). Random competition: A simple, but efficient method for parallelizing inference systems. In Fronhofer, B. and Wrightson, G., editors, Parallelization in Inference Systems, pages 195-209. Springer, Dagstuhl, Germany.

Luby, M., Sinclair, A., and Zuckerman, D. (1993). Optimal speedup of Las Vegas algorithms. Technical Report TR-93-010, Intl. Comp. Sci. Inst., Berkeley, CA.

Fishburn, J. P. (1984). Analysis of Speedup in Distributed Algorithms. UMI Research Press, Ann Arbor, Michigan.

Gent, I. P. and Walsh, T. (1993). An empirical analysis of search in GSAT. J. of AI Research, 1:47-59.

Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, NY.

Mehrotra, R.
and Gehringer, E. F. (1985). Superlinear speedup through randomized algorithms. In Degroot, D., editor, Proc. of 1985 Intl. Conf. on Parallel Processing, pages 291-300, Washington, DC. IEEE.

Minton, S., Johnston, M. D., Philips, A. B., and Laird, P. (1990). Solving large-scale constraint satisfaction and scheduling problems using a heuristic repair method. In Proceedings of AAAI-90, pages 17-24, Menlo Park, CA. AAAI Press.

Helmbold, D. P. and McDowell, C. E. (1989). Modeling speedup greater than n. In Ris, F. and Kogge, P. M., editors, Proc. of 1989 Intl. Conf. on Parallel Processing, volume 3, pages 219-225, University Park, PA. Penn State Press.

Hogg, T. and Williams, C. P. (1993). Solving the really hard problems with cooperative search. In Proc. of the Eleventh Natl. Conf. on AI (AAAI-93), pages 231-236, Menlo Park, CA. AAAI Press.

Mitchell, D., Selman, B., and Levesque, H. (1992). Hard and easy distributions of SAT problems. In Proc. of 10th Natl. Conf. on Artificial Intelligence (AAAI-92), pages 459-465, Menlo Park. AAAI Press.

Pramanick, I. and Kuhl, J. G. (1991). Study of an inherently parallel heuristic technique. In Proc. of 1991 Intl. Conf. on Parallel Processing, volume 3, pages 95-99.

Rao, V. N. and Kumar, V. (1992). On the efficiency of parallel backtracking. IEEE Trans. on Parallel and Distributed Computing.

Imai, M., Yoshida, Y., and Fukumura, T. (1979). A parallel searching scheme for multiprocessor systems and its application to combinatorial problems. In Proc. of IJCAI-79, pages 416-418.

Selman, B., Levesque, H., and Mitchell, D. (1992). A new method for solving hard satisfiability problems. In Proc. of 10th Natl. Conf. on Artificial Intelligence (AAAI-92), pages 440-446, Menlo Park, CA. AAAI Press.

Janakiram, V. K., Agrawal, D. P., and Mehrotra, R. (1987). Randomized parallel algorithms for prolog programs and backtracking applications. In Sahni, S. K., editor, Proc. of 1987 Intl. Conf. on Parallel Processing, pages 278-281, University Park, PA.
Penn State Univ. Press.

Johnson, D. S., Aragon, C. R., McGeoch, L. A., and Schevon, C. (1991). Optimization by simulated annealing: an experimental evaluation; part II, graph coloring and number partitioning. Operations Research, 39(3):378-406.

Williams, C. P. and Hogg, T. (1992). Using deep structure to locate hard problems. In Proc. of 10th Natl. Conf. on Artificial Intelligence (AAAI-92), pages 472-477, Menlo Park, CA. AAAI Press.

Williams, C. P. and Hogg, T. (1993). Extending deep structure. In Proc. of the Eleventh Natl. Conf. on AI (AAAI-93), pages 152-157, Menlo Park, CA. AAAI Press.
Noise Strategies for Improving Local Search

Bart Selman, Henry A. Kautz, and Bram Cohen
AT&T Bell Laboratories
Murray Hill, NJ 07974
{selman, kautz, cohen}@research.att.com

Abstract

It has recently been shown that local search is surprisingly good at finding satisfying assignments for certain computationally hard classes of CNF formulas. The performance of basic local search methods can be further enhanced by introducing mechanisms for escaping from local minima in the search space. We will compare three such mechanisms: simulated annealing, random noise, and a strategy called "mixed random walk". We show that mixed random walk is the superior strategy. We also present results demonstrating the effectiveness of local search with walk for solving circuit synthesis and circuit diagnosis problems. Finally, we demonstrate that mixed random walk improves upon the best known methods for solving MAX-SAT problems.

Introduction

Local search algorithms have been successfully applied to many optimization problems. Hansen and Jaumard (1990) describe experiments using local search for MAX-SAT, i.e., the problem of finding an assignment that satisfies as many clauses as possible of a given CNF formula. In general, such local search algorithms find good but non-optimal solutions, and thus such algorithms were believed not to be suitable for satisfiability testing, where the objective is to find an assignment that satisfies all clauses (if such an assignment exists). Recently, however, local search has been shown to be surprisingly good at finding completely satisfying assignments for CNF problems (Selman et al. 1992; Gu 1992). Such methods outperform the best known systematic search algorithms on certain classes of large satisfiability problems.
For example, GSAT, a randomized local search algorithm, can solve 2,000-variable computationally hard randomly-generated 3CNF (conjunctive normal form) formulas, whereas the current fastest systematic search algorithms cannot handle instances from the same distribution with more than 400 variables (Buro and Kleine-Büning 1992; Dubois et al. 1993).

The basic GSAT algorithm performs a local search of the space of truth-assignments by starting with a randomly-generated assignment, and then repeatedly changing ("flipping") the assignment of a variable that leads to the largest decrease in the total number of unsatisfied clauses. As with any combinatorial problem, local minima in the search space are problematic in the application of local search methods. A local minimum is defined as a state whose local neighborhood does not include a state that is strictly better. The standard approach in combinatorial optimization of terminating the search when a local minimum is reached (Papadimitriou and Steiglitz 1982) does not work well for Boolean satisfiability testing, since only global optima are of interest. In Selman et al. (1992) it is shown that simply continuing to search by making non-improving, "sideways" moves, dramatically increases the success rate of the algorithm.1 We call the set of states explored in a sequence of sideways moves a "plateau" in the search space. The search along plateaus often dominates GSAT's search. For a detailed analysis, see Gent and Walsh (1992).

The success of GSAT is determined by its ability to move between successively lower plateaus. The search fails if GSAT can find no way off of a plateau, either because such transitions from the plateau are rare or nonexistent. When this occurs, one can simply restart the search at a new random initial assignment. There are other mechanisms for escaping from local minima or plateaus, which are based on occasionally making uphill moves.
Prominent among such approaches has been the use of simulated annealing (Kirkpatrick et al. 1983), where a formal parameter (the "temperature") controls the probability that the local search algorithm makes an uphill move. In Selman and Kautz (1993), we proposed another mechanism for introducing such uphill moves. The strategy is based on mixing a random walk over variables that appear in unsatisfied clauses with the greedy local search. The strategy can be viewed as a way of introducing noise in a very focused manner, namely, perturbing only those variables critical to the remaining unsatisfied clauses.

1Minton et al. (1990) encountered a similar phenomenon in their successful application of local search in solving large scheduling problems.

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

We will present detailed experimental data comparing the random walk strategy, simulated annealing, random noise, and the basic GSAT procedure on computationally difficult random formulas. In doing this comparison, we tuned the parameter settings of each procedure to obtain their best performance. We will see that the random walk strategy significantly outperforms the other approaches, and that all the escape strategies are an improvement over basic GSAT.

One might speculate that the good performance of the random walk strategy is a consequence of our choice of test instances. We therefore also ran experiments using several other classes of problem instances, developed by transforming other combinatorial problems into satisfiability instances. In particular, we considered problems from planning (Kautz and Selman 1992) and circuit synthesis (Kamath et al. 1991; 1993). These experiments again demonstrate that mixed random walk is the superior escape mechanism.
In addition, we show that GSAT with walk is faster than systematic search on certain circuit synthesis problems (such as adders and comparators) that contain no random component. We then present data on experiments with a modified version of the random walk strategy that further improves performance over GSAT with walk. Finally, we demonstrate that mixed random walk also improves upon the best known methods for solving MAX-SAT problems.

Local Search for Satisfiability Testing

GSAT (Selman et al. 1992) performs a greedy local search for a satisfying assignment of a set of propositional clauses.2 The procedure starts with a randomly generated truth assignment. It then changes ("flips") the assignment of the variable that leads to the greatest decrease in the total number of unsatisfied clauses. Note that the greatest decrease may be zero (sideways move) or negative (uphill move). Flips are repeated until either a satisfying assignment is found or a pre-set maximum number of flips (MAX-FLIPS) is reached. This process is repeated as needed up to a maximum of MAX-TRIES times.

In Selman et al. (1992), it was shown that GSAT substantially outperforms backtracking search procedures, such as the Davis-Putnam procedure, on various classes of formulas, including hard randomly generated formulas and SAT encodings of graph coloring problems (Johnson et al. 1991).

As noted above, local minima in the search space of a combinatorial problem are the primary obstacle to the application of local search methods. GSAT's use of sideways moves does not completely eliminate this problem, because the algorithm can still become stuck on a plateau (a set of neighboring states each with an equal number of unsatisfied clauses).

2A clause is a disjunction of literals. A literal is a propositional variable or its negation. A set of clauses corresponds to a CNF formula: a conjunction of disjunctions.
Therefore, it is useful to employ mechanisms that escape from local minima or plateaus by making uphill moves (flips that increase the number of unsatisfied clauses). We will now discuss two mechanisms for making such moves.3

Simulated Annealing

Simulated annealing introduces uphill moves into local search by using a noise model based on statistical mechanics (Kirkpatrick et al. 1983). We employ the annealing algorithm defined in Johnson et al. (1991): Start with a randomly generated truth assignment. Repeatedly pick a random variable, and compute δ, the change in the number of unsatisfied clauses when that variable is flipped. If δ ≤ 0 (a downhill or sideways move), make the flip. Otherwise, flip the variable with probability e^(−δ/T), where T is a formal parameter called the temperature. The temperature may be either held constant,4 or slowly decreased from a high temperature to near zero according to a cooling schedule. One often uses geometric schedules, in which the temperature is repeatedly reduced by multiplying it by a constant factor (< 1).

Given a finite cooling schedule, simulated annealing is not guaranteed to find a global optimum, that is, an assignment that satisfies all clauses. Therefore in our experiments we use multiple random starts, and compute the average number of restarts needed before finding a solution. We call this number R. The basic GSAT algorithm is very similar to annealing at temperature zero, but differs in that GSAT naturally employs restarts and always makes a downhill move if one is available. The value of R for GSAT is simply the average number of tries required to find a solution.

The Random Walk Strategy

In Selman and Kautz (1993), we introduced several extensions to the basic GSAT procedure. One of those extensions mixes a random walk strategy with the greedy local search.
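The annealing move rule described above can be sketched as a single step; `var_delta` is a hypothetical caller-supplied helper (not part of the paper) returning the change δ in the number of unsatisfied clauses for flipping a given variable:

```python
import math
import random

def annealing_flip(assign, var_delta, T, rng=random):
    """One move of constant-temperature annealing: pick a random
    variable; flip it if the change delta in unsatisfied clauses is
    <= 0, otherwise flip with probability e^(-delta/T)."""
    var = rng.choice(list(assign))
    delta = var_delta(var)
    if delta <= 0 or rng.random() < math.exp(-delta / T):
        assign[var] = not assign[var]  # downhill/sideways, or accepted uphill
        return True
    return False  # uphill move rejected at this temperature
```

At high T almost every uphill move is accepted, while at T near zero the rule degenerates to the greedy-or-sideways behavior described in the text.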
More precisely, they propose the following mixed random walk strategy: With probability p, pick a variable occurring in some unsatisfied clause and flip its truth assignment. With probability 1 − p, follow the standard GSAT scheme, i.e., make the best possible local move. Note that the "walk" moves can be uphill.

3If the only possible move for GSAT is uphill, it will make such a move, but such "forced" uphill moves are quite rare, and are not effective in escaping from local minima or plateaus.
4This form of annealing corresponds to the Metropolis algorithm (Jerrum 1992). See Pinkas and Dechter (1992) for an interesting modification of the basic annealing scheme.

A natural and simpler variation of the random walk strategy is not to restrict the choice of a randomly flipped variable to the set of variables that appear in unsatisfied clauses. We will refer to this modification as the random noise strategy. Note that random walk differs from both simulated annealing and random noise, in that in random walk upward moves are closely linked to unsatisfied clauses. The experiments discussed below will show that the random walk strategy is generally significantly better.

Experimental Results

We compared the basic GSAT algorithm, simulated annealing, random walk, and random noise strategies on a test suite including both randomly-generated CNF problems and Boolean encodings of other combinatorial problems. The results are given in Tables 1, 2, and 3. For each strategy we give the average time in seconds it took to find a satisfying assignment,5 the average number of flips it required, and R, the average number of restarts needed before finding a solution. For each strategy we used at least 100 random restarts (the MAX-TRIES setting in GSAT) on each problem instance; if we needed more than 20 restarts before finding a solution, the strategy was restarted up to 1,000 times.
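A toy sketch of GSAT with mixed random walk (a simplified illustration, not the authors' C implementation; the clause representation as lists of signed integers is an assumption of this sketch):

```python
import random

def gsat_with_walk(clauses, n_vars, p=0.5, max_flips=10000, rng=random):
    """With probability p flip a variable from a random unsatisfied
    clause (walk move); otherwise make the greedy flip minimizing the
    number of unsatisfied clauses.  A literal v > 0 means variable v
    appears positively; v < 0 means it is negated.  Returns a
    satisfying assignment, or None if max_flips is exhausted."""
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    n_unsat = lambda: sum(not any(sat(l) for l in c) for c in clauses)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                             # all clauses satisfied
        if rng.random() < p:
            var = abs(rng.choice(rng.choice(unsat)))  # walk move
        else:
            # greedy move: try each variable, keep the best flip
            best, var = None, None
            for v in range(1, n_vars + 1):
                assign[v] = not assign[v]
                cost = n_unsat()
                assign[v] = not assign[v]
                if best is None or cost < best:
                    best, var = cost, v
        assign[var] = not assign[var]
    return None
```

Setting p = 0 recovers a basic GSAT try; the random noise variation would pick the walk variable from all variables rather than only those in unsatisfied clauses.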
A "*" in the tables indicates that no solution was found after running for more than 10 hours or using more than 1,000 restarts. The parameters of each method were varied over a range of values, and only the results of the best settings are included in the table. For basic GSAT, we varied MAX-FLIPS and MAX-TRIES; for GSAT with random walk, we also varied the probability p with which a non-greedy move is made, and similarly for GSAT with random noise. In all of our experiments, the optimal value of p was found to be between 0.5 and 0.6. For constant temperature simulated annealing, we varied the temperature T from 5 to 0 in steps of 0.05. (At T = 5, uphill moves are accepted with probability greater than 0.8.) For the random formulas, the best performance was found at T = 0.2. The planning formulas required a higher temperature, T = 0.5, while the Boolean circuit synthesis problems were solved most quickly at a low temperature, T = 0.15.

We also experimented with various geometric cooling schedules. Surprisingly, we did not find any geometric schedule that was better than the best constant-temperature schedule. We could not even significantly improve the average number of restarts needed before finding a solution by extremely slow cooling schedules, regardless of the effect on execution time. A possible explanation for this is that almost all the work in solving CNF problems lies in satisfying the last few unsatisfied clauses. This corresponds to the low-temperature tail of a geometric schedule, where the temperature has little variation.

Hard Random Formulas

Random instances of CNF formulas are often used in evaluating satisfiability procedures because they can be easily generated and lack any underlying "hidden" structure often present in hand-crafted instances. Unfortunately, unless great care is taken in specifying the parameters of the random distribution, the problems so created can be trivial to solve. Mitchell et al.
(1992) show how computationally difficult random problems can be generated using the fixed clause-length model. Let N be the number of variables, K the number of literals per clause, and L the number of clauses. Each instance is obtained by generating L random clauses each containing K literals. The K literals are generated by randomly selecting K variables, and each of the variables is negated with a 50% probability. The difficulty of such formulas critically depends on the ratio between N and L. The hardest formulas lie around the region where there is a 50% chance of the randomly generated formula being satisfiable. For 3CNF formulas (K = 3), experiments show that this is the case for L ≈ 4.3N. (For larger N the critical ratio for the 50% point converges to 4.25.) We tested the algorithms on formulas around the 4.3 point ranging in size from 100 to 2000 variables.

Table 1 presents our results. For the smallest (100-variable) formula, we observe little difference in the running times. As the number of variables increases, however, the random walk strategy significantly dominates the other approaches. Both random noise and simulated annealing also improve upon basic GSAT, but neither of these methods found solutions for the largest three formulas.6 The performance of GSAT with walk is quite impressive, especially considering the fact that the fastest current systematic search methods cannot solve hard random 3CNF instances with over 400 variables (Dubois et al. 1993).

The columns marked with "flips" give the average number of flips required to find an assignment. (A "flip" in our simulated annealing algorithm is an actual change in the truth assignment. We do not count flips that were considered but not made.) When comparing the number of flips required by the various strategies, we arrive at the same conclusion about the relative efficiencies of the methods.
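The fixed clause-length model described above is straightforward to implement; a minimal sketch (assuming, as is standard for this model, that the K variables in a clause are distinct):

```python
import random

def random_kcnf(n_vars, n_clauses, k=3, rng=random):
    """Generate a random k-CNF formula: each clause selects k distinct
    variables and negates each one with probability 1/2."""
    formula = []
    for _ in range(n_clauses):
        chosen = rng.sample(range(1, n_vars + 1), k)
        formula.append([v if rng.random() < 0.5 else -v for v in chosen])
    return formula

# The hard region for 3CNF lies near L = 4.3 N, e.g. N = 100, L = 430.
f = random_kcnf(100, 430, rng=random.Random(0))
```

Instances drawn away from the 4.3 ratio are typically much easier: under-constrained formulas have many solutions, and over-constrained ones are quickly refuted.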
This shows that our observations based on the running times are not simply a consequence of differences in the relative efficiencies of our implementations.

Finally, let us consider R, the average number of restarts needed before finding a solution. Basic GSAT easily gets stuck on plateaus, and requires many random restarts, in particular for larger formulas. On the other hand, GSAT with walk is practically guaranteed to find a satisfying assignment. Apparently, mixing random walk over variables in the unsatisfied clauses with greedy moves allows one to escape almost always from plateaus that have few or no states from which a downhill move can be made. The other two strategies also give an improved value of R over basic GSAT but the effect is less dramatic.

5The algorithms were implemented in C and ran on an SGI Challenge with a 70 MHz MIPS R4400 processor. For code and experimental data, contact the first author.
6GSAT with walk finds approximately 50% of the formulas in the hard region to be satisfiable, as would be expected at the transition point for SAT.

formula           GSAT basic              walk                 noise                 Simul. Ann.
vars   clauses    time  flips      R      time  flips      R   time  flips      R    time  flips      R
100    430        .4    7554       8.3    .2    2385      1.0  .6    9975       4.0  .6    4748       1.1
200    860        22    284693     143    4     27654     1.0  47    396534     6.7  21    106643     1.2
400    1700       122   2.6x10^6   67     7     59744     1.1  95    892048     6.3  75    552433     1.1
600    2550       1471  30x10^6    500    35    241651    1.0  929   7.8x10^6   20   427   2.7x10^6   3.3
800    3400       *     *          *      286   1.8x10^6  1.1  *     *          *    *     *          *
1000   4250       *     *          *      1095  5.8x10^6  1.2  *     *          *    *     *          *
2000   8480       *     *          *      3255  23x10^6   1.1  *     *          *    *     *          *

Table 1: Comparing noise strategies on hard random 3CNF instances.

Planning Problems

As a second example of the effectiveness of the various escape strategies, we consider encodings of blocks-world planning problems (Kautz and Selman 1992). Such formulas are very challenging for basic GSAT.
Examination of the best assignments found when GSAT fails to find a satisfying assignment indicates that difficulties arise from extremely deep local minima. For example, the planning problem labeled "Hanoi" corresponds to the familiar "towers of Hanoi" puzzle, in which one moves a stack of disks between three pegs while never placing a larger disk on top of a smaller disk. There are many truth assignments that satisfy nearly all of the clauses that encode this problem, but that are very different from the correct satisfying assignment; for example, such a near-assignment may correspond to slipping a disk out from the bottom of the stack. As seen in Table 2, GSAT with random walk is far superior. As before, basic GSAT fails to solve the largest problems. GSAT with walk is about 100 times faster than simulated annealing on the two largest problems, and over 200 times faster than random noise. The random noise and annealing strategies on the large problems also require many more restarts than the random walk strategy before finding a solution.

Circuit Synthesis
Kamath et al. (1991) developed a set of SAT encodings of Boolean circuit synthesis problems in order to test a satisfiability procedure based on integer programming. The task under consideration was to derive a logical circuit from its input-output behavior. Selman et al. (1992) showed that basic GSAT was competitive with their integer-programming method. In Table 3, we give our experimental results on five of the hardest instances considered by Kamath et al. As is clear from the table, both the random walk and the simulated annealing strategies significantly improve upon GSAT, with random walk being somewhat better than simulated annealing. For comparison, we also included the original timings reported by Kamath et al.7 In this case, the random noise strategy does not lead to an improvement over basic GSAT.

340 Constraint Satisfaction
In fact, mixing in random noise appears to degrade GSAT's performance. Note that the basic GSAT procedure already performs quite well on these formulas, which suggests that they are relatively easy compared to our other benchmark problems. The instances from Kamath et al. (1991) were derived from randomly wired Boolean circuits. So, although the SAT encodings contain some intricate structure from the underlying Boolean gates, there is still a random aspect to the problem instances. Recently, Kamath et al. (1993) have generalized their approach to allow for circuits with multiple outputs. Using this formulation, we can encode Boolean circuits that are useful in practical applications. Some examples are adder and comparator circuits. We encoded the I/O behavior of several such circuits, and used GSAT with walk to solve them. Table 4 shows our results. ("GSAT+w" denotes GSAT with walk. We used p = 0.5. We will discuss the "WSAT" column below.) The type of circuit is indicated in the table. For example, every satisfying assignment for the formula 2bitadd-11 corresponds to a design for a 2-bit adder using a PLA (Programmable Logic Array). The suffix "11" indicates that the circuit is constrained to use only 11 AND-gates. We see from the table that GSAT with walk can solve the instances in times that range from less than a second to a few minutes. We also included the timings for the Davis-Putnam (DP) procedure. We used a variant of this procedure developed by Crawford and Auton (1993). This procedure is currently one of the fastest complete methods, but it is quite surprising to see that it only solves two of the instances.8 (A "*" indicates that the method ran for 10 hrs without finding an assignment.) The good performance of GSAT with walk on these problems indicates that local search methods can perform well on structured problems that do not contain any random component.

7Kamath et al.'s satisfiability procedure ran on a VAX 8700 with code written in FORTRAN and C.

Table 2: Comparing noise strategies on SAT encodings of planning problems.
  id     vars  clauses |  GSAT basic       |  GSAT w/ walk     |  GSAT w/ noise       |  Simul. Ann.
                       |  time   flips  R  |  time   flips  R  |  time    flips    R  |  time    flips    R
  med.    273    2311  |  7.5   70652 125  |  0.4    3464 1.0  |   4.5    41325  1.1  |   4.5    12147  1.0
  rev.    201    1382  |  3.7   41693 100  |  0.3    3026 1.0  |   2.7    29007  1.1  |   2.7     9758  1.0
  hanoi   417    2559  |    *       *   *  |   39  334096 2.6  | 20017  16x10^6  100  |  3250  4.1x10^6  25
  huge    937   14519  |    *       *   *  |   38  143956 1.1  |  9648  37x10^6  200  |  8302  4.4x10^6  13

Table 3: Comparing noise strategies on the circuit synthesis problem instances as studied in Kamath et al. (1991).
  id     vars  Int.P. |  GSAT basic        |  GSAT w/ walk    |  GSAT w/ noise      |  Simul. Ann.
               time   |  time    flips  R  |  time  flips  R  |  time    flips   R  |  time  flips  R
  f16a1  1650   2039  |   58   709895   5  |    2    3371 1.1 |   375  1025454 6.7  |   12   98105 1.3
  f16b1  1728     78  |  269  2870019 167  |   12   25529 1.0 |  1335  2872226 167  |   11   96612 1.4
  f16c1  1580    758  |    2    12178 1.0  |    1    1545 1.0 |     5    14614 1.0  |    5   21222 1.0
  f16d1  1230   1547  |   87   872219 7.1  |    3    5582 1.0 |   185   387491 1.0  |    4   25027 1.0
  f16e1  1245   2156  |    1     2090 1.0  |    1    1468 1.0 |     1     3130 1.0  |    3    5867 1.0

Circuit Diagnosis
Larrabee (1992) proposed a translation of the problem of test pattern generation for VLSI circuits into a SAT problem. We compared the performance of GSAT with walk and that of DP on several of Larrabee's formulas. Our results are in Table 5.9 We see that GSAT with walk again works very well, especially compared to DP's systematic search. These results and the ones for circuit synthesis are of particular interest because they involve encodings of problems with clear practical applications, and are not just useful as benchmark problems for testing satisfiability procedures.

Modifying the Random Walk Strategy
We have recently begun to experiment with a new algorithm that implements GSAT's random walk strategy with subtle but significant modifications.
This new algorithm, called WSAT (for "walk sat"), makes flips by first randomly picking a clause that is not satisfied by the current assignment, and then picking (either at random or according to a greedy heuristic) a variable within that clause to flip. Thus, while GSAT with walk can be viewed as adding "walk" to a greedy algorithm, WSAT can be viewed as adding greediness as a heuristic to random walk. The "WSAT" columns in Tables 4 and 5 show that WSAT can give a substantial speedup over GSAT with walk. Whether or not WSAT outperforms GSAT with walk appears to depend on the particular problem class. We are currently studying this further. One unexpected and interesting observation we have already made is that there can be a great variance between running GSAT with 100% walk (i.e., p = 1.0) and running WSAT where variables are picked within an unsatisfied clause at random. At first glance, these options would appear to be identical. However, there is a subtle difference in the probability that a given variable is picked to be flipped. GSAT maintains a list (without duplicates) of the variables that appear in unsatisfied clauses, and picks at random from that list; thus, every variable that appears in an unsatisfied clause is chosen with equal probability. WSAT employs the two-step random process described above (first picking a clause, and then picking a variable), which favors variables that appear in many unsatisfied clauses. For many classes of formulas, the difference does not appear to be significant.

8Preliminary experiments indicate that some of these formulas can also be solved by combining DP with multiple starts that randomly permute variables. Details will appear in the full version. We thank Jimi Crawford for discussions on this issue.
9The table contains some typical satisfiable instances from a collection made available by Allen Van Gelder and Yumi Tsuji at the University of California at Irvine.
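The two-step selection that distinguishes WSAT from GSAT-with-walk can be sketched as follows. This is our illustration, not the authors' code; in particular, breaking greedy ties by the count of newly unsatisfied clauses is one plausible reading of "a greedy heuristic":

```python
import random

def wsat_pick(formula, assignment, p=0.5, rng=random):
    """One WSAT flip choice: pick a random unsatisfied clause, then a
    variable in it, at random with probability p, else greedily (the
    flip leaving the fewest unsatisfied clauses).  Literals are signed
    ints; assignment maps variable -> bool.  Returns None if the
    assignment already satisfies the formula."""
    def sat(clause):
        return any((lit > 0) == assignment[abs(lit)] for lit in clause)

    unsat = [c for c in formula if not sat(c)]
    if not unsat:
        return None
    clause = rng.choice(unsat)          # step 1: a random unsatisfied clause
    if rng.random() < p:
        return abs(rng.choice(clause))  # step 2a: random variable in it
    def after_flip(v):                  # step 2b: greedy variable in it
        assignment[v] = not assignment[v]
        n = sum(1 for c in formula if not sat(c))
        assignment[v] = not assignment[v]
        return n
    return min((abs(lit) for lit in clause), key=after_flip)
```

Because step 1 picks a clause first, a variable occurring in many unsatisfied clauses is more likely to be chosen than under GSAT's uniform pick over the variables of unsatisfied clauses, which is exactly the subtle difference discussed above.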
However, GSAT with 100% walk does not solve the circuit diagnosis problems, whereas WSAT with random picking can solve all of them.

Maximum Satisfiability
Finally, we compare the performance of GSAT with walk to the methods studied by Hansen and Jaumard (1990) for MAX-SAT. Our results appear in Table 6. Hansen and Jaumard compared five different algorithms for MAX-SAT. They considered a basic local search algorithm called "zloc", two deterministic algorithms proposed by David Johnson (1974) called "zjohn1" and "zjohn2", a simulated annealing approach called "anneal", and their own "steepest ascent, mildest descent" algorithm, "zsamd". The last one is similar to basic GSAT with a form of a tabu list (Glover 1989). They showed that "zsamd" consistently outperformed the other approaches. We repeated their main experiments using GSAT with walk. The problem instances are randomly generated 3SAT instances. For each problem size, 50 problems were randomly generated, and each problem was solved 100 times using different random initial assignments. The mean values of the best, and the mean, number of unsatisfied clauses found during the 100 tries are noted in the table. For example, on the 500-variable, 5000-clause 3SAT instances, the best assignments GSAT with walk found contained an average of 161.2 unsatisfied clauses.

Table 4: Comparing an efficient complete method (DP) with local search strategies on circuit synthesis problems. (Timings in seconds.)
  id            vars  clauses |     DP  GSAT+w   WSAT
  2bitadd-12     708    1702  |      *   0.081  0.013
  2bitadd-11     649    1562  |      *   0.058  0.014
  3bitadd-32    8704   32316  |      *    94.1    1.0
  3bitadd-31    8432   31310  |      *   456.6    0.7
  2bitcomp-12    300     730  |  23096   0.009  0.002
  2bitcomp-5     125     310  |    1.4   0.009  0.001

Table 5: Comparing DP with local search strategies on circuit diagnosis problems by Larrabee (1989). (Timings in seconds; only the instance names and sizes are legible in this scan: ssa7552-038, 1501 vars; ssa7552-158, 1363 vars; ssa7552-159, 1363 vars; ssa7552-160, 1391 vars.)
As we can see from the table, GSAT with walk consistently found better quality solutions than any other method. Note that there is only a small difference between the best and mean values found by GSAT with walk, which may indicate that the best values are in fact close to optimal.

Conclusions
We compared several mechanisms for escaping from local minima in satisfiability problems: simulated annealing, random noise, and mixed random walk. The walk strategy introduces perturbations in the current state that are directly relevant to the unsatisfied constraints of the problem. Our experiments show that this strategy significantly outperforms simulated annealing and random noise on several classes of hard satisfiability problems. Both of the latter strategies can make perturbations that are in a sense less focused, in that they may involve variables that do not appear in any unsatisfied clauses. The relative improvement found by using random walk over the other methods increases with increasing problem size. We also showed GSAT with walk to be remarkably efficient in solving basic circuit synthesis problems. This result is especially interesting because the synthesis problems do not have any random component, and are very hard for systematic methods. Finally, we demonstrated that GSAT with walk also improves upon the best MAX-SAT algorithms. Given the effectiveness of the mixed random walk strategy on Boolean satisfiability problems, an interesting direction for future research would be to explore similar strategies on general constraint satisfaction problems.

References
Buro, M. and Kleine Büning, H. (1992). Report on a SAT competition. Technical Report #110, Dept. of Mathematics and Informatics, University of Paderborn, Germany.
Crawford, J.M. and Auton, L.D. (1993). Experimental Results on the Cross-over Point in Satisfiability Problems. Proc. AAAI-93, Washington, DC, 21-27.
Davis, M. and Putnam, H. (1960).
A computing procedure for quantification theory. J. Assoc. Comput. Mach., 7, 201-215.
Dubois, O., Andre, P., Boufkhad, Y., and Carlier, J. (1993). SAT versus UNSAT. DIMACS Workshop on Satisfiability Testing, New Brunswick, NJ, Oct. 1993.
Glover, F. (1989). Tabu search - Part I. ORSA Journal of Computing, 1, 190-206.
Gent, I.P. and Walsh, T. (1992). The enigma of SAT hill-climbing procedures. Techn. report 605, Department of Computer Science, University of Edinburgh. Revised version appeared in the Journal of Artificial Intelligence Research, Vol. 1, 1993.
Gu, J. (1992). Efficient local search for very large-scale satisfiability problems. Sigart Bulletin, Vol. 3, no. 1, 8-12.
Hansen, P. and Jaumard, B. (1990). Algorithms for the maximum satisfiability problem. Computing, 44, 279-303.
Jerrum, M. (1992). Large Cliques Elude the Metropolis Process. Random Structures and Algorithms, Vol. 3, no. 4, 347-359.
Johnson, D.S. (1974). Optimization algorithms for combinatorial problems. J. of Comp. and Sys. Sci., 9:256-279.
Johnson, D.S., Aragon, C.R., McGeoch, L.A., and Schevon, C. (1991). Optimization by simulated annealing: an experimental evaluation; part II, graph coloring and number partitioning. Operations Research, 39(3):378-406.
Kamath, A.P., Karmarkar, N.K., Ramakrishnan, K.G., and Resende, M.G.C. (1991). A continuous approach to inductive inference. Mathematical Programming, 57, 215-238.
Kamath, A.P., Karmarkar, N.K., Ramakrishnan, K.G., and Resende, M.G.C. (1993). An Interior Point Approach to Boolean Vector Function Synthesis. Technical Report, AT&T Bell Laboratories, Nov. 1993.
Kautz, H.A. and Selman, B. (1992). Planning as satisfiability. Proceedings ECAI-92, Vienna, Austria.
Kirkpatrick, S., Gelatt, C.D., and Vecchi, M.P. (1983).

Table 6: Experimental results for MAX-3SAT. The data for the first five methods are from Hansen and Jaumard (1990). (Only the GSAT-with-walk column is legible in this scan: best 0, 2.8, 12.9, 0, 0, 7.6, 31.8, 161.2; mean 0, 2.9, 12.9, 0, 0, 8.1, 34.9, 163.6.)
Optimization by Simulated Annealing. Science, 220, 671-680.
Larrabee, T. (1992). Test pattern generation using Boolean satisfiability. IEEE Transactions on Computer-Aided Design, 1992.
Minton, S., Johnston, M.D., Philips, A.B., and Laird, P. (1990). Solving large-scale constraint satisfaction and scheduling problems using a heuristic repair method. Proceedings AAAI-90, 1990, 17-24. Extended version appeared in Artificial Intelligence, 1992.
Mitchell, D., Selman, B., and Levesque, H.J. (1992). Hard and easy distributions of SAT problems. Proceedings AAAI-92, San Jose, CA, 459-465.
Papadimitriou, C.H. (1991). On Selecting a Satisfying Truth Assignment. Proc. of the Conference on the Foundations of Computer Science, 163-169.
Papadimitriou, C.H. and Steiglitz, K. (1982). Combinatorial optimization. Englewood Cliffs, NJ: Prentice-Hall, Inc.
Pinkas, G. and Dechter, R. (1992). An Improved Connectionist Activation Function for Energy Minimization. Proc. AAAI-92, San Jose, CA, 434-439.
Selman, B. and Kautz, H.A. (1993). Domain-Independent Extensions to GSAT: Solving Large Structured Satisfiability Problems. Proc. IJCAI-93, Chambéry, France.
Selman, B., Levesque, H.J., and Mitchell, D.G. (1992). A New Method for Solving Hard Satisfiability Problems. Proc. AAAI-92, San Jose, CA, 440-446.
Improving Repair-based Constraint Satisfaction Methods by Value Propagation

Nobuhiro Yugami, Yuiko Ohta, Hirotaka Nara
FUJITSU LABORATORIES LTD.
1015 Kamikodanaka, Nakahara-ku, Kawasaki 211, Japan
yugami@flab.fujitsu.co.jp

Abstract
A constraint satisfaction problem (CSP) is a problem to find an assignment that satisfies given constraints. An interesting approach to CSP is a repair-based method that first generates an initial assignment, then repairs it by minimizing the number of conflicts. Min-conflicts hill climbing (MCHC) and GSAT are typical examples of this approach. A serious problem with this approach is that it is sometimes trapped by local minima. This makes it difficult to use repair-based methods for solving problems with many local minima. We propose a new procedure, EFLOP, for escaping from local minima. EFLOP changes the values of mutually dependent variables by propagating changes through satisfied constraints. We can greatly improve the performance of repair-based methods by combining them with EFLOP. We tested EFLOP with graph colorability problems, randomly generated binary CSPs and propositional satisfiability problems. EFLOP improved the performance of MCHC and GSAT for all experiments and was more efficient for large and difficult problems.

Introduction
A constraint satisfaction problem (CSP) is a problem to find an assignment that satisfies given constraints. Approaches to solving CSPs can be classified into constructive methods and repair-based methods [Minton et al. 92]. Constructive methods are based on a tree search and find a solution by incrementally extending a consistent partial assignment. Constraint directed search [Fox 87], arc and path consistency algorithms [Mohr & Henderson 86] and intelligent backtracking such as dependency directed backtracking [Doyle 79] fall into this group. Repair-based methods are based on a local search.
They first generate an initial assignment with conflicts, then repair it by minimizing the number of conflicts. Min-conflicts hill climbing (MCHC) [Minton et al. 90] and GSAT [Selman, Levesque & Mitchell 92] are typical examples. For small- and medium-scale problems, constructive methods show good performance. However, repair-based methods are more practical for large-scale problems. Repair-based methods use local search techniques such as hill climbing to minimize the number of conflicts. Local search techniques do not provide the capability to escape from local minima. It is thus difficult to solve a CSP with many local minima by using a repair-based method. One solution is to use a more powerful local search technique such as simulated annealing (SA). Johnson et al. applied SA to graph colorability problems [Johnson et al. 91]. However, this has a major disadvantage: local search techniques capable of escaping from local minima, such as SA, require a very long time for problem solving. Another solution is combining local search methods with a special procedure for escaping from local minima. [Selman & Kautz 93] and [Morris 93] proposed constraint weighting for escaping from local minima. We propose a new procedure, EFLOP (Escaping From Local Optima by Propagation), to resolve this problem. EFLOP changes the values of mutually dependent variables for escaping from local minima. Satisfied constraints are used for finding such a set of variables and their new values. Repair-based methods call EFLOP when they are trapped by local minima and restart search from an output assignment of EFLOP.

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Repair-based methods and their limitations
Repair-based methods solve a minimization problem of the number of conflicts.
If an original CSP is solvable, then the minimization problem has an optimum assignment with no conflict, and the optimum assignment is a solution of the original CSP. Min-conflicts hill climbing (MCHC) [Minton et al. 90] solves this minimization problem by repairing an assignment with the following simple value selection heuristic.

Min-conflicts heuristic: Select a variable in conflict randomly. Assign it a value that minimizes the number of conflicts. Break ties randomly.

This heuristic guarantees that the number of conflicts of the new assignment is fewer than or equal to that of the old assignment, because it selects the present value if all other values increase the number of conflicts. The number of conflicts thus decreases monotonically. MCHC could solve certain classes of CSPs, e.g., n-queens problems and graph colorability problems for dense graphs, but could not solve other classes of CSPs such as graph colorability problems for sparse graphs [Minton et al. 92]. This is because MCHC has little ability to escape from local minima of the minimization problem. We explain this with an example of a graph colorability problem. Figure 1 shows a local minimum assignment for a 3-colorability problem used by Selman [Selman & Kautz 93] as an example that the basic GSAT could not escape from. Each node (variable) should be colored with one of the three colors, but the color must be different from the colors of neighboring nodes. In Figure 1, only one constraint, x≠y, is violated. The min-conflicts heuristic selects x (or y) and tries to assign a new value to x. However, x=green and x=blue cause two conflicts each, and x=red, the present value, minimizes the number of conflicts. The min-conflicts heuristic thus selects red for x, and MCHC cannot escape from this local minimum assignment. We first discuss the property of local minimum assignments of CSPs.
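The min-conflicts repair step described above can be sketched as follows. This is our illustration, not the paper's code; `conflicts(var, val, assignment)` is assumed to count the constraints that `var=val` would violate under the current assignment:

```python
import random

def min_conflicts_step(variables, domains, conflicts, assignment, rng=random):
    """One MCHC repair step: pick a random conflicted variable and move
    it to a value minimizing its conflict count (ties broken at random).
    Returns False when no variable is in conflict, i.e. a solution."""
    in_conflict = [v for v in variables
                   if conflicts(v, assignment[v], assignment) > 0]
    if not in_conflict:
        return False
    var = rng.choice(in_conflict)
    best = min(conflicts(var, d, assignment) for d in domains[var])
    assignment[var] = rng.choice(
        [d for d in domains[var] if conflicts(var, d, assignment) == best])
    return True
```

Because the present value is among the candidates, the chosen value can never increase the conflict count, which is the monotonicity property noted above.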
Figure 1: A local minimum assignment and a division into consistent sub-problems (sub-problem 1 and sub-problem 2) of a graph 3-colorability problem. Only a constraint between the sub-problems is violated.

The example in Figure 1 can be divided into two sub-problems where all constraints within each sub-problem are satisfied but a constraint between the sub-problems is violated. In general, if an assignment is a local minimum, the problem can be divided into consistent sub-problems where conflicts occur only between different sub-problems. In such a case, changing the value of the conflicting variable causes new conflicts in the sub-problem that the variable belongs to. This increases the total number of conflicts and makes it impossible to escape from local minima with hill-climbing-like methods such as MCHC and GSAT. This property of local minima leads directly to the following procedure for escaping from them.

Step 1: Find a consistent sub-problem that involves a variable in conflict.
Step 2: Change the values of variables in the sub-problem so that the new values satisfy all constraints in the sub-problem.

However, this is not practical because Step 2 requires solving the sub-problem, which takes a long time if the sub-problem is not small. EFLOP avoids this difficulty by combining Step 1 and Step 2, extending the sub-problem incrementally by propagating the changes of values of variables through satisfied constraints. Figure 2 shows the procedure of EFLOP. EFLOP first selects a variable in conflict randomly and changes its value. If this change causes a satisfied constraint to become unsatisfied, EFLOP tries to resolve it by changing the value of another variable in the constraint. If

procedure EFLOP
  Input:  a local minimum assignment;
  Output: an assignment that is not local minimum;
  begin
    select a variable v in conflict randomly;
    change v's value randomly;
    V := {v};
    while possible
      select a constraint c that satisfies the following conditions;
        (c1) c is satisfied before EFLOP is called;
        (c2) c is not satisfied now;
        (c3) there is a variable v in c and a value a of v such that
          (c3-1) v is not in V;
          (c3-2) v's present value is consistent with old values of variables in V;
          (c3-3) v=a makes c satisfied;
          (c3-4) v=a is consistent with new values of variables in V;
      change v's value to a;
      add v to V;
    end-of-while;
  end;

The sub-problem defined by the set of changed variables V is consistent before EFLOP is called because of condition (c3-2), and is also consistent after EFLOP terminates because of condition (c3-4). We explain the EFLOP procedure in detail using the example in Figure 1. EFLOP first selects a variable in conflict and its new value (the new value must be different from the present value) randomly. Let x and green be selected. EFLOP changes the value of x to green and initializes the set of changed variables V = {x}. This new value, green, violates two constraints, x≠z and x≠w. Both of these constraints satisfy conditions (c1)-(c3), so EFLOP selects one of them. Let x≠z be selected. EFLOP tries to satisfy it by changing the value of a variable in it, i.e., x or z, but EFLOP doesn't change x's value because condition (c3-1) requires that each variable is changed at most once. Two variable-value pairs, z=red and z=blue, satisfy condition (c3). EFLOP selects z=blue because z=red causes one new conflict but z=blue causes no new conflict. EFLOP changes z's value to blue and adds z to V. Next, EFLOP tries to satisfy x≠w. For the same reason as with x≠z, EFLOP changes w's value to blue. EFLOP terminates because there is no constraint that satisfies (c1)-(c3).
EFLOP then assigns green to x, blue to z and blue to w, i.e., EFLOP changes the values of the variables in sub-problem 1 (Figure 1) to other values that are consistent among themselves, and the new assignment generated by EFLOP satisfies all constraints.

Figure 2: EFLOP procedure

this second change causes a new constraint violation, EFLOP tries to resolve it by changing the value of a third variable. This is continued until no new conflict can be resolved by changing values of variables that aren't changed yet. If more than one variable-value pair satisfies condition (c3), EFLOP selects one of them with the following heuristic, so as not to propagate value changes to too many variables.

EFLOP heuristic: Select a pair of a variable and its value that minimizes the number of constraints which are satisfied before EFLOP is called but will be violated after the change. Break ties randomly.

The sub-problem defined by the set of changed variables V, and the constraints between variables in V, is consistent before EFLOP is called because of condition (c3-2).

Experiments
In this section, we show the effect of EFLOP by using graph colorability problems, randomly generated binary CSPs and propositional satisfiability problems. EFLOP is not a method for solving CSPs by itself, but a method for improving the performance of repair-based methods. We combined EFLOP with MCHC for binary CSPs and for colorability problems, and combined it with GSAT for SAT. We compared their performance with and without EFLOP. MCHC (GSAT) with EFLOP calls EFLOP whenever it is trapped by a local minimum and restarts from an output assignment of EFLOP. MCHC (GSAT) without EFLOP restarts from a randomly generated initial assignment when it is trapped by a local minimum. We used the C language on a SPARCstation 2 for all experiments.

Graph Colorability Problems
We generated 3-colorability problems of sparse graphs in the way of [Minton et al. 92]. We first divided N nodes into 3 groups with N/3 nodes and randomly created edges between nodes in different groups. If the generated graph had unconnected components, we rejected it. This generation guarantees the solvability of generated problems. We used graphs with 2N edges because [Minton et al. 92] reported the poor performance of MCHC for such problems. Table 1 shows the average numbers of hill climbing steps of MCHC (average of 100 problems for each N) with and without EFLOP for 3-colorability problems with 30-150 nodes. EFLOP improved MCHC drastically, and its effect was larger for large problems than for small problems.

Table 1: Average numbers of hill climbing steps for graph 3-colorability problems. (Columns: nodes, edges, MCHC, MCHC+EFLOP; the numeric entries are not legible in this scan.)

Figure 3: Probability of solvability of random CSPs with 20 variables, 10 values for each variable and 40 constraints. (The plot itself, over constraint strengths 0.5-0.7, is not reproduced here.)

Binary CSPs
We generated binary CSPs with 4 parameters: the number of variables, N, the domain size of each variable, D, the number of constraints, M, and the strength of the constraints, S. The strength of a constraint is the ratio of the number of forbidden value pairs to the number of all value pairs of the two variables in the constraint. This means that (1-S)D^2 value pairs satisfy the constraint and SD^2 value pairs violate it. When generating a problem, we first selected M variable pairs as constraints and, for each constraint (variable pair), we selected (1-S)D^2 permitted value pairs randomly. We did all selections randomly, so a generated problem may not have a solution. This way of generating CSPs was based on [Freuder & Wallace 92].

Table 2: 50% solvable strength and average number of hill climbing steps for binary CSPs. (The numeric entries are not legible in this scan.)

Figure 4: Average numbers of hill climbing steps for 20-variable CSPs.

We first tested EFLOP's effect on CSPs with 20 variables, 10 possible values for each variable and 40 constraints (N=20, D=10, M=40). Figure 3 shows the probability of solvability, the ratio of solvable problems to generated problems, and Figure 4 shows the average numbers of hill climbing steps (average of 100 solvable problems for each strength). MCHC with EFLOP was faster than without at all strengths, and the difference was biggest at the "50% solvable" strength, the strength at which half of the randomly generated problems were solvable. This is because, when S was near this value, the ratio of the number of local minima to the number of solutions was large, so MCHC was trapped by local minima with high probability. Table 2 shows the 50% solvable strengths and average numbers of hill climbing steps (average of 100 solvable problems) at those strengths for CSPs with 20, 40 and 60 variables. For each case, we set M=2N. It is interesting to note that the strength was almost constant. This suggests that the 50% solvable strength depends only on the constraints-to-variables ratio and the domain size. The result was that EFLOP could improve the performance of MCHC in all cases, and the improvement was greater for large problems. MCHC with EFLOP was about 3 times faster than MCHC without EFLOP for problems with 20 variables, and the improvement was more than 10 times for problems with 60 variables.

Satisfiability Problems
A propositional satisfiability problem (SAT) is a problem to determine whether a given logical formula can be satisfied or not, and to find a truth assignment that satisfies the formula if it is satisfiable. The formula is given in conjunctive normal form, and a clause is a constraint that at least one literal in the clause should be true. Selman et al.
proposed an efficient repair-based method for SAT, GSAT [Selman, Levesque & Mitchell 92], and extended GSAT with clause weighting and random walk to overcome the inability to escape from local minima [Selman & Kautz 93]. We used basic GSAT and didn't use these extensions for our experiments because we wanted to know the effect of EFLOP alone. We generated SAT instances with three parameters: the number of variables, N, the number of clauses, M, and the length of a clause, K. For each clause, we first selected K variables randomly and negated each variable with probability 0.5. This generation was based on [Mitchell, Selman & Levesque 92]. We first tested EFLOP with 3-SAT (K=3) problems. Mitchell et al. [Mitchell, Selman & Levesque 92] reported the hardness of 3-SAT. Their conclusion was that 3-SAT was most difficult when the ratio of clauses to variables was the 50% satisfiable ratio, the ratio at which a randomly generated problem is satisfiable with probability 0.5, and this ratio was 4.3 for 3-SAT.

Table 3: Average number of flips for 3-SAT. (The numeric entries are not legible in this scan.)

Table 4: Average number of flips for 3-, 4- and 5-SAT with 50 variables. (The numeric entries are not legible in this scan.)

Table 3 shows the average number of flips (average of 100 satisfiable problems) for randomly generated 3-SAT with M=4.3N. GSAT with EFLOP was about twice as fast as GSAT without EFLOP. The difference between them didn't depend on the number of variables. We also examined the dependency of EFLOP's effect on clause length. Table 4 shows the 50% satisfiable clauses-to-variables ratio for 3-, 4- and 5-SAT and the average number of flips (average of 100 satisfiable problems) at that ratio. EFLOP was more effective for problems with longer clauses, and GSAT with EFLOP was about 10 times faster than GSAT without EFLOP for 5-SAT.

Conclusions
We have proposed a new procedure, EFLOP, for improving the performance of repair-based constraint satisfaction methods such as min-conflicts hill climbing (MCHC) and GSAT. Repair-based methods solve a CSP by minimizing the number of conflicts. The most serious problem of these methods is that they can't escape from local minima. EFLOP propagates changes of values through satisfied constraints and changes the values of mutually dependent variables at once. This enables escaping from local minima and thus improves the performance of repair-based methods. EFLOP is independent of problem domains and can be used with any repair-based method. We tested EFLOP's effect by using graph colorability problems, randomly generated binary CSPs and satisfiability problems. EFLOP improved the performance of MCHC and GSAT, and the improvement was greater for larger and more difficult problems. An interesting question is how effective EFLOP is for structured problems such as planning problems. In structured problems, some variables strongly depend on each other, and a value change for one variable causes many conflicts between such variables. In a planning problem, the selection of an operator is strongly dependent on previous operators and affects selections for following operators through its preconditions and postconditions. This property causes many local minima and makes it very difficult to solve structured problems by repair-based methods [Kautz & Selman 92]. EFLOP changes values of such mutually dependent variables at once, and the ability to do this will improve the performance of repair-based methods for structured problems.

References
[Doyle 79] J. Doyle: A Truth Maintenance System, Artificial Intelligence, Vol. 12, pp.231-272, 1979
[Fox 87] M. Fox: Constraint Directed Search: A Case Study of Job-Shop Scheduling, Morgan Kaufmann Publishers, Inc., 1987
[Freuder & Wallace 92] E.C. Freuder and R.J. Wallace: Partial constraint satisfaction, Artificial Intelligence Vol. 58, pp.21-70, 1992
[Kautz & Selman 92] H. Kautz and B. Selman: Planning as Satisfiability, Proc. ECAI-92, pp.359-363
[Johnson et al. 91] D. S. Johnson, C. R. Aragon, L. A. McGeoch and C.
Schevon: Optimization by simulated annealing: an experimental evaluation, Part II, Operations Research Vol. 39, pp. 378-406, 1991
[Minton et al. 90] S. Minton, M. D. Johnston, A. B. Philips and P. Laird: Solving Large-Scale Constraint Satisfaction and Scheduling Problems Using a Heuristic Repair Method, Proc. of AAAI-90, pp. 17-24, 1990
[Minton et al. 92] S. Minton, M. D. Johnston, A. B. Philips and P. Laird: Minimizing conflicts: a heuristic repair method for constraint satisfaction and scheduling problems, Artificial Intelligence Vol. 58, pp. 161-203, 1992
[Mitchell, Selman & Levesque 92] D. Mitchell, B. Selman and H. Levesque: Hard and Easy Distributions of SAT Problems, Proc. of AAAI-92, pp. 459-465, 1992
[Mohr & Henderson 86] R. Mohr and T. C. Henderson: Arc and Path Consistency Revisited, Artificial Intelligence Vol. 28, pp. 225-233, 1986
[Morris 93] P. Morris: The Breakout Method for Escaping From Local Minima, Proc. of AAAI-93, pp. 40-45, 1993
[Selman, Levesque & Mitchell 92] B. Selman, H. Levesque and D. Mitchell: A New Method for Solving Hard Satisfiability Problems, Proc. of AAAI-92, pp. 440-446, 1992
[Selman & Kautz 93] B. Selman and H. Kautz: Domain-Independent Extensions to GSAT: Solving Large Structured Satisfiability Problems, Proc. of IJCAI-93, pp. 290-295, 1993
Techniques 349 | 1994 | 145 |
1,481 | Planning from First Principles for Geometric Constraint Satisfaction
Sanjay Bhansali, School of EECS, Washington State University, Pullman, WA 99163, bhansali@eecs.wsu.edu
Glenn A. Kramer, Enterprise Integration Technologies, 459 Hamilton Avenue, Palo Alto, CA 94301, gak@eit.com
Abstract. An important problem in geometric reasoning is to find the configuration of a collection of geometric bodies so as to satisfy a set of given constraints. Recently, it has been suggested that this problem can be solved efficiently by symbolically reasoning about geometry using a degrees of freedom analysis. The approach employs a set of specialized routines called plan fragments that specify how to change the configuration of a set of bodies to satisfy a new constraint while preserving existing constraints. In this paper we show how these plan fragments can be automatically synthesized using first principles about geometric bodies, actions, and topology. Operators are actions, e.g. rotate, that change the configuration of geoms, thereby violating or achieving some constraint. An initial state is specified by the set of existing invariants on a geom and a final state by the additional constraints to be satisfied. A plan is a sequence of actions that, when applied to the initial state, achieves the final state.
Introduction
An important problem in geometric reasoning is the following: given a collection of geometric bodies, or geoms, and a set of constraints between them, find a configuration, i.e. position, orientation, and dimension, of the geoms that satisfies all the constraints. Solving this problem is an integral task for constraint-based sketching and design, geometric modeling for computer-aided design, kinematic analysis of robots and other mechanisms, and describing mechanical assemblies. With this formulation, one could presumably use a classical planner, e.g. STRIPS [1], to automatically generate a plan fragment.
However, plan fragment actions are parametric operators with a real-valued domain. Thus, the search space consists of an infinite number of states. Our approach uses loci information (representing a set of points that satisfy some constraints) to reason about the effects of operators and thus reduces the search problem to a problem in topology, involving reasoning about the intersection of various loci. In [3] Kramer suggests that general-purpose constraint satisfaction techniques are not well suited to problems involving complicated geometry. He describes a system called GCE that uses an alternative approach called degrees of freedom analysis. This approach is based on symbolic reasoning about geometry and is more efficient than systems based on algebraic equation solvers. GCE employs a set of specialized routines called plan fragments that specify how to change the configuration of a geom using a fixed set of operators and the available degrees of freedom, so that a new constraint is satisfied while preserving the geom's prior constraints. This approach is canonical: at any point one may choose any constraint to be satisfied without affecting the final result. The resulting algorithm has polynomial time complexity and is more efficient than general-purpose constraint satisfaction algorithms. Another issue to be addressed is the frame problem: how to determine what properties or relationships do not change as a result of an action. A typical solution is to use the assumption that an action does not modify any property or relationship unless explicitly stated as an effect of the action. Such an approach works well if one knows a priori all possible constraints or invariants that might be of interest and relatively few constraints get affected by each action, which is not true in our case. We use a novel scheme for representing the effects of actions. It is based on reifying actions in addition to geoms and invariant types.
We associate, with each pair of geom and invariant type, a set of actions that can be used to achieve or preserve that invariant for that geom. Whenever a new geom or invariant type is introduced, the corresponding rules for actions that can achieve/preserve the invariants are added. Since there are many more invariant types than actions in this domain, this scheme results in simpler rules. A unique feature of our work is the use of geometry-specific matching rules to determine when two or more general actions that achieve/preserve different constraints can be reformulated into a less general action. Since the most interesting part of problem-solving is performed by plan fragments, the success of this approach depends on one's ability to construct a complete set of plan fragments meeting the canonical specification. In this paper we describe how to automatically generate plan fragments using first principles about geoms, actions, and topology. Our approach is based on planning. Plan fragment generation becomes a planning problem by considering the various geoms and invariants on them as describing a state. Another shortcoming of using a conventional planner is the difficulty of representing conditional effects of operators. In GCE an operation's effect depends on the type of geom as well as the particular geometry. E.g., the action of translating a body to the intersection of two lines on a plane normally reduces the body's translational degrees of freedom to zero; however, if the two lines happen to coincide then the body still retains one translational degree of freedom, and if the two lines are parallel but do not coincide then the action fails. Kramer calls such situations degeneracies.
From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.
One approach to handling degeneracies is to use a reactive planner that dynamically revises its plan at run-time.
However, this could result in unacceptable performance in many real-time applications. Our approach makes it possible to pre-compile all potential degeneracies in the plan. We achieve this by dividing the planning algorithm into two phases. In the first phase a skeletal plan is generated that works in the normal case, and in the second phase this skeletal plan is refined to take care of singularities and degeneracies. This approach is similar to the idea of refining skeletal plans in MOLGEN [2] and the idea of critics in HACKER [4] to fix known bugs in a plan. However, the skeletal plan refinement in MOLGEN essentially consisted of instantiating a partial plan to work for specific conditions, whereas in our method a complete plan which works for a normal case is extended to handle special conditions like degeneracies.
A Plan Fragment Example. The following is a simple example of a plan fragment specification that is also used to illustrate our approach. The example is illustrated in Figure 1.
Figure 1. Example problem (initial state)
Geom-type: circle
Name: $c
Invariants: (fixed-distance-line $c $L1 $d1 $BIAS-COUNTER-CLOCKWISE)
To-be-achieved: (fixed-distance-line $c $L2 $dist2 $BIAS-CLOCKWISE)
In this example, a variable radius circle $c has a prior constraint specifying that the circle is at a fixed distance $dist1 to the left of a fixed line $L1 (or alternatively, the line $L1 is tangent in a counterclockwise direction to the circle). A new constraint to be satisfied is that the circle be at a fixed distance $dist2 to the right of another line $L2. To solve this problem, three different plans can be used: (a) translate the circle from its current position to a position such that it touches the two lines $L2' and $L1' shown in the figure. (b) scale the circle while keeping its point of contact with $L1' fixed, so that it touches $L2'. (c) scale and translate the circle so that it touches both $L2' and $L1'.
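Plan (a) above amounts to translating the circle's center to the intersection of two displaced lines: one at distance $dist1 from $L1 and one at distance $dist2 from $L2. A small 2D sketch of that computation follows, under our own point-direction line representation; GCE's actual service routines are not shown in the paper, so the function names here are ours.

```python
def offset_line(p, d, dist, side):
    """Displace the line through point p with unit direction d by dist along
    its normal; side = +1 or -1 picks which side of the line."""
    nx, ny = -d[1], d[0]                      # left normal of direction d
    return ((p[0] + side * dist * nx, p[1] + side * dist * ny), d)

def intersect(l1, l2):
    """Intersection point of two 2D lines in point-direction form,
    or None if they are parallel (the degenerate case)."""
    (p1, d1), (p2, d2) = l1, l2
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < 1e-12:
        return None
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

For instance, if $L1 is the x-axis with $dist1 = 1 and $L2 is the y-axis with $dist2 = 2, the new center is the intersection of the lines y = 1 and x = 2, i.e. (2, 1).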
Each of the above plan fragments would be available to GCE from a plan-fragment library. Note that some of the plan fragments would not be applicable in certain situations. For example, if $L1 and $L2 are parallel, then a single translation can never achieve both the constraints, and plan-fragment (a) would not be applicable. We show how each of the plan fragments can be automatically synthesized by reasoning from more fundamental principles. (We use the following conventions: symbols preceded by $ represent constants, symbols preceded by ? represent variables, and expressions of the form (>> parent subpart) denote the subpart of a compound term, parent.)
Overview of System Architecture
Figure 2 depicts the architecture of our system showing the various knowledge components and the plan generation process. The knowledge represented in the system is broadly categorized into a Geom Knowledge-base that contains knowledge specific to geometric entities and a Geometry Knowledge-base that is independent of particular geoms and can be reused for generating plan fragments for any geom. The geom-specific knowledge consists of six knowledge components:
1. Actions. These describe operations that can be performed on geoms. In the GCE domain, three actions suffice to change the configuration of a body to an arbitrary configuration: (translate g v), which denotes a translation of geom g by vector v; (rotate g pt ax amt), which denotes a rotation of geom g, around point pt, about an axis ax, by an angle amt; and (scale g pt amt), where g is a geom, pt is a point about which g is scaled, and amt is a scalar.
2. Invariants. These describe constraints to be solved for the geoms. The initial version of our system has been designed to generate plan fragments for a variable-radius circle on a fixed workplane, with constraints that are distances between these circles and points, lines, and other circles on the same workplane. There are seven invariant types to represent these constraints.
An example of an invariant is: (Fixed-distance-point g pt dist bias), which specifies that the geom g lies at a fixed distance dist from point pt; bias specifies whether g lies inside or outside a circle of radius dist around point pt.
3. Loci. These represent sets of possible values for a geom parameter, e.g. the position of a point on a geom. The various kinds of loci can be grouped into either a 1d-locus (described by an equation of 1 variable) or a 2d-locus (described by an equation of 2 variables). For example, (make-line-locus through-pt direc) represents an infinite line (a 1d-locus) passing through through-pt and having direction direc. Other loci in the system include rays, circles, parabolas, hyperbolas, and ellipses.
4. Measurements. These are used to represent the computation of some function, object, or relationship between objects. These terms are mapped into a set of service routines which are called by the plan fragments. An example of a measurement term is: (0d-intersection 1d-locus1 1d-locus2). This represents the intersection of two 1d loci. In the normal case, the intersection of two 1d loci is a point. Singular cases occur when the two loci happen to coincide; in such a case their intersection returns one of the loci instead of a point. Degenerate cases occur when the two loci do not intersect; in such cases, the intersection is undefined. These exceptional conditions are used during the second phase of the plan generation process to elaborate a skeletal plan (see Section 3.3).
Figure 2. Architectural Overview of the Plan Fragment Generator
5. Geoms. These are the objects of interest in solving geometric constraint satisfaction problems. Examples of geoms are lines, line-segments, circles, and rigid bodies. Geoms have degrees of freedom which allow them to vary in location and size.
For example, in 3D-space, a circle with a variable radius has three translational, two rotational, and one dimensional degree of freedom. The configuration variables of a geom are defined as the minimal number of real-valued parameters required to specify the geometric entity in space unambiguously. Thus, a circle has six configuration variables (three for the center, one for the radius, and two for the plane normal). In addition, the representation of each geom includes the following: name: a unique symbol to identify the geom; invariant-descriptors: a set of rules that describe how invariants on the geom can be preserved or achieved by actions (see below); invariants: the set of current invariants on the geom; invariants-to-be-achieved: the set of invariants that need to be achieved for the geom.
6. Action Rules (Invariant Descriptors). An action rule describes the effect of an action on an invariant. The planner must know: (1) how to achieve an invariant using an action, and (2) how to choose actions that preserve as many of the existing invariants as possible. In general, there are several ways to achieve an invariant and several actions that will preserve one or more invariants. The intersection of these two sets of actions is the set of feasible solutions. In our system, the effect of actions is represented as part of geom-specific knowledge in the form of action rules, whereas knowledge about how to compute intersections of two or more sets of actions is represented as geometry-specific knowledge (since it does not depend on the particular geom being acted on). An action rule is a three-tuple (pattern, to-preserve, to-[re]achieve). Pattern is the invariant of interest; to-preserve is a list of actions that can be taken without violating the pattern invariant; and to-[re]achieve is a list of actions that can be taken to achieve the invariant or re-achieve an existing invariant "clobbered" by an earlier action. These actions are stated in the most general form.
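The feasible-solution computation described above (intersect the actions that can achieve the new invariant with the actions that preserve each existing invariant) can be illustrated with a toy rule table. The invariant and action names below are simplified placeholders of ours; in GCE the actions are parametric, so computing the intersection requires the geometric matching rules rather than plain set intersection.

```python
# Toy action-rule table: invariant type -> (actions that preserve it,
# actions that can achieve it). Names are illustrative only.
RULES = {
    "1d-constrained-point": (["scale-about-center", "translate-along-locus"],
                             ["translate-along-locus"]),
    "fixed-distance-line":  (["translate-along-locus", "scale-about-center"],
                             ["translate-along-locus", "scale-about-center"]),
}

def feasible(new_invariant, existing_invariants):
    """Actions that achieve the new invariant while preserving every
    existing invariant: the intersection of the two kinds of action sets."""
    actions = set(RULES[new_invariant][1])
    for inv in existing_invariants:
        actions &= set(RULES[inv][0])
    return actions
```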
The matching rules in the Geometry Knowledge-base are then used to obtain the most general unifier of two or more actions. An example of an invariant descriptor, associated with variable radius circle geoms, is:
pattern: (1d-constrained-point ?c (>> ?c CENTER) ?1dlocus)
to-preserve: (scale ?c (>> ?c CENTER) ?any) (translate ?c (v- (>> ?1dlocus ARBITRARY-PT) (>> ?c CENTER)))
to-[re]achieve: (translate ?c (v- (>> ?1dlocus ARBITRARY-PT) (>> ?c CENTER)))
This descriptor is used to preserve or achieve the constraint that the center of a circle geom lie on a 1d locus. Two actions may be performed without violating this constraint: (1) scale the circle about its center, and (2) translate the circle by a vector that goes from its current center to an arbitrary point on the 1d locus. To achieve this invariant only one action may be performed: translate the circle so that its center moves from its current position to an arbitrary position on the 1-dimensional locus. The geometry-specific knowledge is organized as three different kinds of rules:
1. Matching Rules. These are used to match terms using geometric properties. The planner employs a unification algorithm to match actions and determine whether two actions have a common unifier. Here standard unification is not sufficient since it is purely syntactic and does not use knowledge about geometry. To illustrate this, consider the two actions: (i) (rotate $g $pt1 ?vec1 ?amt1), and (ii) (rotate $g $pt2 ?vec2 ?amt2). Each denotes a rotation of a fixed geom $g, around a fixed point, about an arbitrary axis, by an arbitrary amount. Standard unification fails when applied to the above terms because no binding of variables makes the two terms syntactically equal. However, knowledge about geometry allows matching the two terms to yield (rotate $g $pt1 (v- $pt2 $pt1) ?amt1), denoting a rotation of the geom around the axis passing through $pt1 and $pt2.
The point around which the body is rotated can be any point on the axis (here arbitrarily chosen as $pt1) and the amount of rotation can be anything.
2. Reformulation Rules. These are used to rewrite pairs of invariants on a geom into an equivalent pair of simpler invariants (using a well-founded ordering). Here equivalence means that the two sets of invariants produce the same range of motions in the geom. Besides reducing the number of plan fragments, reformulation rules also help to simplify action rules. Currently all action rules (for variable radius circles and line-segments) use only a single action to preserve or achieve an invariant. If we do not restrict the allowable signatures on a geom, it is possible to create examples where we need a sequence of (more than one) actions in the rule to achieve the invariant, or we need complex conditions that need to be checked to determine rule applicability. Allowing sequences and conditionals in the rules increases the complexity of both the rules and the pattern matcher. This makes it difficult to verify the correctness of rules and reduces the efficiency of the pattern matcher.
3. Prioritizing Rules. Given a set of invariants to be achieved on a geom, a planner generally creates multiple solutions. A majority of the solutions contain redundant actions which can be easily eliminated (e.g. if there are two consecutive translations, they can be replaced by a single translation). However, after such redundant actions are eliminated, the planner may still have multiple solutions. A set of rules called prioritizing rules are then used to choose a preferred solution. We have identified two types of prioritizing rules:
1. Prefer solutions that subsume an alternative solution. This rule permits more flexibility in resolving degeneracies in the plan fragment later.
2. Choose the solution that minimizes a geom's motion as measured by a motion function. This rule reflects that in most applications, e.g.
computer-aided sketching, it is desirable to produce the least amount of perturbation to a geometric system in order to satisfy a set of constraints.
Plan Fragment Generation
The plan fragment generation process is divided into two phases. In the first phase a specification of the plan fragment is taken as input, and a planner is used to generate a set of skeletal plans. These form the input to the second phase, which chooses one of the skeletal plans and elaborates it to take care of singularities and degeneracies. The output of this phase is a complete plan fragment.
Phase I
A skeletal plan is generated using a breadth-first search process. Figure 3 gives the general form of a search tree produced by the planner. The planner first tries the reformulation rules to rewrite the geom invariants into a canonical form. Next, the planner searches for actions that produce a state in which at least one invariant in the Preserved list is preserved or at least one invariant in the To-be-achieved (TBA) list is achieved. The preserved and achieved invariants are pushed into the Preserved list, and the clobbered or unachieved invariants are pushed into the TBA list of the child state. The above strategy will produce intermediate nodes in the search tree which might clobber one or more preserved invariants without achieving any new invariant.
Figure 3. Overview of the search tree produced by the planner
The planner iteratively expands each leaf node in the search tree until one of the following is true: (1) The node represents a solution, i.e. the TBA list is nil. (2) The node represents a cycle, i.e. the Preserved and TBA lists are identical to an ancestor node. The node is then marked as terminal and the search tree is pruned at that point. If all leaf nodes are marked as terminal, then the search terminates. The plan-steps of each of those solution nodes represent a skeletal plan fragment.
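The Phase I search just described can be sketched as a breadth-first search over (Preserved, TBA) states. This is our simplification: `successors` abstracts away the action-rule matching, a node equal to one of its path ancestors is pruned as a cycle, and a depth cutoff stands in for the full termination test.

```python
from collections import deque

def skeletal_plans(preserved, tba, successors, max_depth=6):
    """BFS over (Preserved, TBA) states. `successors(p, t)` yields tuples
    (action, new_preserved, new_tba). A node with an empty TBA list is a
    solution; a node identical to an ancestor on its path is pruned."""
    start = (frozenset(preserved), frozenset(tba))
    queue = deque([(start, [], {start})])
    solutions = []
    while queue:
        (p, t), plan, seen = queue.popleft()
        if not t:
            solutions.append(plan)           # TBA empty: a skeletal plan
            continue
        if len(plan) >= max_depth:
            continue
        for action, np, nt in successors(p, t):
            state = (frozenset(np), frozenset(nt))
            if state in seen:
                continue                     # cycle: prune this branch
            queue.append((state, plan + [action], seen | {state}))
    return solutions
```

A toy successor function with one prior invariant and one to be achieved reproduces the pattern from the paper's example: a one-step plan that satisfies both at once, and a two-step plan whose first action clobbers the prior invariant and must re-achieve it.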
When multiple skeletal plan fragments are obtained, the planner chooses one of them using the prioritizing rule described earlier and passes it to the second phase of the plan fragment generation.
Phase I: Example
We use the example of Section 1 to illustrate Phase I of the planner. The planner begins by trying to reformulate the constraints. It uses a reformulation rule to produce the search tree shown in Fig. 4. Next, the planner searches for actions that can achieve the new invariant or preserve the existing invariant. We only describe the steps involved in finding actions that satisfy the maximal number of constraints (in this case, two). The planner first finds all actions that achieve the 1d-constrained-point invariant by examining the action rules associated with the variable circle geom. The action rule given in Section 1 contains a pattern that matches the 1d-constrained-point invariant. The relevant action after the appropriate substitutions is:
(translate $c (v- (>> (angular-bisector (make-displaced-line $L1 $BIAS-LEFT $d1) (make-displaced-line $L2 $BIAS-RIGHT $d2)) arbitrary-pt) (>> $c center)))
Similarly, the planner finds all actions that will preserve the fixed-distance-line invariant. Matching and performing the appropriate substitutions yields the single action:
(translate $c (v- (>> (make-line-locus (>> $c center) (>> $L1 direction)) arbitrary-point) (>> $c center)))
Figure 4. Search tree after reformulating invariants:
Preserved: (fixed-distance-line $c $L1 $d1 $BIAS-CTR-CLOCKWISE)
TBA: (1d-constrained-point $c (>> $c CENTER) (angular-bisector (make-displaced-line $L1 $BIAS-LEFT $d1) (make-displaced-line $L2 $BIAS-RIGHT $d2) $BIAS-CTR-CLOCKWISE $BIAS-CLOCKWISE))
Now, to find an action that both preserves the preserved invariant and achieves the TBA invariant, the planner attempts to match the preserving action with the achieving action.
The two actions do not match using standard unification, but match employing the following geometry-specific matching rule:
(v- (>> $1d-locus1 arbitrary-point) $to)
(v- (>> $1d-locus2 arbitrary-point) $to)
(v- (0d-intersection $1d-locus1 $1d-locus2) $to)
(To move to an arbitrary point on two different loci, move to the intersection point of the two loci) to yield the following action:
(translate $c (v- (0d-intersection (angular-bisector (make-displaced-line ...)) (make-line-locus (>> $c center) (>> $L1 direction))) (>> $c CENTER)))
This action moves the circle to the point shown in Figure 5 and achieves both the constraints. This single one-step plan constitutes a skeletal plan fragment. There are two other actions that are generated by the planner in the first iteration, one of which achieves the new constraint but clobbers the prior invariant, while the second one moves the circle to another configuration without achieving the new constraint but preserving the prior constraint.
Figure 5. The O denotes the point to which the circle is moved.
After two iterations the following solutions are obtained: (1) Translate to the intersection of the angular-bisector and make-line-locus. (2) Translate to an arbitrary point on the angular-bisector, followed by a translation to the intersection point. (3) Translate to an arbitrary point of make-line-locus, followed by a translation to the intersection point. (4) Translate to an arbitrary point on the angular-bisector and then scale. At this stage the first phase of the plan fragment generation is terminated and the skeletal plan fragments are passed on to the second phase of the planner.
Phase 2: Elaboration of Skeletal Plan
The first step in Phase 2 is to select one of the skeletal plan fragments. The system begins this process by first eliminating all redundant steps in a plan.
Elimination of Redundant Plan-steps.
We assume that there is only one degree of dimensional freedom for each geometric body. Under this assumption it can be proved that one translation, one rotation, and one scaling are sufficient to change the configuration of an object to an arbitrary configuration in 3D space. Therefore, any plan fragment that contains more than one instance of any action type contains redundancies and can be rewritten to an equivalent plan fragment by eliminating redundant actions, or combining two or more actions into a single composite action. As an example, consider the following pair of translations on a geom:
(i) (translate $g ?vec)
(ii) (translate $g (v- ?to2 (>> $g center)))
where ?vec represents an arbitrary vector and ?to2 represents an arbitrary position. If ?to2 is independent of any positional parameter of the geom, then the first translate action is redundant and can be removed. After eliminating redundant plan-steps, the system selects one of the plan fragments using the prioritizing rule described earlier, i.e. it selects one of the plan fragments that subsumes the maximal number of other plan fragments.
Least Motion. The least motion principle is meant to reduce the total perturbation in a geometric configuration when satisfying a set of new constraints. This is done by first defining a motion function, C_A,G, for each action, A, and geom type, G. For example, for a translation of a circle, the motion function C_T,circle could be the square of the displacement of the center of the circle from its initial to its final position. Next, we choose a motion summation function that sums the motion produced by individual actions on a geom. An example of the summation function is the normal addition operator, plus. The total motion for a geom is computed using the summation function and the motion functions for action-geom pairs.
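With the squared-displacement motion function for translations, grounding a translate-to-an-arbitrary-point-on-a-line step by least motion reduces to orthogonally projecting the current center onto the line locus: minimizing C_T,circle = ||p - center||^2 over points p on the line gives the projection. A sketch under our own 2D point-direction representation (the function name is ours):

```python
def least_motion_point(center, line_point, line_dir):
    """Point on the 1d line locus (line_point + t * line_dir) minimizing the
    squared displacement from `center`: the orthogonal projection."""
    ux, uy = line_dir
    norm2 = ux * ux + uy * uy
    t = ((center[0] - line_point[0]) * ux +
         (center[1] - line_point[1]) * uy) / norm2
    return (line_point[0] + t * ux, line_point[1] + t * uy)
```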
When a plan fragment is not deterministic, the expression representing the total motion would contain one or more variables representing the ungrounded parameters of the geom. If the motion function and the summation function are chosen so that the resultant expression is analytically evaluable, then we can compute the values of the variables that would minimize the expression representing the total motion. Substituting these values back in the plan fragment results in a plan fragment producing the least motion.
Exception Handling. Exceptional conditions occur when geometric entities are positioned so that the solution of a set of constraints results in either a solution that has extra degrees of freedom or results in no solution. To detect and characterize degeneracies, we begin by enumerating the elementary geoms and grouping possible values of these elements into equivalence classes. An equivalence class represents a set of solutions that are all associated with some kind of exception condition or a normal situation. Whenever an expression contains a subterm whose value is instantiated at run-time, the computed value is checked for membership in one of the equivalence classes above to see if this situation represents an exception. To implement this, the representation of each elementary geom contains an additional attribute called solution-type which denotes the equivalence class of solution type for that geom. Each service routine that computes one of these geoms returns both the solution type and the solution value(s). For an aggregate type the exceptional cases are derived by taking the cross product of the exceptional cases of each of the components of the aggregate. In general, the characterization of the value of a subterm as an exception depends on the context in which it occurs.
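A service routine of this kind, returning a solution type together with the value, can be sketched for the 0d-intersection of two line loci. This is our 2D reconstruction; the solution-type names follow the normal/singular/degenerate classification described earlier, not GCE's actual vocabulary.

```python
def line_line_intersection(l1, l2, eps=1e-9):
    """0d-intersection of two 1d line loci in point-direction form.
    Returns (solution_type, value): ('point', (x, y)) in the normal case,
    ('coincide', l1) when the loci are identical (singular case), and
    ('undefined', None) for parallel, non-coincident lines (degenerate)."""
    (p1, d1), (p2, d2) = l1, l2
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < eps:
        # Same direction: coincident iff p2 lies on l1.
        cross = (p2[0] - p1[0]) * d1[1] - (p2[1] - p1[1]) * d1[0]
        return ("coincide", l1) if abs(cross) < eps else ("undefined", None)
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / det
    return ("point", (p1[0] + t * d1[0], p1[1] + t * d1[1]))
```

A caller can then branch on the returned solution type, which is exactly the structure the case-statement elaboration below exploits.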
Thus, to determine exception conditions in a plan fragment, we enumerate for each action term all possible exception conditions of its arguments that might cause the action to fail. The elaboration of the skeletal plan fragment consists of converting each action in the plan into a case statement. The conditions of the case statement represent one of the exception conditions of interest for the corresponding action. The body of each conditional branch represents the action to be taken to deal with the exception. Depending upon the exception, the action might involve choosing one solution from several alternatives, or generating an error message describing why the action failed.
Phase II: Example
Four skeletal plan fragments were generated in the first phase of the planner. Using the rule for eliminating redundant translations given earlier, the second and third plan fragments can be reduced to single translation plan fragments equivalent to the first plan fragment. This leaves only two distinct plan fragment solutions to consider. Using the prioritizing rule, the system concludes that the first plan fragment, consisting of a single translation, is subsumed by the second plan fragment, consisting of a translation and scaling. Thus, the second plan fragment is chosen as the preferred solution. Next, the system discovers that the plan fragment is not deterministic since it contains an action that translates the circle geom to an arbitrary point on the angular-bisector. It grounds the plan by finding a fixed point on the locus based on least motion (due to space limitations a full discussion on computing least motion is omitted). The grounded plan fragment is:
(translate $c (v- (compute-least-motion-points ...) (>> $c center)))
(scale $c (>> $c center) (line-point-distance $L1 (>> $c center)))
Next, each action in the above plan fragment is transformed into a case statement:
(let ((vector (v- (compute-least-motion-points ...)
(>> $c center))))
  (case vector.solution-type
    (zero-vector (nop) ...)
    (one-of-N (funcall #'select-one-from-N vector))
    (not-ground (print "Error: ..."))
    (undefined (print "Error: ..."))
    (t (translate $c vector.value))))
(let ((amt (line-point-distance $L1 (>> $c center))))
  (case amt.solution-type
    (zero (print "Zero dimension error ..."))
    (negative (print "Bias inconsistent ..."))
    (undefined (print "Error: ..."))
    (t (scale $g (>> $g center) amt.value))))
This plan fragment is very concise, containing only the logic for solving a set of constraints; most of the other functionality that used to be part of the plan fragment is pushed down to service routines that deal with topology.
Conclusions and Future Work
We have described an automatic plan fragment generation methodology that can synthesize plan fragments for geometric constraint satisfaction systems by reasoning from first principles about geometric entities, actions, and topology. We implemented the first phase of the planner and used it to synthesize plan fragments for variable-radius circle and line-segment geoms and are currently implementing Phase II of the planner. Further work includes extending and evaluating the approach to handle more complex (e.g. 3-d) geoms and constraints and pushing the automation one level further so as to automatically acquire some types of knowledge from simpler building blocks. For example, a technique for automatically synthesizing the least motion function from some description of the geometry would be very useful.
Acknowledgment: This work was done while the first author was with the Knowledge Systems Laboratory, Stanford and the second author was at the Schlumberger Laboratory for Computer Science, Austin.
References
[1] Fikes, R. E. and Nilsson, N. J., "STRIPS: a new approach to the application of theorem proving to problem solving", Artificial Intelligence 2, 1971, pp. 189-208.
[2] Friedland, P. E., "Knowledge-based experiment design in molecular genetics," Technical Report No. 79-771, Computer Science Department, Stanford University, 1979.
[3] Kramer, G. A., "A Geometric Constraint Engine," Artificial Intelligence, 58(1-3), 1992, pp. 327-360.
[4] Sussman, G. J., A computer model of skill acquisition, American Elsevier: New York, 1975.

324 Constraint Satisfaction
GENET: A Connectionist Architecture for Solving Constraint Satisfaction Problems by Iterative Improvement*

Andrew Davenport, Edward Tsang, Chang J. Wang and Kangmin Zhu
Department of Computer Science, University of Essex, Wivenhoe Park, Colchester, Essex CO4 3SQ, United Kingdom.
{daveat,edward,cwang,kangmin}@essex.ac.uk

Abstract

New approaches to solving constraint satisfaction problems using iterative improvement techniques have been found to be successful on certain, very large problems such as the million queens. However, on highly constrained problems it is possible for these methods to get caught in local minima. In this paper we present GENET, a connectionist architecture for solving binary and general constraint satisfaction problems by iterative improvement. GENET incorporates a learning strategy to escape from local minima. Although GENET has been designed to be implemented on VLSI hardware, we present empirical evidence to show that even when simulated on a single processor GENET can outperform existing iterative improvement techniques on hard instances of certain constraint satisfaction problems.

Introduction

Recently, new approaches to solving constraint satisfaction problems (CSPs) have been developed based upon iterative improvement (Minton et al. 1992; Selman & Kautz 1993; Sosic & Gu 1991). This technique involves first generating an initial, possibly "flawed" assignment of values to variables, then hill-climbing in the space of possible modifications to these assignments to minimize the number of constraint violations. Iterative improvement techniques have been found to be very successful on certain kinds of problems; for instance, the min-conflicts hill-climbing search (Minton et al. 1992) can solve the million queens problem in seconds, while GSAT can solve hard propositional satisfiability problems much larger than those which can be solved by more conventional search methods. These methods do have a number of drawbacks.
Firstly, many of them are not complete. However, the size of problems we are able to solve using iterative improvement techniques can be so large that to do a complete search would, in many cases, not be possible anyway. (*Andrew Davenport is supported by a Science and Engineering Research Council Ph.D. Studentship. This research has also been supported by a grant (GR/H75275) from the Science and Engineering Research Council.) A more serious drawback to iterative improvement techniques is that they can easily get caught in local minima. This is most likely to occur when trying to solve highly constrained problems where the number of solutions is relatively small.

In this paper we present GENET, a neural-network architecture for solving finite constraint satisfaction problems. GENET solves CSPs by iterative improvement and incorporates a learning strategy to escape local minima. The design of GENET was inspired by the heuristic repair method (Minton et al. 1992), which was itself based on a connectionist architecture for solving CSPs, the Guarded Discrete Stochastic (GDS) network (Adorf & Johnston 1990). Since GENET is a connectionist architecture it is capable of being fully parallelized. Indeed, GENET has been designed specifically for a VLSI implementation.

After introducing some terminology, we describe a GENET model which has been shown to be effective for solving binary CSPs (Wang & Tsang 1991). We introduce extensions to this GENET model to enable it to solve problems with general constraints. We present experimental results comparing GENET with existing iterative improvement techniques on hard graph coloring problems, on randomly generated general CSPs and on the Car Sequencing Problem (Dincbas, Simonis, & Van Hentenryck 1988). Finally, we briefly explain what we expect to gain by using VLSI technology.
Terminology

We define a constraint satisfaction problem as a triple (Z, D, C) (Tsang 1993), where:

Z is a finite set of variables;

D is a function which maps every variable in Z to a set of objects of arbitrary type. We denote by D_x the set of objects mapped by D from x, where x ∈ Z. We call the set D_x the domain of x and the members of D_x possible values of x;

C is a set of constraints. Each constraint in C restricts the values that can be assigned to the variables in Z simultaneously. A constraint is a nogood if it forbids certain values being assigned to variables simultaneously.

(From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.)

An n-ary constraint applies to n variables. A binary CSP is one with unary and binary constraints only. A general CSP may have constraints on any number of variables. We define a label, denoted by (x, v), as a variable-value pair which represents the assignment of value v to variable x. A compound label is the simultaneous assignment of values to variables. We use ((x_1, v_1), ..., (x_n, v_n)) to denote the compound label of assigning v_1, ..., v_n to x_1, ..., x_n respectively. A k-compound label assigns k values to k variables simultaneously. A solution tuple of a CSP is a compound label for all the variables in the CSP which satisfies all the constraints.

Binary GENET

Network Architecture

The GENET neural network architecture is similar to that of the GDS network. In the GENET network each variable i in Z is represented by a cluster of label nodes, one for each value j in its domain. Each label node may be in one of two states, "on" or "off". The state S_(i,j) of a label node representing the label (i, j) indicates whether the assignment of the value j to variable i is true in the current network state. The output V_(i,j) of a label node is 1 if S_(i,j) is "on" and 0 otherwise. All binary constraints in GENET must be represented by nogood ground terms.
Binary constraints are implemented as inhibitory (negatively weighted) connections between label nodes which may be modified as a result of learning. Initially all weights are set to -1. The input I_(i,j) to each label node is the weighted sum of the outputs of all the connected label nodes:

  I_(i,j) = Σ_{k ∈ Z, l ∈ D_k} W_(i,j)(k,l) V_(k,l)    (1)

where W_(i,j)(k,l) is the connection weight between the label nodes representing the labels (i, j) and (k, l). (If there is no constraint between two label nodes representing (i, j) and (k, l), then W_(i,j)(k,l) = 0. Morris (Morris 1993) has recently reported a mechanism similar to the learning scheme described below for escaping minima.)

Since there are only connections between incompatible label nodes, the input to a label node gives an indication of how much constraint violation would occur should the label node be in an on state. If no violation would occur, the input would be a maximum of zero. A CSP is solved when the input to all the on label nodes is zero; such a state is called a global minimum.

Each cluster of label nodes is governed by a modulator which effectively implements a variation of the min-conflicts heuristic (Minton et al. 1992). The purpose of the modulator is to determine which label node in the cluster is to be on. Only one label node in a cluster may be on at any one time. The modulator selects the label node with the highest input to be on, with ties being broken randomly. When the modulator changes the label node which is on in a cluster we say it has made a repair.

GENET Convergence Procedure

A state of a GENET network represents a complete assignment of values to variables, i.e., exactly one label node in each cluster is on. The initial state of the GENET network is determined randomly: one label node per cluster is selected arbitrarily to be on. GENET iterates over convergence cycles until it finds a global minimum. We define a convergence cycle as:

1. for each cluster in parallel, update the states of all label nodes in the cluster,
2. if none of the label nodes changed state in step 1, then
   (a) if the input to all on nodes is zero, then a solution has been found: terminate,
   (b) else activate learning,
3. goto step 1.

(Clusters should not update their states at exactly the same time, since this may cause the network to oscillate between a small number of states indefinitely. In a VLSI implementation we would expect the clusters to update at slightly different times.)

Learning

Like most hill-climbing searches, GENET can reach locally optimal points in the search space where no more improvements can be made to the current state; in this case we say the network is in a minimum. A local minimum is a minimum in which constraints are violated. GENET can sometimes escape such minima by making sideways moves to other states of the same "cost". However, in some minima this is not possible, in which case we say the network is in a single-state minimum. To escape local minima we adjust the weights on the connections between label nodes which violate a constraint according to the following rule:

  W^{t+1}_(i,j)(k,l) = W^t_(i,j)(k,l) - V_(i,j) V_(k,l)    (2)

where W^t_(i,j)(k,l) is the connection weight between label nodes representing (i, j) and (k, l) at time t. By using weights we associate with each constraint a cost of violating that constraint. We can also associate with each GENET network state a cost, which is the sum of the magnitudes of the weights of all the constraints violated in that state. Learning has the effect of "filling in" local minima by increasing the cost of violating the constraints which are violated in the minimum. After learning, constraints which were violated in the minimum are less likely to be violated again. This can be particularly useful in structured CSPs where some constraints are more critical than others (Selman & Kautz 1993). Learning is activated when the GENET network state remains unchanged after a convergence cycle.
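The convergence cycle and learning rule above can be condensed into a short sketch. This is an illustrative, sequential stand-in for the parallel network (all function and variable names are ours, not GENET's), shown here solving a tiny 3-coloring instance:

```python
import random

def node_input(var, val, state, nogoods):
    """I_(i,j): weighted sum of the outputs of connected 'on' label nodes."""
    total = 0
    for ((i, a), (j, b)), w in nogoods.items():
        if (i, a) == (var, val) and state[j] == b:
            total += w
        if (j, b) == (var, val) and state[i] == a:
            total += w
    return total

def genet(domains, nogoods, max_cycles=1000, rng=random.Random(0)):
    """Binary GENET convergence loop with learning (illustrative sketch)."""
    state = {v: rng.choice(vals) for v, vals in domains.items()}
    for _ in range(max_cycles):
        changed = False
        for var in domains:  # sequential stand-in for the parallel cluster update
            inputs = {v: node_input(var, v, state, nogoods) for v in domains[var]}
            best = max(inputs.values())
            choice = rng.choice([v for v, x in inputs.items() if x == best])
            if choice != state[var]:
                state[var], changed = choice, True
        if not changed:
            if all(node_input(v, state[v], state, nogoods) == 0 for v in domains):
                return state               # global minimum: solution found
            for pair in list(nogoods):     # learning: W <- W - V_(i,j) V_(k,l)
                (i, a), (j, b) = pair
                if state[i] == a and state[j] == b:
                    nogoods[pair] -= 1
    return None

# 3-colour a triangle: one nogood per edge and colour (same colour at both ends)
colours = ["r", "g", "b"]
domains = {0: colours, 1: colours, 2: colours}
edges = [(0, 1), (1, 2), (0, 2)]
nogoods = {((i, c), (j, c)): -1 for i, j in edges for c in colours}
solution = genet(domains, nogoods)
```

Each weight starts at -1 and only ever decreases, so a repeatedly violated nogood becomes increasingly expensive, which is exactly the "filling in" of local minima described above.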
Thus learning may occur when GENET, given the choice of a number of possible sideways moves to states of the same cost, makes a sideways move back to its current state. We consider this a useful feature of GENET since it allows the network to escape more complicated multi-state minima composed of a "plateau" of states of the same cost.

A consequence of learning is that we can show GENET is not complete. This is because learning affects many other possible network states as well as those that compose the local minimum. As a result of learning, new local minima may be created. A discussion of the problems this may cause can be found in (Morris 1993).

General Constraints

Many real-life CSPs have general constraints, e.g., scheduling and car sequencing (Dincbas, Simonis, & Van Hentenryck 1988). In this section we describe how we can represent two types of general constraint, the illegal constraint and the atmost constraint, in a GENET network. One of our motivations for devising these particular constraints has been the Car Sequencing Problem, a real-life general CSP once considered intractable (Parrello & Kabat 1986) and which has been successfully tackled using CSP solving techniques (Dincbas, Simonis, & Van Hentenryck 1988).

Since we cannot represent general constraints by binary connections alone, we introduce a new class of nodes called constraint nodes. A constraint node is connected to one or more label nodes. Let c be a constraint node and L be the set of label nodes which are connected to c. Then the input I_c to the constraint node c is the unweighted sum of the outputs of these connected label nodes:

  I_c = Σ_{(i,j) ∈ L} V_(i,j)    (3)

We can consider the connection weights between constraint nodes and label nodes to be asymmetric. The weight on all connections from label nodes to constraint nodes is 1 and is not changed by learning.
Connection weights from constraint nodes to label nodes are, like those for binary constraints, initialised to -1 and can change as a result of learning. The input to label nodes in networks with general constraints C is now given by:

  I_(i,j) = Σ_{k ∈ Z, l ∈ D_k} W_(i,j)(k,l) V_(k,l) + Σ_{c ∈ C} V_c,(i,j)    (4)

where V_c,(i,j) is the output of the constraint node c to the label node (i, j). The learning mechanism for connection weights W_c,(i,j) between constraint nodes c and label nodes (i, j) is given by:

  W^{t+1}_c,(i,j) = W^t_c,(i,j) - 1 if S_c > 0; W^t_c,(i,j) otherwise    (5)

where S_c is the state of the constraint node. The state S_atm of an atmost constraint node (introduced below) is determined as follows:

  S_atm = I_atm - N    (8)

The Illegal Constraint

The illegal((x_1, v_1), ..., (x_k, v_k)) constraint specifies that the k-compound label L = ((x_1, v_1), ..., (x_k, v_k)) is a nogood. An illegal constraint is represented in a GENET network by an illegal constraint node, which is connected to the k label nodes which represent the k labels in L. Its state is:

  S_ill = I_ill - (k - 1)    (6)

The state S_ill of the illegal constraint node is negative if fewer than k - 1 of the connected label nodes are on. In this case there is no possibility that the constraint will become violated should another node become on. A constraint node in this state outputs 0 to all the connected label nodes. If k - 1 of the connected label nodes are on, then we want to discourage the remaining off label node from becoming on, since this will cause the constraint to be violated. However, we do not wish to penalize the label nodes which are already on, since the constraint remains satisfied even if they do change state. In this case we want to output 1 to the label node which is off and 0 to the remaining label nodes. Finally, if all the connected label nodes are on, then the constraint is violated. We want to penalize all these nodes for violating the constraint, so we give them all an output of 1 to encourage them to change state.
We summarize the output V_ill,(i,j) from an illegal constraint node ill to a label node representing the label (i, j) by:

  V_ill,(i,j) = 0 if S_ill < 0; 1 + S_ill - V_(i,j) otherwise    (7)

The Atmost Constraint

We can easily extend the illegal constraint node architecture to represent more complex constraints. For instance, given a set of variables Var and values Val, the atmost(N, Var, Val) constraint specifies that no more than N variables from Var may take values from Val. The atmost constraint node is connected to all nodes of the set L which represent the labels {(i, j) | i ∈ Var, j ∈ Val, j ∈ D_i}. This constraint is a modification of the atmost constraint found in the CHIP constraint logic programming language.

The output from an atmost constraint node is similar to that for the illegal constraint node, although we have the added complication that a single variable may have more than one value in the constraint. We do not want label nodes in the same cluster to receive different inputs from a particular constraint node since, in situations where the network would normally be in a single-state local minimum, we would find the network oscillating about the states of these label nodes. Instead, we give the output of an atmost constraint node atm to a label node representing the label (i, j) as follows:

  V_atm,(i,j) = 0 if S_atm < 0; 1 - max{V_(i,k) | k ∈ Val} if S_atm = 0; 1 otherwise    (9)

Experimental Results

Graph Coloring

In (Selman & Kautz 1993) it is reported that the performance of GSAT on graph coloring problems is comparable with the performance of some of the best specialised graph-coloring algorithms. This surprised us, since a graph coloring problem with N vertices to be colored with k colors would require, in a conjunctive normal form (CNF) representation, N × k variables. Since each of these variables has a domain size of 2, the size of the search space is 2^(Nk).
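The constraint-node outputs in equations (6)-(9) are simple case analyses, so they can be checked directly with a small sketch (function names and data layout are ours, for illustration only):

```python
def illegal_outputs(labels, state):
    """Outputs of an illegal((x1,v1),...,(xk,vk)) node, per equations (6)-(7)."""
    k = len(labels)
    on = [1 if state[i] == a else 0 for (i, a) in labels]   # V_(i,j)
    s_ill = sum(on) - (k - 1)                               # S_ill = I_ill - (k-1)
    if s_ill < 0:
        return {lab: 0 for lab in labels}
    # 1 + S_ill - V_(i,j): penalise the off node (S_ill=0) or all nodes (S_ill=1)
    return {lab: 1 + s_ill - v for lab, v in zip(labels, on)}

def atmost_outputs(n, var_set, val_set, domains, state):
    """Outputs of an atmost(N, Var, Val) node, per equations (8)-(9)."""
    labels = [(i, j) for i in var_set for j in val_set if j in domains[i]]
    s_atm = sum(1 for (i, j) in labels if state[i] == j) - n  # S_atm = I_atm - N
    out = {}
    for (i, j) in labels:
        if s_atm < 0:
            out[(i, j)] = 0
        elif s_atm == 0:
            # every label node of cluster i gets the same output: 1 - max V_(i,k)
            out[(i, j)] = 1 - max(1 if state[i] == k else 0
                                  for k in val_set if k in domains[i])
        else:
            out[(i, j)] = 1
    return out

# two of three labels on: only the off node is discouraged from coming on
demo = illegal_outputs([(0, "a"), (1, "b"), (2, "c")], {0: "a", 1: "b", 2: "x"})
```

Note how the middle case of the atmost output depends only on the cluster i, not on the individual value j, which realises the requirement that all label nodes in a cluster receive the same input from the constraint node.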
To represent such a problem as a CSP would require N variables of domain size k, giving a search space of size k^N. For example, the 250-variable, 29-coloring problem in Table 1 has a search space size in GENET of 4 × 10^365 possible states. This is far smaller than the corresponding size of 3 × 10^2182 states possible in GSAT.

Another difference between GSAT and GENET is the way in which they make repairs. GSAT picks the best "global" repair which reduces the number of conflicts amongst all the variables, whereas GENET makes "local" repairs which minimize the number of conflicts for each variable. Thus we would expect repairs made by GSAT to be of "higher quality" than those of GENET, although they are made at the extra expense of considering more possibilities for each repair.

We compared GSAT and GENET on a set of hard graph coloring problems described in (Johnson et al. 1991), running each method ten times on each problem. (We ran GSAT with MAX-FLIPS set to 10 × the number of variables, and with averaging-in reset after every 25 tries. All experiments were carried out using a GENET simulator written in C++ on a Sun Microsystems Sparc Classic.) We present the results of our experiments in Tables 1 and 2. Both GSAT and GENET managed to solve all the problems, although GSAT makes many more repairs to solve each problem. These results seem to confirm our conjecture that for CSPs such as graph coloring GENET is more effective than GSAT because of the way it represents such problems.

Table 1: GSAT on hard graph coloring problems.

  nodes  colors  median time  median number of repairs
  125    17      2.6 hours    1,626,861
  125    18      23 secs      7,011
  250    15      4.2 secs     580
  250    29      1.1 hours    571,748

Table 2: GENET on hard graph coloring problems.

Random General Constraint Satisfaction Problems

There are two important differences between a sequential implementation of GENET and min-conflicts hill-climbing (MCHC).
The first is our learning strategy for escaping local minima. The second difference is in choosing which variables to update. MCHC randomly selects a variable to update from the set of variables which are currently in conflict with other variables. In GENET we randomly select variables to update from the set of all variables, regardless of whether they conflict with any other variables.

Our aim in this experiment was to determine empirically what effect these individual modifications to MCHC make on the effectiveness of its search. We compared GENET with a basic min-conflicts hill-climbing search, a modified MCHC (MCHC2) and a modified version of GENET (GENET2). MCHC2 randomly selects variables to update from the set of all variables, not just those which are in conflict. MCHC2 can also be regarded as a sequential version of GENET without learning. In GENET2, variables are only updated if they are in conflict with other variables.

We produced a set of general CSPs with varying numbers of the atmost(N, Var, Val) constraint, where N = 3, |Var| = 5 and |Val| = 5. The problems were not guaranteed to be solvable. Each problem had fifty variables and a domain of ten values. The set of variables and values in each constraint were generated randomly. At each data point we generated ten problems. We ran each problem ten times with GENET, GENET2, MCHC and MCHC2. We set a limit of five hundred thousand repairs for each run, after which failure was reported if no solution had been found.

Figure 1: Comparison of the percentage of successful runs for GENET and min-conflicts hill-climbing searches on randomly generated general constraint satisfaction problems (x-axis: number of atmost constraints, 400-500; y-axis: percentage of successful runs).

Figure 1 shows that MCHC2 solves more problems than MCHC. This is to be expected since, because MCHC2 can modify the values of variables which are
not in conflict, it is less likely to become trapped in local minima. The performance of GENET2 shows that learning is an even more effective way of escaping local minima. However, Figure 1 shows that combining these two approaches in GENET is the most effective way of escaping minima for this particular problem set.

The Car Sequencing Problem

We have been using the car-sequencing problem as a benchmark problem during the development of a GENET model which would solve general CSPs. The car-sequencing problem is a real-life general CSP which is considered particularly difficult due to the presence of global atmost constraints. For a full description of the car sequencing problem see (Dincbas, Simonis, & Van Hentenryck 1988).

We compared GENET with MCHC, MCHC2 and CHIP. CHIP is a constraint logic programming language which uses a complete search based on forward-checking and the fail-first principle to solve CSPs. In our experiments we used randomly generated problems of size 200 cars and utilisation percentages in the range 60% to 80%. At each utilisation percentage we generated ten problems. The problems all had 200 variables with domains varying from 17 to 28 values and approximately 1000 atmost constraints of varying arity. All the problems were guaranteed to be solvable. We ran GENET, MCHC and MCHC2 ten times on each problem, with a limit of one million repairs for each run, after which failure was reported. This limit corresponded to a running time of approximately 220 seconds at 60% utilisation, up to 270 seconds at 80% utilisation. We used the method described in (Dincbas, Simonis, & Van Hentenryck 1988) to program the problems in CHIP, which included adding redundant constraints to speed up the search. With a time limit of one hour to solve each problem, CHIP managed to solve two problems at 60% utilisation, one problem at 65% utilisation, two problems at 70% utilisation and one problem at 75% utilisation.

Table 3: A comparison of MCHC and MCHC2 on 200-car sequencing problems.

  utilisation %   MCHC: % succ. runs   MCHC: median repairs   MCHC2: % succ. runs   MCHC2: median repairs
  60              78                   737                    85                    549
  65              81                   586                    82                    524
  70              82                   670                    85                    508
  75              76                   1282                   82                    811
  80              29                   10235                  51                    4449

Table 4: A comparison of GENET and GENET3 on 200-car sequencing problems.

  utilisation %   GENET: % succ. runs   GENET: median repairs   GENET3: % succ. runs   GENET3: median repairs
  60              84                    463                     100                   452
  65              87                    426                     100                   439
  70              83                    456                     100                   426
  75              85                    730                     100                   686
  80              50                    4529                    100                   1886

The results for min-conflicts hill-climbing and GENET on 200-car sequencing problems are given in Tables 3 and 4. From Table 3 it can be seen that MCHC2 is more effective than MCHC at solving the car-sequencing problem. However, the results for MCHC2 and GENET are very similar, indicating that learning is having very little or no effect in GENET. This can be attributed to the presence of very large plateaus of states of the same cost in the search space. Learning is activated only when GENET stays in the same state for more than one cycle; thus learning is less likely to occur when these plateaus are large. To remedy this problem we made a modification to GENET to force learning to occur more often. We define the parameter p_sw as the probability that, in a given convergence cycle, GENET may make sideways moves. Thus, for each convergence cycle, GENET may make sideways moves with probability p_sw, and may only make moves which decrease the cost with probability 1 - p_sw. Thus, if GENET is in a state where only sideways moves may be made, then learning will occur with a probability of at least 1 - p_sw. The results for GENET3 in Table 4, where p_sw is set to 0.75, show that this modification significantly improves the performance of GENET.

VLSI Implementation

Although the results presented so far have been obtained using a GENET simulator on a single-processor machine, it is the aim of our project to implement GENET on VLSI chips.
A full discussion of a VLSI implementation for GENET would be beyond the scope of this paper, so in this section we describe what we expect to gain by using VLSI technology.

A disadvantage of the min-conflicts heuristic, as noted by Minton (Minton et al. 1992), is that the time taken to accomplish a repair grows with the size of the problem. For a single-processor implementation of GENET the cost of determining for a single variable the best value to take is proportional to the number of values in the domain of the variable and the number of constraints involving that value. Determining for each variable the best value to take can potentially be performed in constant time in a VLSI implementation of GENET, no matter how large the domain or how highly constrained the problem. This would mean that the time taken for GENET to perform a single convergence cycle would be constant, no matter what the problem characteristics. Since we estimate the time taken to perform one convergence cycle using current VLSI technology to be of the order of tens of nanoseconds, this would allow all the CSPs mentioned in this paper to be solved in seconds rather than minutes or hours.

Conclusion

We have presented GENET, a connectionist architecture for solving constraint satisfaction problems by iterative improvement. GENET has been designed to be implemented on VLSI hardware. However, we have presented empirical evidence to show that even when simulated on a single processor GENET can outperform existing iterative improvement techniques on hard binary and general CSPs.

We have developed strategies for escaping local minima which we believe significantly extend the scope of hill-climbing searches based on the min-conflicts heuristic. We have presented empirical evidence to show that GENET can effectively escape local minima when solving a range of highly constrained, real-life and randomly generated problems.
Acknowledgements

We would like to thank Alvin Kwan for his useful comments on earlier drafts of this paper. We are grateful to Bart Selman and Henry Kautz for making their implementation of GSAT available to us.

References

Adorf, H., and Johnston, M. 1990. A discrete stochastic neural network algorithm for constraint satisfaction problems. In Proceedings of the International Joint Conference on Neural Networks.

Dincbas, M.; Simonis, H.; and Van Hentenryck, P. 1988. Solving the car-sequencing problem in logic programming. In Proceedings of ECAI-88.

Johnson, D.; Aragon, C.; McGeoch, L.; and Schevon, C. 1991. Optimization by simulated annealing: an experimental evaluation; part II, graph coloring and number partitioning. Operations Research 39(3):378-406.

Minton, S.; Johnston, M.; Philips, A.; and Laird, P. 1992. Minimizing conflicts: a heuristic repair method for constraint satisfaction and scheduling problems. Artificial Intelligence 58:161-205.

Morris, P. 1993. The breakout method for escaping from local minima. In Proceedings of the Eleventh National Conference on Artificial Intelligence. AAAI Press/The MIT Press.

Parrello, B., and Kabat, W. C. 1986. Job-shop scheduling using automated reasoning: A case study of the car-sequencing problem. Journal of Automated Reasoning 2:1-42.

Selman, B., and Kautz, H. 1993. Domain-independent extensions to GSAT: Solving large structured satisfiability problems. In Proceedings of the 13th International Joint Conference on Artificial Intelligence.

Sosic, R., and Gu, J. 1991. 3,000,000 queens in less than one minute. SIGART Bulletin 2(2):22-24.

Tsang, E. 1993. Foundations of Constraint Satisfaction. Academic Press.

Wang, C., and Tsang, E. 1991. Solving constraint satisfaction problems using neural networks. In Proceedings IEE Second International Conference on Artificial Neural Networks.

Wang, C., and Tsang, E. 1992. A cascadable VLSI design for GENET.
In International Workshop on VLSI for Neural Networks and Artificial Intelligence.

(6) A VLSI design for GENET is described in (Wang & Tsang 1992).
(7) The size of problem would be limited by current VLSI technology.
An Approach to Multiply Segmented Constraint Satisfaction Problems*

Randall A. Helzerman and Mary P. Harper
School of Electrical Engineering
1285 Electrical Engineering Building
Purdue University
West Lafayette, IN 47907
{helz, harper}@ecn.purdue.edu

Abstract

This paper describes an extension to the constraint satisfaction problem (CSP) approach called MUSE CSP (Multiply SEgmented Constraint Satisfaction Problem). This extension is especially useful for those problems which segment into multiple sets of partially shared variables. Such problems arise naturally in signal processing applications including computer vision, speech processing, and handwriting recognition. For these applications, it is often difficult to segment the data in only one way given the low-level information utilized by the segmentation algorithms. MUSE CSP can be used to efficiently represent several similar instances of the constraint satisfaction problem simultaneously. If multiple instances of a CSP have some common variables which have the same domains and compatible constraints, then they can be combined into a single instance of a MUSE CSP, reducing the work required to enforce node and arc consistency.

Introduction

Constraint satisfaction provides a convenient way to represent certain types of problems. In general, these are problems which can be solved by assigning mutually compatible values to a fixed number of variables under a set of constraints. This approach has been used in a variety of disciplines including machine vision, belief maintenance, temporal reasoning, graph theory, circuit design, and diagnostic reasoning (Kumar 1992). A classic example of a CSP is the map coloring problem (e.g., Figure 1), where a color must be assigned to each country such that no two neighboring countries have the same color.
A variable represents a country's color, and a constraint arc between two variables indicates that the two joined countries are adjacent and should not be assigned the same color. Formally, a CSP (Mackworth 1977; Mohr & Henderson 1986) is defined in Definition 1.

Definition 1 (Constraint Satisfaction Problem)
N = {i, j, ...} is the set of nodes, with |N| = n,
L = {a, b, ...} is the set of labels, with |L| = l,
R1 is a unary relation; (i, a) is admissible if R1(i, a),
L_i = {a | a ∈ L and (i, a) is admissible},
R2 is a binary relation; (i, a)-(j, b) is admissible if R2(i, a, j, b).

(*This work is supported in part by Purdue Research Foundation, NSF grant number IRI-9011179, and NSF Parallel Infrastructure Grant CDA-9015696.)

A CSP network contains all n-tuples in L^n which satisfy R1 and R2. Since some of the values associated with a variable may be incompatible with values assigned to other variables, it is desirable to eliminate as many of these values as possible by enforcing local consistency conditions (such as arc consistency) before a globally consistent solution is extracted (Dechter 1992). Node and arc consistency are defined in Definitions 2 and 3 respectively. Node consistency is easily enforced by the operation L_i = L_i ∩ {a | R1(i, a)}. Enforcing arc consistency is more complicated, but Mohr and Henderson (Mohr & Henderson 1986) have designed an optimal algorithm (AC-4), which runs in O(γl²) time (where γ is the number of pairs of nodes for which R2 is not the TRUE relation).
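AC-4's support counters make it optimal but somewhat intricate; the pruning it computes can be conveyed with a simpler AC-3-style sketch that reaches the same fixpoint (a hypothetical illustration with our own names, not the AC-4 algorithm itself):

```python
from collections import deque

def enforce_node_consistency(domains, r1):
    """L_i = L_i intersected with {a | R1(i, a)}."""
    for i in domains:
        domains[i] = [a for a in domains[i] if r1(i, a)]

def enforce_arc_consistency(domains, r2):
    """Prune a from L_i whenever no b in L_j satisfies R2(i, a, j, b)."""
    nodes = list(domains)
    queue = deque((i, j) for i in nodes for j in nodes if i != j)
    while queue:
        i, j = queue.popleft()
        supported = [a for a in domains[i]
                     if any(r2(i, a, j, b) for b in domains[j])]
        if len(supported) < len(domains[i]):
            # domain of i shrank: arcs pointing at i must be rechecked
            domains[i] = supported
            queue.extend((k, i) for k in nodes if k not in (i, j))
    return domains

# triangle map with region 1 forced to red: arc consistency solves it outright
colours = {1: ["red"], 2: ["red", "green"], 3: ["red", "green", "blue"]}
enforce_arc_consistency(colours, lambda i, a, j, b: a != b)
```

In general, arc consistency only narrows domains rather than producing a unique solution; this three-region example is small enough that pruning alone leaves a single admissible coloring.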
Definition 2 (Node Consistency)
An instance of CSP is said to be node consistent if and only if each variable's domain contains only labels which do not violate the unary relation R1, i.e.:
∀i ∈ N : ∀a ∈ L_i : R1(i, a)

Definition 3 (Arc Consistency)
An instance of CSP is said to be arc consistent if and only if for every pair of nodes i and j, each element of L_i (the domain of i) has at least one element of L_j for which they both satisfy the binary relation R2, i.e.:
∀i, j ∈ N : ∀a ∈ L_i : ∃b ∈ L_j : R2(i, a, j, b)

There are many types of problems which can be solved by using this approach in a more or less direct fashion. There are also problems which might benefit from the CSP approach, but which are difficult to segment into a single set of variables. This is the class of problems our paper addresses. For example, suppose the map represented in Figure 1 were scanned by a noisy computer vision system, with a resulting uncertainty as to whether the line between regions 1 and 2 is really a border or an artifact of the noise. This situation would yield two CSP problems as depicted in Figure 2. A brute-force approach would be to solve both of the problems, which would be reasonable for scenes containing few ambiguous borders. However, as the number of ambiguous borders increases, the number of CSP networks would grow in a combinatorially explosive fashion. In the case of ambiguous segmentation, it might often be more efficient to merge the constraint networks into a single network which would compactly represent all of them simultaneously, as shown in Figure 3. In this paper, we develop an extension to CSP called MUSE CSP (Multiply SEgmented Constraint Satisfaction Problem) to represent multiple instances of CSP problems. The initial motivation for extending CSP came from work in spoken language parsing (Zoltowski et al. 1992; Harper et al.
1992; Harper & Helzerman 1993). The output of a hidden-Markov-model-based speech recognizer is often a list of the most likely sentence hypotheses (i.e., an N-best list), where parsing can be used to rule out the ungrammatical sentence hypotheses. Maruyama (Maruyama 1990a; 1990b; 1990c) has shown that parsing can be cast as a CSP with a finite domain, so constraints can be used to rule out syntactically incorrect sentence hypotheses. However, individually processing each sentence hypothesis provided by a speech recognizer is inefficient since many sentence hypotheses are generated with a high degree of similarity. An alternative representation for a list of similar sentence hypotheses is a word graph or lattice of word candidates which contains information on the approximate beginning and end point of each word. Word graphs are typically more compact and more expressive than N-best sentence lists. In an experiment (Zoltowski et al. 1992), word graphs were constructed from three different lists of sentence hypotheses. The word graphs provided an 83% reduction in storage, and in all cases, they encoded more possible sentence hypotheses than were in the original list of hypotheses. Figure 4 depicts a word graph containing eight sentence hypotheses which was constructed from two sentence hypotheses: Its hard to recognizes speech and It's hard to wreck a nice beach. By structuring the spoken language parsing problem as a MUSE CSP problem, the constraints used to parse individual sentences would be applied to a word graph of sentence hypotheses, eliminating from further consideration all those hypotheses that are ungrammatical. The goal in this case is to utilize constraints to eliminate as many ungrammatical hypotheses as possible, and then to select the best remaining sentence hypothesis (given the word probabilities given by the recognizer).
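The compactness claim rests on the fact that a word graph is a DAG whose start-to-end paths are sentence hypotheses, so a graph built from a few hypotheses can encode more. The tiny graph below is a hypothetical toy, not the one in Figure 4, used only to show how path counting reveals the extra hypotheses a merged graph encodes.

```python
# Toy word graph (an assumed example): built from two sentences,
# it encodes four start-to-end paths, i.e., four hypotheses.
from functools import lru_cache

graph = {
    "start": ["it's"],
    "it's": ["hard"],
    "hard": ["to"],
    "to": ["recognize", "wreck"],
    "recognize": ["speech", "beach"],
    "wreck": ["a"],
    "a": ["nice"],
    "nice": ["speech", "beach"],
    "speech": ["end"],
    "beach": ["end"],
    "end": [],
}

@lru_cache(maxsize=None)
def count_paths(node):
    # Each start-to-end path corresponds to one sentence hypothesis.
    if node == "end":
        return 1
    return sum(count_paths(nxt) for nxt in graph[node])

print(count_paths("start"))  # 4 hypotheses from 2 merged sentences
```

Sharing the common prefix and suffix words is what lets the merged graph encode more hypotheses than went into it.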
From CSP to MUSE CSP

If there are multiple, similar instances of a CSP which need to be solved, then separately enforcing node and arc consistency on each instance can often result in much duplicated work. To avoid this duplication, we have combined the multiple instances of CSP into a shared constraint network and revised the node and arc consistency algorithms to support this representation.

Figure 1: The map coloring problem as an example of CSP. When using a CSP approach, the variables are depicted as circles, where each circle is associated with a finite set of possible values, and the constraints imposed on the variables are depicted using arcs. An arc looping from a circle to itself represents a unary constraint (a relation involving a single variable), and an arc between two circles represents a binary constraint (a relation on two variables).

Figure 2: An ambiguous map yields two CSP problems.

Figure 3: The two CSP problems of Figure 2 are captured by a single instance of MUSE CSP. The directed edges form a DAG such that the paths through the DAG correspond to instances of combined CSPs.

Formally, we define MUSE CSP as follows:

Definition 4 (MUSE CSP)
N = {i, j, ...} is the set of nodes, with |N| = n,
C ⊆ 2^N is a set of segments, with |C| = s,
L = {a, b, ...} is the set of labels, with |L| = l,
Li = {a | a ∈ L and (i, a) is admissible in at least one segment},
R1 is a unary relation, (i, a) is admissible if R1(i, a),
R2 is a binary relation, (i, a)-(j, b) is admissible if R2(i, a, j, b).

The segments in C are the different sets of nodes representing instances of CSP which are combined to form a MUSE CSP. We also define L(i,σ) to be the set of all labels a ∈ Li that are admissible for σ ∈ C. Because there can be an exponential number of segments in the set C, it is important to define methods for combining instances of CSP into a single, compact MUSE CSP.
To create a MUSE CSP, we must be able to determine when variables across several instances of CSP can be combined into a single shared variable, and we must also be able to determine which subsets of variables in the MUSE CSP correspond to individual CSPs. A word graph of sentence hypotheses is an excellent representation for a MUSE CSP based constraint parser for spoken sentences.

[Word graph over the word candidates: Its, hard, to, recognizes, wreck, a, nice, speech, beach.]
Figure 4: Multiple sentence hypotheses can be parsed simultaneously by propagating constraints over a word graph rather than individual sentences.

Tractable Problems 351

Words that occur in more than one sentence hypothesis over the same time interval are represented using a single variable. The edges between the word nodes, which indicate temporal ordering among the words in the sentence, provide links between words in a sentence hypothesis. A sentence hypothesis, which corresponds to an individual CSP, is simply a path through the word nodes in the word graph going from a start node to an end node. The concepts used to create a MUSE CSP network for spoken language can be adapted to other CSP problems. In particular, it is desirable to represent a MUSE CSP as a directed acyclic graph (DAG) where the paths through the DAG correspond to instances of CSP problems. In addition, CSP networks should be combined only if they satisfy the conditions below:

Definition 5 (MUSE CSP Combinability)
p instances of CSP C1, ..., Cp are said to be MUSE combinable iff the following conditions hold:
1. If {σ1, σ2, ..., σq} ⊆ {N1, ..., Np} ∧ (i ∈ σ1 ∧ i ∈ σ2 ∧ ... ∧ i ∈ σq) ∧ (a ∈ L(i,σ1) ∧ a ∈ L(i,σ2) ∧ ... ∧ a ∈ L(i,σq)), then R1σ1(i, a) = R1σ2(i, a) = ... = R1σq(i, a).
2. If {σ1, σ2, ..., σq} ⊆ {N1, ..., Np} ∧ (i, j ∈ σ1 ∧ i, j ∈ σ2 ∧ ... ∧ i, j ∈ σq) ∧ (a ∈ L(i,σ1) ∧ a ∈ L(i,σ2) ∧ ... ∧ a ∈ L(i,σq)) ∧ (b ∈ L(j,σ1) ∧ b ∈ L(j,σ2) ∧ ... ∧ b ∈ L(j,σq)), then R2σ1(i, a, j, b) = R2σ2(i, a, j, b) = ... = R2σq(i, a, j, b).
These conditions are not overly restrictive, requiring only that the labels for each variable i must be consistently admissible or inadmissible for all instances of CSP which are combined. These conditions do not uniquely determine which variables should be shared across CSP instances for a particular problem type. We define an operator ⊕ which combines instances of CSP into an instance of MUSE CSP in Definition 6.

Definition 6 (⊕, the MUSE CSP Combining Operator)
If C1, ..., Cp are MUSE combinable instances of CSP, then C = C1 ⊕ ... ⊕ Cp will be an instance of MUSE CSP such that:
N = N1 ∪ ... ∪ Np
C = {N1, ..., Np}
Li = L(i,N1) ∪ L(i,N2) ∪ ... ∪ L(i,Np)
R1(i, a) = ⋁σ∈{N1,...,Np} (R1σ(i, a) ∧ a ∈ L(i,σ))
R2(i, a, j, b) = ⋁σ∈{N1,...,Np} (R2σ(i, a, j, b) ∧ (a ∈ L(i,σ) ∧ b ∈ L(j,σ)))

As mentioned above, a DAG is an excellent representation for a MUSE CSP, where its nodes are the elements of N, and its edges are arranged such that every σ in C maps onto a path through the DAG. In some applications, such as speech recognition (Zoltowski et al. 1992; Harper et al. 1992; Harper & Helzerman 1993), the DAG will already be available to us. In applications where the DAG is not available, the user must determine the best way to combine instances of CSP to maximize sharing. We have provided an algorithm (shown in Figure 5) which automatically constructs a single instance of a MUSE CSP from multiple CSP instances in O(sn log n) time, where s is the number of CSP instances to combine, and n is the number of nodes in the MUSE CSP. This algorithm requires the user to assign numbers to variables in the CSPs such that variables that can be shared are assigned the same number.

Figure 5: The algorithm to create a DAG to represent the set C, and examples of its action. (Step 1 of the algorithm: assign each element i of N some number u in the range of 1 to n by using ord(i) = u.)
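To make the ⊕ operator concrete, here is a hedged sketch of combining two MUSE combinable CSP instances into one shared network whose segment set is {N1, N2}. It represents R1 and R2 extensionally as sets of admissible tuples, which is an assumption for illustration rather than the paper's representation.

```python
# Sketch of Definition 6: union the nodes, domains, and (extensional)
# relations of MUSE combinable CSP instances; segments are the
# original node sets N1, ..., Np.

def combine(instances):
    """instances: list of (nodes, domains, r1_tuples, r2_tuples)."""
    segments = [frozenset(nodes) for nodes, _, _, _ in instances]
    all_nodes = set().union(*segments)
    domains = {i: set() for i in all_nodes}
    r1, r2 = set(), set()
    for nodes, doms, r1_tuples, r2_tuples in instances:
        for i in nodes:
            domains[i] |= doms[i]     # Li is the union over segments
        r1 |= r1_tuples               # R1(i,a) holds if true in some segment
        r2 |= r2_tuples               # likewise for R2(i,a,j,b)
    return all_nodes, segments, domains, r1, r2

# Two toy instances sharing node 1 (a hypothetical example):
csp1 = ({1, 2}, {1: {"a"}, 2: {"b"}}, {(1, "a"), (2, "b")}, {(1, "a", 2, "b")})
csp2 = ({1, 3}, {1: {"a"}, 3: {"c"}}, {(1, "a"), (3, "c")}, {(1, "a", 3, "c")})
nodes, segs, doms, r1, r2 = combine([csp1, csp2])
print(sorted(nodes))  # node 1 is shared by both segments
```

The shared node 1 is stored once, while the segment list records which original instance each node set came from.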
As shown by the examples in Figure 5, for a given set of CSPs, the greater the intersection between the sets of node numbers across CSPs, the more compact the MUSE CSP. Depending on the application, the solution for a MUSE CSP could range from the set of consistent labels for a single path through the MUSE CSP to all compatible sets of labels for all paths (or CSPs). For our speech processing application, we select the most likely path through the MUSE CSP based on probabilities assigned to word candidates by our speech recognition algorithm. It is desirable to prune the search space before selecting a solution by enforcing local consistency conditions, such as node and arc consistency. However, node and arc consistency must first be extended to MUSE CSP.

Definition 7 (MUSE Node Consistency)
An instance of MUSE CSP is said to be node consistent if and only if each variable's domain, Li, contains only labels which do not violate the unary relation R1, i.e.:
∀i ∈ N : ∀a ∈ Li : R1(i, a)

Definition 8 (MUSE Arc Consistency)
An instance of MUSE CSP is said to be arc consistent if and only if for every label a in each domain Li there is at least one segment σ whose nodes' domains each contain at least one label b which satisfies the binary relation R2, i.e.:
∀i ∈ N : ∀a ∈ Li : ∃σ ∈ C : i ∈ σ ∧ ∀j ∈ σ : j ≠ i ⇒ ∃b ∈ Lj : R2(i, a, j, b)

A MUSE CSP is node consistent if all of its segments are node consistent. Unfortunately, arc consistency in a MUSE CSP requires more attention because even though a binary constraint might disallow a label in one segment, it might allow it in another segment. When enforcing arc consistency in a CSP, a label a in Li can be eliminated from node i whenever any other domain Lj has no labels which together with a satisfy the binary constraints.
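Definition 8 can be checked directly, if inefficiently, by enumerating segments; the sketch below (an illustrative brute-force check, not the MUSE AC-1 algorithm) shows how a label survives as long as some segment supports it everywhere.

```python
# Brute-force reading of Definition 8: label a survives at node i iff
# SOME segment containing i gives a support at every other node of
# that segment.

def muse_arc_consistent_labels(segments, domains, r2, i):
    surviving = set()
    for a in domains[i]:
        for seg in segments:
            if i not in seg:
                continue
            if all(any(r2(i, a, j, b) for b in domains[j])
                   for j in seg if j != i):
                surviving.add(a)   # supported throughout this segment
                break
    return surviving

# Hypothetical example with two segments sharing node 1:
segments = [{1, 2}, {1, 3}]
domains = {1: {"x", "y"}, 2: {"x"}, 3: {"y"}}
diff = lambda i, a, j, b: a != b   # binary constraint: labels must differ
print(sorted(muse_arc_consistent_labels(segments, domains, diff, 1)))
```

Label "x" at node 1 has no support in segment {1, 2} (node 2 holds only "x"), yet it survives because segment {1, 3} supports it; in a plain CSP over {1, 2} alone, "x" would be pruned.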
However, in a MUSE CSP, before a label can be eliminated from a node, it must fail to satisfy the binary constraints in all the segments in which it appears. Therefore, the definition of MUSE arc consistency is modified as shown in Definition 8. Notice that Definition 8 reduces to Definition 3 when the number of segments is one. Because MUSE arc consistency must hold for all segments, if a single CSP were selected from the MUSE CSP after MUSE arc consistency is enforced, additional filtering can be required for that single instance.

Notation and meaning:
(i, j): An ordered pair of nodes.
[(i, j), a]: An ordered pair of a node pair (i, j) and a label a ∈ Li.
M[(i, j), a]: M[(i, j), a] = 1 indicates that the label a is inadmissible for (and has already been eliminated from) all segments containing i and j.
Counter[(i, j), a]: The number of labels in Lj compatible with a in Li.
Next-Sup[(i, j), a]: The set of arcs (i, k) which support a in i, given that there is a directed edge between j and k and (i, j) supports a.
Prev-Sup[(i, j), a]: The set of arcs (i, k) which support a in i, given that there is a directed edge between k and j and (i, j) supports a.
Local-Prev-Sup(i, a): A set of elements (i, j) such that (j, i) ∈ Prev-edgei and a is compatible with at least one of j's labels.
Local-Next-Sup(i, a): A set of elements (i, j) such that (i, j) ∈ Next-edgei and a is compatible with at least one of j's labels.
List: A queue of arc support to be deleted.

Figure 6: Data structures and notation for the algorithms.

MUSE CSP Arc Consistency Algorithm

MUSE arc consistency¹ is enforced by removing from the domains those labels in Li which violate the conditions of Definition 8. MUSE AC-1 builds and maintains several data structures, described in Figure 6, to allow it to efficiently perform this operation. Figure 8 shows the code for initializing the data structures, and Figure 9 contains the algorithm for eliminating inconsistent labels from the domains. If label a at node i is compatible with label b at node j, then a supports b.
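The support relation drives AC-4 style bookkeeping: a count of supporters per (arc, label) pair plus a record of what each label supports. The following is a rough sketch of building such structures; the names echo Figure 6, but the code itself is an illustrative assumption, not the paper's implementation.

```python
# Sketch of support bookkeeping: Counter[(i,j),a] counts j-labels
# compatible with a at i; S[(i,j),a] records the label occurrences
# that a supports, so a deletion can be propagated by decrementing.
from collections import defaultdict

def build_support(domains, arcs, r2):
    counter = {}
    S = defaultdict(set)
    for (i, j) in arcs:
        for a in domains[i]:
            counter[(i, j), a] = 0
            for b in domains[j]:
                if r2(i, a, j, b):
                    counter[(i, j), a] += 1
                    S[(i, j), a].add(((j, i), b))   # a supports b at j
    return counter, S

# Hypothetical two-node example with an inequality constraint:
domains = {1: {"x", "y"}, 2: {"x"}}
counter, S = build_support(domains, [(1, 2), (2, 1)],
                           lambda i, a, j, b: a != b)
print(counter[(1, 2), "x"], counter[(1, 2), "y"])
```

Here "x" at node 1 has zero supporters on arc (1, 2), the situation that would trigger deletion in a plain CSP but, as explained below, not necessarily in a MUSE CSP.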
To keep track of how much support each label a has, the number of labels in Lj which are compatible with a in Li are counted, and the total is stored in Counter[(i, j), a]. The algorithm must also keep track of which labels the label a supports by using S[(i, j), a], which is a set of arc and role value pairs. For example, S[(i, j), a] = {[(j, i), b], [(j, i), c]} means that a in Li supports b and c in Lj. If a is ever invalid for Li, then b and c will lose some of their support. This is accomplished by decrementing Counter[(j, i), b] and Counter[(j, i), c].

¹We purposely keep our notation and presentation as close as possible to that of Mohr and Henderson in (Mohr & Henderson 1986), in order to aid understanding for those already familiar with the literature on arc consistency.

Figure 7: A: Local-Prev-Sup and Local-Next-Sup for an example DAG. The sets indicate that the label a is allowed for every segment which contains n, m, and j, but is disallowed for every segment which contains k. B: an example DAG in which Next-edgej = {(j, z), (j, y)}.

For regular CSP arc consistency, if Counter[(i, j), a] becomes zero, a would automatically be removed from Li, because that would mean that a was incompatible with every label for j. However, in MUSE arc consistency, this is not the case, because even though a could not participate in a solution for any of the segments which contain i and j, there could be another segment for which a would be perfectly legal. A label cannot become globally inadmissible until it is incompatible with every segment. By representing C as a DAG, the algorithm is able to use the properties of the DAG to identify local (and hence efficiently computable) conditions under which labels become globally inadmissible. Consider Figure 7A, which shows the nodes which are adjacent to node i in the DAG.
Because every segment in the DAG which contains node i is represented as a path through the DAG going through node i, either node j or node k must be in every segment containing i. Hence, if the label a is to remain in Li, it must be compatible with at least one label in either Lj or Lk. Also, because either n or m must be contained in every segment containing i, if label a is to remain in Li, it must also be compatible with at least one label in either Ln or Lm. In order to track this dependency, two sets are maintained for each label a at i: Local-Next-Sup(i, a) and Local-Prev-Sup(i, a). Local-Next-Sup(i, a) is a set of ordered node pairs (i, j) such that (i, j) ∈ Next-edgei, and there is at least one label b ∈ Lj which is compatible with a. Local-Prev-Sup(i, a) is a set of ordered pairs (i, j) such that (j, i) ∈ Prev-edgei, and there is at least one label b ∈ Lj which is compatible with a. Whenever one of i's adjacent nodes, j, no longer has any labels b in its domain which are compatible with a, then (i, j) should be removed from Local-Prev-Sup(i, a) or Local-Next-Sup(i, a), depending on whether the edge is from j to i or from i to j, respectively. If either Local-Prev-Sup(i, a) or Local-Next-Sup(i, a) becomes the empty set, then a is no longer a part of any solution, and may be eliminated from Li. In Figure 7A, the role value a is admissible for the segment containing i and j, but not for the segment containing i and k.

Tractable Problems 353

Figure 8 (excerpt of the initialization pseudocode):
19. if (i, j) ∈ Next-edgei then Local-Next-Sup(i, a) := Local-Next-Sup(i, a) ∪ {(i, j)};
20. if (j, i) ∈ Prev-edgei then Local-Prev-Sup(i, a) := Local-Prev-Sup(i, a) ∪ {(i, j)}

Figure 8: Algorithm for initializing the MUSE CSP data structures along with a simple example. The dotted lines are members of the set E.
If, because of constraints, the labels in j become inconsistent with a on i, (i, j) would be eliminated from Local-Next-Sup(i, a), leaving an empty set. In that case, a would no longer be supported by any segment. The algorithm can utilize similar conditions for nodes which may not be directly connected to i by Next-edgei or Prev-edgei. Consider Figure 7B. Suppose that the label a at node i is compatible with a label in Lj, but it is incompatible with the labels in Lz and Lw; then it is reasonable to eliminate a for all segments containing both i and j, because those segments would have to include either node z or w. To determine whether a role value is admissible for a set of segments containing i and j, we calculate the Prev-Sup[(i, j), a] and Next-Sup[(i, j), a] sets. Next-Sup[(i, j), a] includes all (i, k) arcs which support a in i given that there is a directed edge between j and k and (i, j) supports a. Prev-Sup[(i, j), a] includes all (i, k) arcs which support a in i given that there is a directed edge between k and j and (i, j) supports a. Note that Prev-Sup[(i, j), a] will contain an ordered pair (i, j) if (i, j) ∈ Prev-edgej, and Next-Sup[(i, j), a] will contain an ordered pair (i, j) if (j, i) ∈ Next-edgej. These elements are included because the edge between nodes i and j is sufficient to allow j's labels to support a in the segment containing i and j. Dummy ordered pairs are also created to handle cases where a node is at the beginning or end of a network: when (start, j) ∈ Prev-edgej, (i, start) is added to Prev-Sup[(i, j), a], and when (j, end) ∈ Next-edgej, (i, end) is added to Next-Sup[(i, j), a]. Figure 8 shows the support sets that the initialization algorithm creates for the label a in the simple example DAG. To illustrate how these data structures are used in MUSE AC-1 (see Figure 9), consider what happens if initially [(1,3), a] ∈ List for the MUSE CSP in Figure 8.
First, it is necessary to remove [(1,3), a]'s support from all S[(3,1), x] such that [(3,1), x] ∈ S[(1,3), a] by decrementing, for each x, Counter[(3,1), x] by one. If the counter for any [(3,1), x] becomes 0, and the value has not already been placed on the List, then it is added for future processing. Once this is done, it is necessary to remove [(1,3), a]'s influence on the DAG. To handle this, we examine the two sets Prev-Sup[(1,3), a] = {(1,2), (1,3)} and Next-Sup[(1,3), a] = {(1,end)}. Note that the value (1,end) in Next-Sup[(1,3), a] and the value (1,3) in Prev-Sup[(1,3), a], once eliminated from those sets, require no further action because they are dummy values. However, the value (1,2) in Prev-Sup[(1,3), a] indicates that (1,3) is a member of Next-Sup[(1,2), a], and since a is not admissible for (1,3), (1,3) should be removed from Next-Sup[(1,2), a], leaving an empty set. Note that because Next-Sup[(1,2), a] is empty and assuming that M[(1,2), a] = 0, [(1,2), a] is added to List for further processing. Next, (1,3) is removed from Local-Next-Sup(1, a), but that set is non-empty. During the next iteration of the while loop, [(1,2), a] is popped from List. When Prev-Sup[(1,2), a] and Next-Sup[(1,2), a] are processed, Next-Sup[(1,2), a] = ∅ and Prev-Sup[(1,2), a] contains only a dummy, which is removed. When (1,2) is removed from Local-Next-Sup(1, a), the set becomes empty, so a is no longer compatible with any segment containing 1 and can be eliminated from further consideration as a possible label for 1. In contrast, consider what happens if initially [(1,2), a] ∈ List for the MUSE CSP in Figure 8. In this case, Prev-Sup[(1,2), a] contains (1,2), which requires no additional work; whereas Next-Sup[(1,2), a] contains (1,3), indicating that (1,2) must be removed from Prev-Sup[(1,3), a]'s set. After the removal, Prev-Sup[(1,3), a] is non-empty, so the segment containing nodes 1 and 3 still supports the label a on 1.
The reason that these two cases provide different results is that nodes 1 and 3 are in every segment, whereas nodes 1 and 2 are only in one of them.

Running Time and Correctness of MUSE AC-1

The running time of the routine to initialize the MUSE CSP data structures (in Figure 8) is O(n²l² + n³l), and the running time for the algorithm which prunes labels that are not arc consistent (in Figure 9) also operates in O(n²l² + n³l) time, where n is the number of nodes in a MUSE CSP and l is the number of labels. By comparison, the running time for CSP arc consistency is O(n²l²), assuming that there are n² constraint arcs. Note that for applications where l = n, the running times of the algorithms are the same (this is true for parsing spoken language with a MUSE CSP). Also, if C is representable as a planar DAG (in terms of Prev-edge and Next-edge, not E), then the running time of the algorithms is the same because the average number of values in Prev-Sup and Next-Sup would be a constant. In the general case, the increase in the running time for arc consistency of a MUSE CSP is reasonable considering that it is possible to combine a large (possibly exponential) number of CSP instances into a compact graph with a small number of nodes.

354 Constraint Satisfaction

Figure 9 (excerpt of the pruning pseudocode):
22. for (j, x) ∈ Local-Prev-Sup(j, b) do
24. if M[(j, x), b] = 0 then
25. { List := List ∪ [(j, x), b]; M[(j, x), b] := 1 };
26. if (i, j) ∈ Prev-edgej then Local-Prev-Sup(j, b) := Local-Prev-Sup(j, b) - {(j, i)};
28. if Local-Prev-Sup(j, b) = ∅ then
29. for (j, x) ∈ Local-Next-Sup(j, b) do
31. if M[(j, x), b] = 0 then { List := List ∪ [(j, x), b]; M[(j, x), b] := 1 };

Figure 9: Algorithm to enforce MUSE CSP arc consistency.

Next we prove the correctness of MUSE AC-1. A role value is eliminated from a domain by MUSE AC-1 only if its Local-Prev-Sup or its Local-Next-Sup becomes empty.
Therefore, we must show that a role value's local support sets become empty if and only if that role value cannot participate in a MUSE arc consistent solution. This is proven for Local-Next-Sup (Local-Prev-Sup follows by symmetry). Observe that if a ∈ Li, and it is incompatible with all of the nodes which immediately follow Li in the DAG, then it cannot participate in a MUSE arc consistent solution. In line 19 in Figure 9, (i, j) is removed from the Local-Next-Sup(i, a) set only if [(i, j), a] has been popped off List. Therefore, we show that [(i, j), a] is put on List only if a ∈ Li is incompatible with every segment which contains i and j, by induction on the number of iterations of the while loop. For the base case, the initialization routine only puts [(i, j), a] on List if a ∈ Li is incompatible with every label in Lj (line 15 of Figure 8). Therefore, a ∈ Li is in no solution for any segments which contain i and j. Assume the condition holds for the first k iterations of the while loop in Figure 9; then during the (k+1)th iteration, new tuples of the form [(i, j), a] are put on the List by line 6 (in which case a is no longer compatible with any labels in Lj), line 11 (in which case Prev-Sup[(i, j), a] = ∅), line 16 (in which case Next-Sup[(i, j), a] = ∅), or line 24 (there is no longer any Next-Sup for a). In any of these cases, a ∈ Li is incompatible with every segment which contains i and j. We can therefore conclude that this is true for all iterations of the while loop.

In conclusion, MUSE CSP can be used to efficiently represent several similar instances of the constraint satisfaction problem simultaneously. If multiple instances of a CSP have some common variables with the same domains and compatible constraints, then they can be combined into a single instance of a MUSE CSP, and much of the work required to enforce node and arc consistency need not be duplicated across the instances.
For our work in speech processing, the MUSE arc consistency algorithm was very effective at pruning the incompatible labels for the individual CSPs represented in the composite structure. Very little additional work is typically needed to enforce arc consistency on a CSP represented by the best path through the network.

References

Dechter, R. 1992. From local to global consistency. Artificial Intelligence 55:87-107.

Harper, M. P., and Helzerman, R. A. 1993. PARSEC: A constraint-based parser for spoken language parsing. Technical Report EE-93-28, School of Electrical Engineering, Purdue University, West Lafayette, IN.

Harper, M.; Jamieson, L.; Zoltowski, C.; and Helzerman, R. 1992. Semantics and constraint parsing of word graphs. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, II-63-II-66.

Kumar, V. 1992. Algorithms for constraint-satisfaction problems: A survey. AI Magazine 13(1):32-44.

Mackworth, A. 1977. Consistency in networks of relations. Artificial Intelligence 8(1):99-118.

Maruyama, H. 1990a. Constraint dependency grammar. Technical Report #RT0044, IBM, Tokyo, Japan.

Maruyama, H. 1990b. Constraint dependency grammar and its weak generative capacity. Computer Software.

Maruyama, H. 1990c. Structural disambiguation with constraint propagation. In The Proceedings of the Annual Meeting of ACL.

Mohr, R., and Henderson, T. C. 1986. Arc and path consistency revisited. Artificial Intelligence 28:225-233.

Zoltowski, C.; Harper, M.; Jamieson, L.; and Helzerman, R. 1992. PARSEC: A constraint-based framework for spoken language understanding. In Proceedings of the International Conference on Spoken Language Understanding.
Reasoning about Temporal Relations: A Maximal Tractable Subclass of Allen's Interval Algebra*

Bernhard Nebel
Universität Ulm
Fakultät für Informatik
D-89069 Ulm, Germany
nebel@informatik.uni-ulm.de

Abstract

We introduce a new subclass of Allen's interval algebra we call "ORD-Horn subclass," which is a strict superset of the "pointisable subclass." We prove that reasoning in the ORD-Horn subclass is a polynomial-time problem and show that the path-consistency method is sufficient for deciding satisfiability. Further, using an extensive machine-generated case analysis, we show that the ORD-Horn subclass is a maximal tractable subclass of the full algebra (assuming P≠NP). In fact, it is the unique greatest tractable subclass amongst the subclasses that contain all basic relations.

Introduction

Temporal information is often conveyed qualitatively by specifying the relative positions of time intervals such as ". . . point to the figure while explaining the performance of the system . . ." Further, for natural language understanding (Allen 1984; Song & Cohen 1988), general planning (Allen 1991; Allen & Koomen 1983), presentation planning in a multi-media context (Feiner et al. 1993), and knowledge representation (Weida & Litman 1992), the representation of qualitative temporal relations and reasoning about them is essential. Allen (1983) introduces an algebra of binary relations on intervals for representing qualitative temporal information and addresses the problem of reasoning about such information. Since the reasoning problems are NP-hard for the full algebra (Vilain & Kautz 1986), it is very unlikely that polynomial-time algorithms will be found that solve this problem in general.
Subsequent research has concentrated on designing more efficient reasoning algorithms, on identifying tractable special cases, and on isolating sources of computational complexity (Golumbic & Shamir 1992; Ladkin & Maddux 1988; Nökel 1989; Valdés-Pérez 1987; van Beek 1989; 1990; van Beek & Cohen 1990; Vilain & Kautz 1986; Vilain, Kautz, & van Beek 1989).

*This work was supported by the German Ministry for Research and Technology (BMFT) under grant ITW 89018 as part of the WIP project and under grant ITW 9201 as part of the TACOS project, and by the European Commission as part of DRUMS-II, the ESPRIT Basic Research Project P6156.

356 Constraint Satisfaction

Hans-Jürgen Bürckert
DFKI
Stuhlsatzenhausweg 3
D-66123 Saarbrücken, Germany
hjb@dfki.uni-sb.de

We extend these previous results in three ways. Firstly, we present a new tractable subclass of Allen's interval algebra, which we call the ORD-Horn subclass. This subclass is considerably larger than all other known tractable subclasses (it contains 10% of the full algebra) and strictly contains the pointisable subclass (Ladkin & Maddux 1988; van Beek 1989). Secondly, we show that the path-consistency method is sufficient for deciding satisfiability in this subclass. Thirdly, using an extensive machine-generated case analysis, we show that this subclass is a maximal subclass such that satisfiability is tractable (assuming P≠NP).¹

From a practical point of view, these results imply that the path-consistency method has a much larger range of applicability than previously believed, provided we are mainly interested in satisfiability. Further, our results can be used to design backtracking algorithms for the full algebra that are more efficient than those based on other tractable subclasses.

Reasoning about Interval Relations using Allen's Interval Algebra

Allen's (1983) approach to reasoning about time is based on the notion of time intervals and binary relations on them.
A time interval X is an ordered pair (X⁻, X⁺) such that X⁻ < X⁺, where X⁻ and X⁺ are interpreted as points on the real line.² So, if we talk about interval interpretations or I-interpretations in the following, we mean mappings of time intervals to pairs of distinct real numbers such that the beginning of an interval is strictly before the ending of the interval.

¹The programs we used and an enumeration of the ORD-Horn subclass can be obtained from the authors or by anonymous ftp from duck.dfki.uni-sb.de as /pub/papers/DFKI-others/RR-93-11.programs.tar.Z.

²Other underlying models of the time line are also possible, e.g., the rationals (Allen & Hayes 1985; Ladkin 1987). For our purposes these distinctions are not significant, however.

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Basic Interval Relation | Symbol | Endpoint Relations
X before Y / Y after X | ≺ / ≻ | X⁻ < Y⁻, X⁻ < Y⁺, X⁺ < Y⁻, X⁺ < Y⁺
X meets Y / Y met-by X | m / m⌣ | X⁻ < Y⁻, X⁻ < Y⁺, X⁺ = Y⁻, X⁺ < Y⁺
X overlaps Y / Y overlapped-by X | o / o⌣ | X⁻ < Y⁻, X⁻ < Y⁺, X⁺ > Y⁻, X⁺ < Y⁺
X during Y / Y includes X | d / d⌣ | X⁻ > Y⁻, X⁻ < Y⁺, X⁺ > Y⁻, X⁺ < Y⁺
X starts Y / Y started-by X | s / s⌣ | X⁻ = Y⁻, X⁻ < Y⁺, X⁺ > Y⁻, X⁺ < Y⁺
X finishes Y / Y finished-by X | f / f⌣ | X⁻ > Y⁻, X⁻ < Y⁺, X⁺ > Y⁻, X⁺ = Y⁺
X equals Y | ≡ | X⁻ = Y⁻, X⁻ < Y⁺, X⁺ > Y⁻, X⁺ = Y⁺

Table 1: The set B of the thirteen basic relations.

Given two interpreted time intervals, their relative positions can be described by exactly one of the elements of the set B of thirteen basic interval relations (denoted by B in the following), where each basic relation can be defined in terms of its endpoint relations (see Table 1). An atomic formula of the form X B Y, where X and Y are intervals and B ∈ B, is said to be satisfied by an I-interpretation iff the interpretation of the intervals satisfies the endpoint relations specified in Table 1.
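Table 1's endpoint characterization can be checked mechanically; the sketch below encodes a handful of the thirteen basic relations (the converses are omitted) as predicates on interval endpoints. The encoding is our own illustration of the table, not code from the paper.

```python
# A few of Table 1's basic relations as endpoint predicates;
# intervals are pairs (start, end) with start < end.
BASIC = {
    "before":   lambda X, Y: X[1] < Y[0],
    "meets":    lambda X, Y: X[1] == Y[0],
    "overlaps": lambda X, Y: X[0] < Y[0] < X[1] < Y[1],
    "during":   lambda X, Y: Y[0] < X[0] and X[1] < Y[1],
    "starts":   lambda X, Y: X[0] == Y[0] and X[1] < Y[1],
    "finishes": lambda X, Y: Y[0] < X[0] and X[1] == Y[1],
    "equals":   lambda X, Y: X == Y,
}

def basic_relation(X, Y):
    """Among the relations encoded here, at most one can hold."""
    return [name for name, test in BASIC.items() if test(X, Y)]

print(basic_relation((0, 2), (1, 3)))   # ['overlaps']
print(basic_relation((1, 2), (0, 3)))   # ['during']
```

For any pair of valid intervals, exactly one of the thirteen basic relations holds; with only seven encoded, the checker returns at most one name (an empty list means the holding relation is one of the omitted converses).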
In order to express indefinite information, unions of the basic interval relations are used, which are written as sets of basic relations, leading to 2¹³ binary interval relations (denoted by R, S, T), including the null relation ∅ (also denoted by ⊥) and the universal relation B (also denoted by ⊤). The set of all binary interval relations 2^B is denoted by A. An atomic formula of the form X {B1, ..., Bn} Y (denoted by φ) is called an interval formula. Such a formula is satisfied by an I-interpretation ℑ iff X Bi Y is satisfied by ℑ for some i, 1 ≤ i ≤ n. Finite sets of interval formulas are denoted by Θ. Such a set Θ is called I-satisfiable iff there exists an I-interpretation ℑ that satisfies every formula of Θ. Further, such a satisfying I-interpretation ℑ is called an I-model of Θ. If an interval formula φ is satisfied by every I-model of a set of interval formulas Θ, we say that φ is logically implied by Θ, written Θ ⊨ φ. Fundamental reasoning problems in this framework include (Golumbic & Shamir 1992; Ladkin & Maddux 1988; van Beek 1990; Vilain & Kautz 1986): Given a set of interval formulas Θ, (1) decide the I-satisfiability of Θ (ISAT), and (2) determine for each pair of intervals X, Y the strongest implied relation between them (ISI). In the following, we often consider restricted reasoning problems where the relations used in interval formulas in Θ are only from a subclass S of all interval relations. In this case we say that Θ is a set of formulas over S, and we use a parameter in the problem description to denote the subclass considered, e.g., ISAT(S). As is well-known, ISAT and ISI are equivalent with respect to polynomial Turing-reductions (Vilain & Kautz 1986), and this equivalence also extends to the restricted problems ISAT(S) and ISI(S), provided S contains all basic relations.
The most prominent method to solve these problems (approximately for all interval relations or exactly for subclasses) is constraint propagation (Allen 1983; Ladkin & Maddux 1988; Nökel 1989; van Beek 1989; van Beek & Cohen 1990; Vilain & Kautz 1986) using a slightly simplified form of the path-consistency algorithm (Mackworth 1977; Montanari 1974). In the following, we briefly characterize this method without going into details, though. In order to do so, we first have to introduce Allen's interval algebra.
Allen's interval algebra (1983) consists of the set A = 2^B of all binary interval relations and the operations unary converse (denoted by ⁻), binary intersection (denoted by ∩), and binary composition (denoted by ∘), which are defined as follows:
∀X, Y: X R⁻ Y ⟺ Y R X
∀X, Y: X (R ∩ S) Y ⟺ X R Y ∧ X S Y
∀X, Y: X (R ∘ S) Y ⟺ ∃Z: (X R Z ∧ Z S Y).
Assume an operator Γ that maps finite sets of interval formulas to finite sets of interval formulas in the following way:
Γ(Θ) = Θ ∪ {X ⊤ Y | X, Y appear in Θ}
     ∪ {X R Y | (Y R⁻ X) ∈ Θ}
     ∪ {X (R ∩ S) Y | (X R Y), (X S Y) ∈ Θ}
     ∪ {X (R ∘ S) Y | (X R Z), (Z S Y) ∈ Θ}.
Since there are only finitely many different interval formulas for a finite set of intervals and Γ is monotone, it follows that for each Θ there exists a natural number n such that Γⁿ(Θ) = Γⁿ⁺¹(Θ). Γⁿ(Θ) is called the closure of Θ, written Θ̄. Considering the formulas of the form (X Rᵢ Y) ∈ Θ̄ for given X, Y, it is evident that the Rᵢ's are closed under intersection, and hence there exists (X S Y) ∈ Θ̄ such that S is the strongest relation amongst the Rᵢ's, i.e., S ⊆ Rᵢ for every i. The subset of a closure Θ̄ containing for each pair of intervals only the strongest relations is called the reduced closure of Θ and is denoted by Θ̂.
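The operator Γ and the reduced closure can be sketched as a fixpoint computation over the three operations. Since the full thirteen-relation composition table is long, the sketch below substitutes a toy three-relation point algebra; `CONV` and `COMP` are illustrative stand-ins, not the paper's tables.

```python
# Sketch of the closure/reduced-closure computation, using a toy
# {before (<), after (>), equals (=)} fragment in place of Allen's
# thirteen basic relations. CONV and COMP are illustrative assumptions.

B = {'<', '>', '='}
CONV = {'<': '>', '>': '<', '=': '='}
COMP = {('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): B,
        ('>', '>'): {'>'}, ('>', '='): {'>'}, ('>', '<'): B,
        ('=', '<'): {'<'}, ('=', '>'): {'>'}, ('=', '='): {'='}}

def converse(r):                     # R⁻ = {b⁻ | b ∈ R}
    return frozenset(CONV[b] for b in r)

def compose(r, s):                   # R ∘ S = ∪ {b1 ∘ b2 | b1 ∈ R, b2 ∈ S}
    out = set()
    for b1 in r:
        for b2 in s:
            out |= COMP[(b1, b2)]
    return frozenset(out)

def reduced_closure(constraints, intervals):
    """Add intersections/compositions until a fixpoint, keeping only
    the strongest relation for each ordered pair of intervals."""
    net = {(x, y): frozenset(B) for x in intervals for y in intervals}
    for (x, y), r in constraints.items():
        net[(x, y)] &= frozenset(r)
        net[(y, x)] &= converse(r)
    changed = True
    while changed:
        changed = False
        for x in intervals:
            for y in intervals:
                for z in intervals:
                    stronger = net[(x, y)] & compose(net[(x, z)], net[(z, y)])
                    if stronger != net[(x, y)]:
                        net[(x, y)] = stronger
                        changed = True
    return net

net = reduced_closure({('X', 'Y'): {'<'}, ('Y', 'Z'): {'<'}}, ['X', 'Y', 'Z'])
print(sorted(net[('X', 'Z')]))   # → ['<']  (composition forces X before Z)
```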
As can be easily shown, every reduced closure of a set Θ is path consistent (Mackworth 1977), which means that for every three intervals X, Y, Z and for every interpretation ℑ that satisfies (X R Y) ∈ Θ̂, there exists an interpretation ℑ′ that agrees with ℑ on X and Y and in addition satisfies (X S Z), (Z S′ Y) ∈ Θ̂. Under the assumption that (X R Y) ∈ Θ implies (Y R⁻ X) ∈ Θ, it is also easy to show that path consistency of Θ implies that Θ = Θ̂. For this reason, we will use the term path-consistent set as a synonym for a set that is the reduced closure of itself. Finally, computing Θ̂ is polynomial in the size of Θ (Mackworth & Freuder 1985; Montanari 1974).
Tractable Problems 357

The ORD-Horn Subclass
Previous results on the tractability of ISAT(S) (and hence ISI(S)) for some subclass S ⊆ A made use of the expressibility of interval formulas over S as certain logical formulas involving endpoint relations. As usual, by a clause we mean a disjunction of literals, where a literal in turn is an atomic formula or a negated atomic formula. As atomic formulas we allow a ≤ b and a = b, where a and b denote endpoints of intervals. The negation of a = b is also written as a ≠ b. Finite sets of such clauses will be denoted by Ω. In the following, we consider a slightly restricted form of clauses, which we call ORD clauses. These clauses do not contain negations of atoms of the form a ≤ b, i.e., they only contain literals of the form a = b, a ≤ b, and a ≠ b. The ORD-clause form of an interval formula φ, written π(φ), is the set of ORD clauses over endpoint relations that is equivalent to φ, i.e., every interval model of φ can be transformed into a model of the ORD-clause form over the reals and vice versa using the obvious transformation.
Consider, for instance, π(X {d, o, s} Y):
{(X− ≤ X+), (X− ≠ X+), (Y− ≤ Y+), (Y− ≠ Y+),
 (X− ≤ Y+), (X− ≠ Y+), (Y− ≤ X+), (X+ ≤ Y+),
 (X+ ≠ Y−), (X+ ≠ Y+)}.
The function π(·) is extended to finite sets of interval formulas in the obvious way, i.e., for identical intervals in Θ, identical endpoints are used in π(Θ). Similarly to the notion of I-satisfiability, we define R-satisfiability of Ω to be the satisfiability of Ω over the real numbers.
Proposition 1: Θ is I-satisfiable iff π(Θ) is R-satisfiable.
Not all relations permit an ORD-clause form that is as concise as the one shown above, which contains only unit clauses. However, in particular those relations that allow for such a clause form have interesting computational properties. For instance, the continuous endpoint subclass (which is denoted by C) can be defined as the subclass of interval relations that (1) permit a clause form that contains only unit clauses, and (2) for each unit clause a ≠ b, the clause form also contains a unit clause of the form a ≤ b or b ≤ a. As demonstrated above, the relation {d, o, s} is a member of the continuous endpoint subclass. This subclass has the favorable property that the path-consistency method solves ISI(C) (van Beek 1989; van Beek & Cohen 1990; Vilain, Kautz, & van Beek 1989). A slight generalization of the continuous endpoint subclass is the pointisable subclass (denoted by P) that is defined in the same way as C, but without condition (2). Path-consistency is not sufficient for solving ISI(P) (van Beek 1989) but is still sufficient for deciding satisfiability (Ladkin & Maddux 1988; Vilain & Kautz 1986). We generalize this approach by being more liberal concerning the clause form. We consider the subclass of Allen's interval algebra such that the relations permit an ORD-clause form containing only clauses with at most one positive literal, which we call ORD-Horn clauses.
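The unit clauses in such an ORD-clause form can be derived mechanically by intersecting the endpoint constraints of the basic relations in the set. The sketch below tabulates only the three basic relations needed for the {d, o, s} example (the full table has all thirteen); the tuple encoding of clauses is our own illustration.

```python
# Sketch: derive the unit ORD clauses for an interval relation by
# intersecting the endpoint constraints (Table 1) of its basic
# relations. Only d, o, s are tabulated; encoding is illustrative.

POSS = {  # for each basic relation, the order of each endpoint pair
    'd': {('X-','Y-'): '>', ('X-','Y+'): '<', ('X+','Y-'): '>', ('X+','Y+'): '<'},
    'o': {('X-','Y-'): '<', ('X-','Y+'): '<', ('X+','Y-'): '>', ('X+','Y+'): '<'},
    's': {('X-','Y-'): '=', ('X-','Y+'): '<', ('X+','Y-'): '>', ('X+','Y+'): '<'},
}

def unit_clauses(relation):
    clauses = [('X-', '<=', 'X+'), ('X-', '!=', 'X+'),   # intervals are
               ('Y-', '<=', 'Y+'), ('Y-', '!=', 'Y+')]   # proper: X- < X+
    for pair in [('X-','Y-'), ('X-','Y+'), ('X+','Y-'), ('X+','Y+')]:
        orders = {POSS[b][pair] for b in relation}       # possible orders
        a, c = pair
        if orders <= {'<', '='}: clauses.append((a, '<=', c))
        if orders <= {'>', '='}: clauses.append((c, '<=', a))
        if '=' not in orders:    clauses.append((a, '!=', c))
        if orders == {'='}:      clauses.append((a, '=', c))
    return clauses

for cl in unit_clauses({'d', 'o', 's'}):
    print(cl)
```

Running it on {d, o, s} reproduces exactly the ten unit clauses of the example above; the pair (X−, Y−) is left unconstrained because the three basic relations disagree on it.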
The subclass defined in this way is called the ORD-Horn subclass, and we use the symbol H to refer to it. The relation {o, s, f⁻} is, for instance, an element of H because π(X {o, s, f⁻} Y) can be expressed as follows:
{(X− ≤ X+), (X− ≠ X+), (Y− ≤ Y+), (Y− ≠ Y+),
 (X− ≤ Y−), (X− ≤ Y+), (X− ≠ Y+), (Y− ≤ X+),
 (X+ ≠ Y−), (X+ ≤ Y+), (X− ≠ Y− ∨ X+ ≠ Y+)}.
By definition, the ORD-Horn subclass contains the pointisable subclass. Further, by the above example, this inclusion is strict. Consider now the theory ORD that axiomatizes "=" as an equivalence relation and "≤" as a partial ordering over the equivalence classes:
∀x, y, z: x ≤ y ∧ y ≤ z → x ≤ z
∀x: x ≤ x
∀x, y: x ≤ y ∧ y ≤ x → x = y
∀x, y: x = y → x ≤ y
∀x, y: x = y → y ≤ x.
Although this theory is much weaker than the theory of the reals, R-satisfiability of a set Ω of ORD clauses is nevertheless equivalent to the satisfiability of Ω ∪ ORD over arbitrary interpretations.
Proposition 2: A set of ORD clauses Ω is R-satisfiable iff Ω ∪ ORD is satisfiable.³
Proof Sketch. Any linearization of a partial order that satisfies all atoms appearing in ORD clauses also satisfies these atoms. Hence, a model of Ω ∪ ORD can be used to generate an R-model for Ω. The other direction is trivial. ∎
In the following, ORD_Ω shall denote the axioms of ORD instantiated to all endpoints mentioned in Ω. As a specialization of the Herbrand theorem, we obtain the next proposition.
Proposition 3: Ω ∪ ORD is satisfiable iff Ω ∪ ORD_Ω is satisfiable.
³Full proofs are given in the long paper (Nebel & Bürckert 1993), which can be obtained by anonymous ftp from duck.dfki.uni-sb.de.
358 Constraint Satisfaction
From the fact that ORD_Ω and Ω are propositional Horn formulas, polynomiality of ISAT(H) is immediate.
Theorem 4: ISAT(H) is polynomial.

The Applicability of Path-Consistency
Enumerating the ORD-Horn subclass reveals that there are 868 relations (including the null relation ⊥) in Allen's interval algebra that can be expressed using ORD-Horn clauses.
Since the full algebra contains 2¹³ = 8192 relations, H covers more than 10% of the full algebra. Comparing this with the continuous endpoint subclass C, which contains 83 relations, and the pointisable subclass P, which contains 188 relations,⁴ having shown tractability for H is a clear improvement over previous results. However, there remains the question of whether the "traditional" method of reasoning in Allen's interval algebra, i.e., constraint propagation, gives reasonable results. As it turns out, this is indeed the case.
Theorem 5: Let Θ̂ be a path-consistent set of interval formulas over H. Then Θ̂ is I-satisfiable iff (X ⊥ Y) ∉ Θ̂.
Proof Sketch. A case analysis over the possible non-unit clauses in π(Θ̂) ∪ ORD_π(Θ̂) reveals that no new units can be derived by positive unit resolution, if the ORD-clause form of the interval formulas satisfies the requirement that it contains all implied atoms and the clauses are minimal. By refutation completeness of positive unit resolution (Henschen & Wos 1974), the claim follows. ∎
The only remaining part we have to show is that transforming a set Θ over H into its equivalent path-consistent form Θ̂ does not result in a set that contains relations not in H. In order to show this we prove that H is closed under converse, intersection, and composition, i.e., H (together with these operations) defines a subalgebra of Allen's interval algebra.
Theorem 6: H is closed under converse, intersection, and composition.
Proof Sketch. The main problem is to show that the composition of two relations has an ORD-Horn form. We show that by proving that any minimal clause C implied by π({X R Y, Y S Z}) is either ORD-Horn or there exists a set of ORD-Horn clauses that are implied by π({X R Y, Y S Z}) and imply C. ∎
From that it follows straightforwardly that ISAT(H) is decided by the path-consistency method.
Theorem 7: If Θ is a set over H, then Θ is satisfiable iff (X ⊥ Y) ∉ Θ̂ for all intervals X, Y.
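The proofs above rest on positive unit resolution, which is refutation complete for Horn clauses and runs in polynomial time; this is also what makes Theorem 4 immediate. A minimal sketch of Horn satisfiability by unit propagation, with an illustrative clause encoding (not the paper's implementation):

```python
# Sketch: satisfiability of propositional Horn clauses by unit
# propagation (positive unit resolution). A clause is a pair
# (positives, negatives) with at most one positive literal.

def horn_sat(clauses):
    """Returns True iff the Horn clause set is satisfiable."""
    true_atoms = set()
    changed = True
    while changed:
        changed = False
        for pos, neg in clauses:
            if set(neg) <= true_atoms:       # body fully derived
                if not pos:                  # empty head: contradiction
                    return False
                if pos[0] not in true_atoms:
                    true_atoms.add(pos[0])   # derive the head atom
                    changed = True
    return True

# Illustrative ORD-style example: a<=b, b<=a, the antisymmetry
# instance (a<=b ∧ b<=a → a=b), and the denial ¬(a=b), i.e. a≠b.
clauses = [(['a<=b'], []), (['b<=a'], []),
           (['a=b'], ['a<=b', 'b<=a']),
           ([], ['a=b'])]
print(horn_sat(clauses))   # → False: a=b is forced but denied
```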
⁴An enumeration of C and P is given by van Beek and Cohen (1990).

The Borderline between Tractable and NP-complete Subclasses
Having identified the tractable fragment H that contains the previously identified tractable fragment P and that is considerably larger than P is satisfying in itself. However, such a result also raises the question of the boundary between polynomiality and NP-completeness in Allen's interval algebra. While the introduction of the algebraic structure on the set of expressible interval relations may have seemed to be motivated only by the particular approximation algorithm employed, this structure is also useful when we explore the computational properties of restricted problems. For any arbitrary subset S ⊆ A, S̄ shall denote the closure of S under converse, intersection, and composition. In other words, S̄ is the carrier of the least subalgebra generated by S. It is possible to translate any set of interval formulas Θ over S̄ into a set Θ′ over S in polynomial time in such a way that I-satisfiability is preserved.
Theorem 8: ISAT(S̄) can be polynomially transformed to ISAT(S).
In other words, once we have proven that satisfiability is polynomial for some set S ⊆ A, this result extends to the least subalgebra generated by S. Conversely, NP-hardness for a subalgebra is "inherited" by all subsets that generate this subalgebra. It still takes some effort to prove that a given fragment S is a maximal tractable subclass of Allen's interval algebra. Firstly, one has to show that S = S̄. For the ORD-Horn subclass, this has been done in Theorem 6. Secondly, one has to show that ISAT(T) is NP-complete for all minimal subalgebras T that strictly contain S. This, however, means that these subalgebras have to be identified. Certainly, such a case analysis cannot be done manually. In fact, we used a program to identify the minimal subalgebras strictly containing H.
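The closure S̄ under the three operations can be computed by straightforward fixpoint iteration, which is how a program can explore generated subalgebras. To stay self-contained and runnable, the sketch below again substitutes a toy point-algebra fragment for Allen's full algebra; the CONV and COMP tables are illustrative assumptions.

```python
# Sketch: the least subalgebra generated by a set S of relations,
# computed by closing under converse, intersection, and composition.
# A toy {<, >, =} fragment stands in for Allen's thirteen relations.

B = {'<', '>', '='}
CONV = {'<': '>', '>': '<', '=': '='}
COMP = {('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): B,
        ('>', '>'): {'>'}, ('>', '='): {'>'}, ('>', '<'): B,
        ('=', '<'): {'<'}, ('=', '>'): {'>'}, ('=', '='): {'='}}

def converse(r):
    return frozenset(CONV[b] for b in r)

def compose(r, s):
    out = set()
    for b1 in r:
        for b2 in s:
            out |= COMP[(b1, b2)]
    return frozenset(out)

def subalgebra_closure(S):
    """Fixpoint iteration: keep adding converses, intersections, and
    compositions until nothing new appears."""
    closed = {frozenset(r) for r in S}
    while True:
        new = set()
        for r in closed:
            new.add(converse(r))
            for s in closed:
                new.add(r & s)
                new.add(compose(r, s))
        if new <= closed:
            return closed
        closed |= new

closure = subalgebra_closure([{'<'}])
print(len(closure))   # → 4: {<}, {>}, the empty and the universal relation
```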
An analysis of the clause form of the relations appearing in these subalgebras leads us to consider the following two relations:
N₁ = {d, d⁻, o⁻, s⁻, f}
N₂ = {d⁻, o, o⁻, s⁻, f⁻}.
One of these two relations can be found in all minimal subalgebras strictly containing H, as can be shown using a machine-assisted case analysis.
Lemma 9: Let S ⊆ A be any set of interval relations that strictly contains H. Then N₁ or N₂ is an element of S̄.
For reasons of simplicity, we will not use the ORD clause form in the following, but a clause form that also contains literals over the relations ≥, <, >. Then the clause form for the relations mentioned in the lemma can be given as follows:
π(X N₁ Y) = {(X− < X+), (Y− < Y+),
             (X− < Y+), (X+ > Y−),
             ((X− > Y−) ∨ (X+ > Y+))},
π(X N₂ Y) = {(X− < X+), (Y− < Y+),
             (X− < Y+), (X+ > Y−),
             ((X− < Y−) ∨ (X+ > Y+))}.
We will show that each of these relations together with the two relations
B₁ = {≺, d⁻, o, m, f⁻}
B₂ = {≺, d, o, m, s},
which are elements of C, are enough to make the interval satisfiability problem NP-complete. The clause form of these relations looks as follows:
π(X B₁ Y) = {(X− < X+), (Y− < Y+), (X− < Y−), (X− < Y+)}
π(X B₂ Y) = {(X− < X+), (Y− < Y+), (X+ < Y+), (X− < Y+)}
Lemma 10: ISAT(S̄) is NP-complete if
1. 𝒩₁ = {B₁, B₂, N₁} ⊆ S, or
2. 𝒩₂ = {B₁, B₂, N₂} ⊆ S.
Proof Sketch. Since ISAT ∈ NP, membership in NP follows. For the NP-hardness part we will show that 3SAT can be polynomially transformed to ISAT(𝒩ᵢ). We will first prove the claim for 𝒩₁. Let D = {Cᵢ} be a set of clauses, where Cᵢ = lᵢ,₁ ∨ lᵢ,₂ ∨ lᵢ,₃ and the lᵢ,ⱼ's are literal occurrences. We will construct a set of interval formulas Θ over 𝒩₁ such that Θ is I-satisfiable iff D is satisfiable. For each literal occurrence lᵢ,ⱼ a pair of intervals Xᵢ,ⱼ and Yᵢ,ⱼ is introduced, and a first group of interval formulas (Xᵢ,ⱼ N₁ Yᵢ,ⱼ) is put into Θ. This implies that π(Θ) contains among other things the following clauses:
((Xᵢ,ⱼ− > Yᵢ,ⱼ−) ∨ (Xᵢ,ⱼ+ > Yᵢ,ⱼ+)).
Additionally, we add a second group of formulas for each clause Cᵢ:
(Xᵢ,₂ B₁ Yᵢ,₁), (Xᵢ,₃ B₁ Yᵢ,₂), (Xᵢ,₁ B₁ Yᵢ,₃),
which leads to the inclusion of the clauses
(Yᵢ,₁− > Xᵢ,₂−), (Yᵢ,₂− > Xᵢ,₃−), (Yᵢ,₃− > Xᵢ,₁−)
in π(Θ). This construction leads to the situation that there is no model of Θ that satisfies for given i all disjuncts of the form (Xᵢ,ⱼ− > Yᵢ,ⱼ−) in the clause form of π(Xᵢ,ⱼ N₁ Yᵢ,ⱼ). If the jth disjunct (Xᵢ,ⱼ− > Yᵢ,ⱼ−) is unsatisfied in an I-model of Θ, we will interpret this as the satisfaction of the literal occurrence lᵢ,ⱼ in Cᵢ of D. In order to guarantee that if a literal occurrence lᵢ,ⱼ is interpreted as satisfied, then all complementary literal occurrences in D are interpreted as unsatisfied, the following third group of interval formulas for complementary literal occurrences lᵢ,ⱼ and l_g,h is added to Θ:
(X_g,h B₂ Yᵢ,ⱼ), (Xᵢ,ⱼ B₂ Y_g,h),
which leads to the inclusion of
(Yᵢ,ⱼ+ > X_g,h+), (Y_g,h+ > Xᵢ,ⱼ+).
This construction guarantees that Θ is I-satisfiable iff D is satisfiable. The transformation for 𝒩₂ is similar. ∎
Based on this result, it follows straightforwardly that H is indeed a maximal tractable subclass of A.
Theorem 11: If S strictly contains H, then ISAT(S̄) is NP-complete.
The next question is whether there are other maximal tractable subclasses that are incomparable with H. One example of an incomparable tractable subclass is U = {{≺, ≻}, ⊤}. Since {≺, ≻} has no ORD-Horn clause form, this subclass is incomparable with H, and since all sets of interval formulas over U are trivially satisfiable (by making all intervals disjoint), ISAT(U) can be decided in constant time. The subclass U is, of course, not a very interesting fragment. Provided we are interested in temporal reasoning in the framework as described by Allen (1983), one necessary requirement is that all basic relations are contained in the subclass. A machine-assisted exploration of the space of subalgebras leads us to the following machine-verifiable lemma.
Lemma 12: If S is a subclass that contains the thirteen basic relations, then S̄ ⊆ H, or N₁ or N₂ is an element of S̄.
Using the fact that B₁, B₂ are elements of the least subalgebra generated by the set of basic relations and employing Lemma 10 again, we obtain the quite satisfying result that H is in fact the unique greatest tractable subclass amongst the subclasses containing all basic relations.
Theorem 13: Let S be any subclass of A that contains all basic relations. Then either S̄ ⊆ H and ISAT(S) is polynomial, or ISAT(S) is NP-complete.

Conclusion
We have identified a new tractable subclass of Allen's interval algebra, which we call the ORD-Horn subclass and which contains the previously identified continuous endpoint and pointisable subclasses. Enumerating the ORD-Horn subclass reveals that this subclass contains 868 elements out of the 8192 elements in the full algebra, i.e., more than 10% of the full algebra. Comparing this with the continuous endpoint subclass, which covers approximately 1%, and with the pointisable subclass, which covers 2%, our result is a clear improvement in quantitative terms. Furthermore, we showed that the "traditional" method of reasoning in Allen's interval algebra, namely the path-consistency method, is sufficient for deciding satisfiability in the ORD-Horn subclass. In other words, our results indicate that the path-consistency method has a much larger range of applicability for reasoning in Allen's interval algebra than previously believed, if we are mainly interested in satisfiability. Provided that a restriction to the subclass H is not possible in an application, our results may be employed in designing faster backtracking algorithms for the full algebra (Valdéz-Pérez 1987; van Beek 1990). Since our subclass contains significantly more relations than other tractable subclasses, the branching factor in a backtrack search can be considerably decreased if the ORD-Horn subclass is used.
Finally, we showed that it is impossible to improve on our results. Using a machine-generated case analysis, we showed that the ORD-Horn subclass is a maximal tractable subclass of Allen's interval algebra and, in fact, even the unique greatest tractable subclass in the set of subclasses that contain all basic relations. In other words, the ORD-Horn subclass presents an optimal tradeoff between expressiveness and tractability (Levesque & Brachman 1987) for reasoning in Allen's interval algebra.

Acknowledgments
We would like to thank Peter Ladkin, Henry Kautz, Ron Shamir, Bart Selman, and Marc Vilain for discussions concerning the topics of this paper. In addition, we would like to thank Christer Bäckström for comments on an earlier version of this paper.

References
Allen, J. F., and Hayes, P. J. 1985. A common-sense theory of time. In Proc. 9th IJCAI, 528-531.
Allen, J. F., and Koomen, J. A. 1983. Planning using a temporal world model. In Proc. 8th IJCAI, 741-747.
Allen, J. F. 1983. Maintaining knowledge about temporal intervals. CACM 26(11):832-843.
Allen, J. F. 1984. Towards a general theory of action and time. Artificial Intelligence 23(2):123-154.
Allen, J. F. 1991. Temporal reasoning and planning. In Allen, J. F.; Kautz, H. A.; Pelavin, R. N.; and Tenenberg, J. D., eds., Reasoning about Plans. San Mateo, CA: Morgan Kaufmann. Chapter 1, 1-67.
Feiner, S. K.; Litman, D. J.; McKeown, K. R.; and Passonneau, R. J. 1993. Towards coordinated temporal multimedia presentation. In Maybury, M., ed., Intelligent Multi Media. Menlo Park, CA: AAAI Press. Forthcoming.
Golumbic, M. C., and Shamir, R. 1992. Algorithms and complexity for reasoning about time. In Proc. 10th AAAI, 741-747.
Henschen, L., and Wos, L. 1974. Unit refutations and Horn sets. JACM 21:590-605.
Ladkin, P. B., and Maddux, R. 1988. On binary constraint networks. Technical Report KES.U.88.8, Kestrel Institute, Palo Alto, CA.
Ladkin, P. B. 1987. Models of axioms for time intervals. In Proc.
6th AAAI, 234-239.
Levesque, H. J., and Brachman, R. J. 1987. Expressiveness and tractability in knowledge representation and reasoning. Computational Intelligence 3:78-93.
Mackworth, A. K., and Freuder, E. C. 1985. The complexity of some polynomial network consistency algorithms for constraint satisfaction problems. Artificial Intelligence 25:65-73.
Mackworth, A. K. 1977. Consistency in networks of relations. Artificial Intelligence 8:99-118.
Montanari, U. 1974. Networks of constraints: fundamental properties and applications to picture processing. Information Science 7:95-132.
Nebel, B., and Bürckert, H.-J. 1993. Reasoning about temporal relations: A maximal tractable subclass of Allen's interval algebra. DFKI Research Report RR-93-11, Saarbrücken, Germany.
Nökel, K. 1989. Convex relations between time intervals. In Rettie, J., and Leidlmair, K., eds., Proc. 5. Österreichische Artificial Intelligence-Tagung. Berlin, Heidelberg, New York: Springer-Verlag. 298-302.
Song, F., and Cohen, R. 1988. The interpretation of temporal relations in narrative. In Proc. 7th AAAI, 745-750.
Valdéz-Pérez, R. E. 1987. The satisfiability of temporal constraint networks. In Proc. 6th AAAI, 256-260.
van Beek, P., and Cohen, R. 1990. Exact and approximate reasoning about temporal relations. Computational Intelligence 6:132-144.
van Beek, P. 1989. Approximation algorithms for temporal reasoning. In Proc. 11th IJCAI, 1291-1296.
van Beek, P. 1990. Reasoning about qualitative temporal information. In Proc. 8th AAAI, 728-734.
Vilain, M. B., and Kautz, H. A. 1986. Constraint propagation algorithms for temporal reasoning. In Proc. 5th AAAI, 377-382.
Vilain, M. B.; Kautz, H. A.; and van Beek, P. G. 1989. Constraint propagation algorithms for temporal reasoning: A revised report. In Weld, D. S., and de Kleer, J., eds., Readings in Qualitative Reasoning about Physical Systems. San Mateo, CA: Morgan Kaufmann. 373-381.
Weida, R., and Litman, D. 1992.
Terminological reasoning with constraint networks and an application to plan recognition. In Nebel, B.; Swartout, W.; and Rich, C., eds., Principles of Knowledge Representation and Reasoning: Proc. 3rd Int. Conf., 282-293. Cambridge, MA: Morgan Kaufmann.
Department of Computer Sciences
University of Texas at Austin
Taylor Hall 2.124
Austin, TX 78712-1188
hewett@cs.utexas.edu

Abstract
The RETE algorithm had a great impact on the development of efficient production systems by providing a fast pattern matching mechanism for activation. No similar mechanism has been available to speed up activation and scheduling in blackboard systems. In this paper we describe efficient, general-purpose efficiency mechanisms that are better suited to blackboard systems than RETE-like networks. We describe a knowledge source compiler that produces match networks and demons for efficient activation and rating while compiling the entire system for increased execution speed. Experiments using the enhancements in a general-purpose blackboard shell illustrate a substantial improvement in run time, including an N-92% decrease in activation time. The mechanisms we describe are general enough to be used in most existing blackboard systems.

1 Introduction
The blackboard architecture is a flexible framework for solving complex problems. It supports incremental development of solutions, integrated use of different types of knowledge at different levels of abstraction, and opportunistic control of reasoning. This flexibility can lead to complex interrelationships among blackboard states, potential actions, and strategies, causing difficulty in implementing an efficient system. For these reasons, blackboard applications are sometimes perceived as "slow". The efficiency of a knowledge-based system can be measured at many levels [Carver and Lesser, 1992]. At one level is the overhead of activating, selecting, and executing actions. Another measure is the quality of a system's control knowledge, which is used to select the best action to perform. In this paper we address the need to reduce the processing overhead of blackboard systems, independent of the quality of knowledge in the system.
Micheal Hewett
Computer Science and Engineering Department
Florida Atlantic University
P.O. Box 3091
Boca Raton, FL 33431-0091
hewett@cse.fau.edu

The RETE algorithm [Forgy, 1982], a fast pattern matching mechanism, was a major breakthrough in reducing the overhead of production systems. However, there has been no similar breakthrough in blackboard systems; each blackboard system implements different ad-hoc efficiency mechanisms for the basic execution cycle. Earlier work on efficiency of blackboard systems has been at higher or lower levels of the architecture. Some examples include meta-level frameworks for efficient control [Laasri and Maître, 1989], blackboard structures to optimize storage and retrieval operations [Corkill et al., 1988], and distributed and parallel implementations of blackboard systems [Lesser and Corkill, 1988; Corkill, 1989; Bisiani and Forin, 1989; Rice et al., 1989]. In this paper we will argue why a direct application of the RETE algorithm is not appropriate for blackboard systems. We then describe a set of general-purpose mechanisms that can be used in many blackboard systems. Experiments based on implementing the enhancements in the BB1 architecture illustrate substantial decreases in execution times. Since most blackboard systems share the same basic execution cycle, with minor variations, our results can be applied to most blackboard architectures.

2 The blackboard architecture
Various blackboard architectures have been implemented, including Hearsay-II [Erman and Lesser, 1975], AGE [Nii and Aiello, 1979], BB1 [Hayes-Roth and Hewett, 1988], and ERASMUS [Baum et al., 1989] (see [Engelmore and Morgan, 1988] for detailed descriptions of these and other blackboard architectures). Most implementations of the blackboard architecture provide knowledge sources for execution, a mixture of frame-based and semantic network representation methods, and a general mechanism for control of reasoning.
The knowledge representation component in a blackboard system is the blackboard. Blackboards contain objects, frame-like structures that form the basic unit of representation. A blackboard object has attributes and links to other objects. A blackboard is usually partitioned into levels containing related objects. In a blackboard system the knowledge source is the basic unit of execution, similar to productions or rules in a production system. The action of a knowledge source makes one or more changes to the blackboard, such as creating or deleting a blackboard object, changing the value of an attribute, or creating a link between two objects. Each change to the blackboard is logged as an event. Each knowledge source is triggered by an event described in its trigger conditions. When the described event occurs, the knowledge source is activated and one or more activations are created as appropriate for the context in which the knowledge source was triggered. Each activation, called a KSA, is then placed on the agenda. The agenda has two parts: the triggered agenda and the executable agenda. Only KSAs on the executable agenda are eligible for execution. A knowledge source contains state-based preconditions that determine whether the KSA is executable.
Enabling Technologies 465
A KSA's state may change from triggered to executable and back as the state of its preconditions varies in response to changes on the blackboard. A knowledge source also has obviation conditions. If these become true while the KSA is on the agenda, it is removed from the agenda and permanently discarded. The component responsible for selecting KSAs for execution is the scheduler. It uses control knowledge to select the best KSA available on each cycle.

2.1 The blackboard execution cycle
1. [ACTIVATE.]
   for every Event E of the last cycle do
      for every knowledge source KS do
         if KS.triggerConditions are satisfied by E then
            for every context C of KS do generate a KSA;
2. [ENABLE.]
   for every KSA do
      if KSA.preconditions are satisfied
         then place KSA on the executable agenda
         else place KSA on the triggered agenda;
3. [OBVIATE.]
   for every KSA do
      if KSA.obviationConditions are satisfied then
         remove KSA from the agenda;
4. [SCHEDULE.] rate KSAs and select a KSA to execute;
5. [EXECUTE.] execute a KSA, collecting events;
6. [LOOP.] go to Step 1;

At runtime the execution cycle activates and executes knowledge sources using their conditions and actions, as in the BB1 execution cycle shown above. There are several places where a naive implementation can encounter efficiency problems. Steps 1, 2, 3 and 4 all loop through every existing KS or KSA, of which there can be many. In our experience, agenda management (Steps 1 and 2) involves considerable overhead and often consumes more processing time than execution.

3 Efficiency mechanisms
Common techniques for gaining efficiency in AI architectures include compilation, pattern-matching networks, and demons. In our approach we apply all three techniques to construct efficient general mechanisms for use in blackboard systems. A demon is a small process that is activated by a specific change to working memory. Demons have been used previously in blackboard systems. Poligon [Rice et al., 1989], a parallel, distributed blackboard system, uses demons to directly invoke rules. However, Poligon was designed to operate without global control and does not have an agenda per se. While Poligon uses demons to direct execution, we will use demons for agenda maintenance and rating. Compilation is the main technique for speeding up execution. As is well known, compiled code executes as much as one hundred times faster than interpreted code, so compiling knowledge sources into lower-level functions will increase execution speed.
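For illustration, the cycle of Section 2.1 can be condensed into a single function. All class interfaces (`trigger`, `contexts`, `preconditions_hold`, and so on) are hypothetical stand-ins, not the API of BB1 or any particular shell.

```python
# Sketch of one pass through the basic blackboard execution cycle:
# activate, enable/obviate, schedule, execute. Interfaces are
# illustrative assumptions.

def run_cycle(knowledge_sources, agenda, events, rate):
    # 1. ACTIVATE: trigger knowledge sources on last cycle's events
    for event in events:
        for ks in knowledge_sources:
            if ks.trigger(event):
                for context in ks.contexts(event):
                    agenda.append(ks.instantiate(context))
    # 3. OBVIATE: discard KSAs whose obviation conditions hold
    agenda[:] = [ksa for ksa in agenda if not ksa.obviated()]
    # 2. ENABLE: KSAs whose preconditions hold are executable
    executable = [ksa for ksa in agenda if ksa.preconditions_hold()]
    if not executable:
        return []
    # 4. SCHEDULE: pick the highest-rated executable KSA
    best = max(executable, key=rate)
    agenda.remove(best)
    # 5. EXECUTE: run it, collecting events for the next cycle
    return best.execute()
```

Note that every step scans the whole agenda or knowledge source set, which is exactly the overhead the paper's match networks and demons are designed to remove.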
Additionally, compiling a pattern matching network can improve match times by about 15% [Scales, 1986].

3.1 Why not use RETE?
A RETE network is built by parsing the conditions (LHS) of a set of productions and constructing a network with two parts. The match part detects when working memory elements (WMEs) match a single pattern in a condition. The join part detects when the entire condition is satisfied. Each terminal node in the join network corresponds to a production. Whenever a WME is added, deleted, or modified, the WME is passed through the network. If a terminal node is activated, its production is placed in the conflict set. Every production system uses some variation of RETE as an efficient activation mechanism. Our original intent was to construct a version of RETE for use in blackboard systems. However, we now realize that RETE is not an appropriate activation mechanism for blackboard applications. Figure 1 illustrates the difference in activation for production systems and blackboard systems. In a production system, rules (productions) are activated when several WMEs, denoted by the shaded circles, match the patterns in a rule. These patterns may be scattered throughout working memory. In a blackboard system, initial activation (triggering) is caused by a single event, denoted by the unshaded circle in the figure. Full activation, satisfaction of the knowledge source's preconditions, is usually determined by objects related to the triggering object via links or by virtue of being at the same level on the blackboard. These objects are specified by bindings in the preconditions. Thus, there is usually no need to match patterns throughout the entire blackboard: the objects needed to match the patterns can be directly accessed.

Fig. 1. Different activation mechanisms. (Shaded circles denote the state-based matches of Rule1 and Rule2; the unshaded circle denotes an event-based match.)

This section describes how to apply the techniques of the previous section to the major components of blackboard systems.
We will describe how to use demons for efficient state-based activation and rating, a discrimination network for efficient event-based activation, and compilation for efficient execution.

4.1 Activation
As described in Section 2, activation in blackboard systems is a two-stage process. The initial event-based activation generates activations (KSAs) from knowledge sources. In the second stage, a KSA's state-based preconditions must be satisfied for it to be eligible for execution.

4.1.1 Event-based activation
Event-based activation (triggering) involves comparing a number of events against the trigger conditions of each knowledge source on each execution cycle. In our approach, the trigger conditions are compiled into a discrimination network, much like the match part of a RETE network. We require that trigger conditions bind variables when they are first referenced. This simple restriction eliminates the need for the join part of the RETE network and makes trigger conditions equivalent to a knowledge representation mechanism known as access paths [Crawford, 1990]. Access paths provide a well-defined semantics for ordering conjunctive queries so that knowledge base access is contained and controlled, and therefore is more efficient. In our mechanism, the discrimination network uses the event type, event level, attribute and/or link as the match keys at its branch points. The leaf nodes of the network are knowledge sources. Each cycle the events of the last cycle are passed through the discrimination network. If a knowledge source's node is activated (i.e., an event was accepted by the network), its state-based trigger conditions (if any) are checked and one or more KSAs are generated using the knowledge source's context. The discrimination network improves efficiency by reducing the number of comparisons needed to match events with trigger conditions.
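The idea of dispatching events through match keys rather than testing every knowledge source can be sketched with a simple index keyed on event type and level. This is a data-structure version of the network; the paper compiles it to functions instead. All names below are illustrative.

```python
# Sketch: event-based triggering via an index keyed on
# (event type, blackboard level), so only knowledge sources whose
# trigger keys match an event are considered. Names are illustrative.
from collections import defaultdict

class TriggerIndex:
    def __init__(self):
        self.index = defaultdict(list)   # (etype, level) -> [KS names]

    def register(self, ks_name, etype, level):
        self.index[(etype, level)].append(ks_name)

    def activate(self, events):
        """Pass last cycle's events through the index; return KSAs."""
        ksas = []
        for etype, level, obj in events:
            for ks in self.index.get((etype, level), []):
                ksas.append((ks, obj))   # one KSA per triggering context
        return ksas

t = TriggerIndex()
t.register('track-vehicle', 'create-object', 'vehicles')
t.register('update-display', 'modify-attribute', 'vehicles')
print(t.activate([('create-object', 'vehicles', 'V1')]))
# → [('track-vehicle', 'V1')]
```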
For additional speed, our network is compiled into functional form rather than maintained as a data structure.

4.1.2 State-based activation

Like production systems, state-based activation in blackboard systems often consumes much more processing time than knowledge source execution. A typical application has dozens of KSAs on the agenda, each with several preconditions. The task of an efficient architecture is to check a precondition only when its state may have changed. We have determined that a demon-based architecture can provide a very large decrease in activation time while maintaining the generality of the architecture.

In BB1, a KSA is in one of several states: executable, triggered, or obviated. Thus, the Agenda Manager must continually check both preconditions and obviation conditions of many KSAs. The state of each precondition depends on the state of one or more blackboard components (levels, objects, attributes, etc.) which may or may not change state from cycle to cycle. BB1 currently uses some ad hoc mechanisms to improve the efficiency of precondition checking. However, BB1 does not attempt to provide optimal precondition checking and performs very poorly when an agenda contains a large number of executable KSAs [Hewett and Hewett, 1993].

Our implementation uses a demon-based mechanism to indicate which conditions need to be rechecked. A state-based condition must, by definition, reference some item on a blackboard. A condition, then, must be reevaluated when that blackboard item is modified. We place a demon on a blackboard item to note a relationship between the item and a precondition of a KSA, thus providing a way to notify the architecture when a condition must be rechecked. The appropriate location for a demon can be noted by a knowledge source compiler, thus relieving the user of the need to create and place them.

Fig. 2. Demons for state-based activation.
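A minimal sketch of this demon-based bookkeeping, assuming simplified object and KSA classes (hypothetical names, not BB1's API): a demon links a blackboard item to one precondition of a KSA, and modifying the item only flags that precondition for re-evaluation at agenda-maintenance time.

```python
# Sketch of demon-based state activation. Modifying a blackboard object
# activates its attached demons, which mark the dependent preconditions
# dirty; the agenda-maintenance phase then evaluates only those.

class BlackboardObject:
    def __init__(self, name):
        self.name, self.attrs, self.demons = name, {}, []

    def modify(self, attr, value):
        self.attrs[attr] = value
        for demon in self.demons:      # activate attached demons
            demon()

class KSA:
    def __init__(self, name, preconditions):
        self.name = name
        self.preconditions = preconditions  # list of zero-arg predicates
        self.dirty = set()                  # precondition indices to recheck

    def place_demon(self, obj, precond_index):
        obj.demons.append(lambda i=precond_index: self.dirty.add(i))

    def maintain(self):
        # agenda-maintenance phase: evaluate only flagged preconditions
        results = {i: self.preconditions[i]() for i in self.dirty}
        self.dirty.clear()
        return results

obj = BlackboardObject("Object-i")
ksa = KSA("KSA-k", [lambda: obj.attrs.get("status") == "ready"])
ksa.place_demon(obj, 0)
obj.modify("status", "ready")
assert ksa.maintain() == {0: True}
assert ksa.maintain() == {}   # nothing changed, nothing rechecked
```

The delayed evaluation in `maintain` mirrors the paper's point that the precondition is not evaluated at demon-activation time but during the agenda maintenance phase of the cycle.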
A potential disadvantage of this method is the overhead of adding and removing demons as KSAs are created and disposed. However, we will show in Section 5 that demon-based activation produces a large decrease in activation time for every tested application.

The compiler produces demon templates that are instantiated when their corresponding KSAs are instantiated. In the example of Figure 2, when KSA-k is instantiated and one of its local variables is bound to Object-i, the demon is instantiated and placed on Object-i. When Object-i is modified by the execution of KSA-j, the demon is activated and causes the third precondition of KSA-k to be evaluated. The actual evaluation of the precondition is delayed until the agenda maintenance phase of the execution cycle. The activation signal is also passed to the superior of the demon's location. For example, a change to the value of an attribute is also a change to the object, which is a change to its level, which is a change to the blackboard containing the level. Demons could be activated at any point along the signal's path. Demons are removed whenever a KSA is executed or is obviated. Section 5 shows the time improvements obtained using this mechanism.

4.2 Control

In a blackboard system, control is the process of selecting the next action or actions to execute. In agenda-based systems, control has two parts: rating and scheduling. Rating assigns a priority to each executable KSA. Scheduling selects a KSA and queues it for execution. There are many factors affecting the efficiency of control, including the size of the agenda, the complexity and number of rating functions, and the frequency with which actions must be re-rated. See [Carver and Lesser, 1992] for more details.

4.2.1 Scheduling

The scheduler selects an action from the agenda and queues it for execution.
Usually the selected action is the highest-priority action, but with a flexible control module, the actual criteria for selection are user-definable. For most situations, a sorted agenda would seem to be an appropriate data structure. However, when we implemented a sorted agenda in BB1, we found that the execution time of the system increased by 10% even though the time to retrieve the highest-priority item was significantly reduced. This is because we need not sort all of the actions. In most systems only the highest-priority action needs to be identified. The unnecessary sorting of lower-priority actions is simply a waste of time. A simple linear search on an unsorted agenda provides suitable performance for a scheduler.

4.2.2 The BB1 rating mechanism

RATE-ALL-KSAs(Operative, Dynamic, Changed)
  for every KSA in KSAs-TO-RATE do
    if KSA is newly-executable
      for every R in Operative-criteria do Rate(KSA, R);
    else
      for every R in Rating-criteria do Rate(KSA, R);
      for every R in Deactivated-criteria do Remove-rating(KSA, R);
    end-if
    Prioritize(KSA)
  end-for

In BB1, the Rater applies rating functions from active control elements in the control plan to executable KSAs. Control elements can be dynamic or static. A dynamic control element is one whose rating criterion is state-dependent, potentially causing its ratings to change each cycle. During the control phase each control element in the current control plan rates every new KSA, and each dynamic control element re-rates every executable KSA. To rate each executable KSA, the Rater identifies the operative, dynamic, new, and changed control elements. It uses them as shown in the algorithm above.

The Rater frequently rates new executable KSAs, rates KSAs when there is a new control element, re-rates KSAs when a control element is modified, and removes ratings from existing KSAs when a control element is deactivated.
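The scheduling observation of Section 4.2.1 is easy to see in code: only the highest-priority KSA is needed each cycle, so a single O(n) max-scan over an unsorted agenda does less total work than keeping the agenda sorted on every insertion. A minimal sketch, with hypothetical (name, priority) pairs standing in for KSAs:

```python
# Sketch of the unsorted-agenda scheduler: find the highest-priority
# action with one linear scan instead of maintaining a sorted agenda.

def select_next(agenda):
    """Return the highest-priority (name, priority) pair, or None."""
    return max(agenda, key=lambda ksa: ksa[1]) if agenda else None

agenda = [("KSA-a", 3), ("KSA-b", 9), ("KSA-c", 5)]
assert select_next(agenda) == ("KSA-b", 9)
assert select_next([]) is None
```

Per cycle this is one pass over the agenda, whereas a sorted agenda pays sorting cost for every insertion and re-rating, which is exactly the 10% slowdown the text reports.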
Additionally, the Rater re-rates every KSA by dynamic control elements every cycle. There is potentially a lot of redundant computation in rating, especially when dynamic control is used. Our goal is to reduce the number of KSAs that are repeatedly re-rated unnecessarily by using demons to relate control elements to specific KSAs.

4.2.3 Demon-based rating

Similar to the way demons can be used to associate blackboard items with state-based preconditions, we also associate blackboard items with control elements used in rating KSAs. A well-designed control element focuses on a certain part of the solution state. If the relevant part of the solution state changes, KSAs that are related to that part of the solution state will need to be re-rated. These KSAs can be located by following activation demons from the objects in the solution state. The control elements that need to re-rate these KSAs can be located by following control demons from the same objects. The Rater is then notified that certain KSAs need to be rated by certain control elements.

Fig. 3. Using control demons for rating.

In the example of Figure 3, Object-i has an activation demon to KSA-k and a control demon to dynamic control element CE-i. If Object-i is modified, the control demon tells CE-i to re-rate KSA-k. During the rating phase of the execution cycle, the Rater processes activated control demons, much like the demon-based agenda mechanism described above. Rating occurs in the following situations:

1. New KSA: When placing new agenda demons, check for any control demons in the same location. If they exist, activate them to rate the new KSA.
2. New CE: When placing control demons, check for any agenda demons in the same location. If they exist, activate the new control demons to rate the existing KSAs.
3. CE deleted: Remove any associated control demons while checking for agenda demons in the same location.
If they exist, activate the control demon one last time to re-rate the existing KSAs.
4. CE modified: If a CE is modified so that its rating criterion or weight has changed, it will be necessary to re-rate any associated KSAs. This is handled by combining operations 2 and 3 above.
5. BB modified: If a blackboard object changes, activate its associated control demons.

Notice that the demon-based rating mechanism depends on the existence of the demon-based activation mechanism. Since the two mechanisms are closely related, the actual implementation of the rating mechanism was very simple. However, while it is fairly easy to write knowledge sources in such a way that the compiler can determine where to place activation demons, it is not as easy to construct "knowledge source independent" control knowledge. The appropriate methods of structuring control knowledge to ensure that it can operate in this manner require further research.

4.3 Execution

We improve the efficiency of executing the actions of a knowledge source by the simple expedient of compiling the actions. The compiler also inserts code into the actions to place and remove activation and control demons. The speed increase from compilation is related to the complexity of the actions. Our test applications, which have fairly trivial actions, do not show a large increase in execution speed. Since demon activation and removal occurs during execution, it is possible for the overhead of these operations to increase the execution time of some systems. In our test systems, the overhead was noticeable but negligible compared to the decrease in activation time.

4.4 Knowledge source compilation

The mechanisms described in the sections above all refer to a knowledge source compiler that produces the discrimination network for triggering, produces the activation demons and control demons, and compiles the actions and conditions of the knowledge sources.
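The demon-based rating bookkeeping of Section 4.2.3 can be sketched briefly. In this hypothetical encoding (not BB1's data structures), an object carries activation demons pointing at related KSAs and control demons pointing at dynamic control elements; modifying the object yields exactly the (control element, KSA) pairs that need re-rating, instead of re-rating every KSA each cycle.

```python
# Sketch of demon-based rating: compute which (CE, KSA) pairs need
# re-rating from the demons attached to the modified objects.

class Item:
    def __init__(self):
        self.activation_demons = []   # KSAs related to this item
        self.control_demons = []      # dynamic control elements for it

def pairs_to_rerate(modified_items):
    pairs = set()
    for item in modified_items:
        for ce in item.control_demons:
            for ksa in item.activation_demons:
                pairs.add((ce, ksa))
    return pairs

obj_i = Item()
obj_i.activation_demons.append("KSA-k")
obj_i.control_demons.append("CE-i")
assert pairs_to_rerate([obj_i]) == {("CE-i", "KSA-k")}
```

This is the Figure 3 example: modifying Object-i causes CE-i to re-rate KSA-k, and nothing else.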
In this section we provide an overview of the compiler. The compiler requires that the knowledge sources be written in a simple blackboard language described in [Hewett and Hewett, 1993]. Some examples of sentences in this language are:

1. ADD <object-name> TO <level-name> [WITH [ATTRIBUTES <attribute-value-list>] [LINKS <link-object-list>]]
2. DELETE <object-name>
3. CHANGE <attribute-name> OF <object-name> TO <new-value>
4. LINK <link-name> FROM <object1-name> TO <object2-name>

The knowledge source compiler processes knowledge sources and compiles their conditions and actions into a lower-level language (e.g., LISP, C or C++). Through an analysis of the conditions, it generates a set of activation demons for each knowledge source. Additional code is added to each compiled KS to instantiate and remove demons at the appropriate times.

At runtime, each KSA has a set of local variable bindings that differentiate it from other instances of the same knowledge source. The conditions and actions of each knowledge source are compiled into functions that accept the set of variable bindings for the particular KSA being evaluated.

5 Experimental results

We used three applications to study the effects of the new activation mechanism in BB1. The first, TSP, is a heuristic solution of a 10-city traveling salesman problem that has previously been used to benchmark BB1. The other two applications, TEST-T and TEST-E, are designed to test the effect of the enhancements on different agenda characteristics. Both applications always have twenty active KSAs. TEST-T has nineteen KSAs on the triggered agenda and one KSA on the executable agenda. TEST-E keeps all twenty KSAs on the executable agenda every cycle. TSP uses a small amount of control knowledge, while the other two applications use no control. Figure 4 summarizes the characteristics of the benchmark applications.
We measured the time spent in the agenda maintenance, control, and execution phases of the execution cycle as well as the total time of the run. Included in the execution time is the cost of demon activation and the overhead of instantiating and removing demons. All timed runs were made on a single-user Sun IPC running Lucid Common LISP and BB1 v2.5. The times labeled eff-1 show the improvement gained by implementing only the efficient activation mechanism. The times labeled eff-2 include both the efficient activation and the efficient control mechanisms. Figure 5 shows the run times of all three applications.

5.1 Results for TSP

Figure 5a shows the runtime of TSP in standard BB1 and in new implementations using the efficiency mechanisms described in this paper. TSP runs for 17 execution cycles, so the overall runtime is relatively short. Because TSP maintains a large executable agenda and has a lot of movement to and from different parts of the agenda, the large increase in agenda maintenance speed using the demon-based architecture is not a surprise. Notice the very slight increase in execution time when the demon-based control mechanism is used. This is the effect of the demon activation overhead. Overall, the performance gain is approximately 55%.

5.2 Results for TEST-T and TEST-E

Fig. 5. Run times for the test applications: a) TSP, b) TEST-T, c) TEST-E (agenda, control, execution, and total times).

Figure 5b shows the runtime of the TEST-T application in standard BB1 and the new implementations. The standard BB1 handles TEST-T relatively well, so we should expect our new mechanism to have a relatively smaller impact on the performance of TEST-T than on the performance of the other applications. Despite this, the performance increase for TEST-T is quite large. Agenda maintenance times are reduced by 80% and the overall runtime is reduced by about 60%.
Notice that in standard BB1 agenda maintenance consumes a much larger amount of time than the control and execution phases, while in the new implementation it consumes much less time.

Figure 5c compares the runtime of the TEST-E application in standard BB1 and in the new implementations. As expected, the results for TEST-E are similar to those of TEST-T, with as good or better improvement in agenda maintenance time.

5.3 Summary of results

Overall, the results show a substantial decrease in agenda maintenance time and significant reductions in other phases. Applications with more complex knowledge source actions will show a larger decrease in execution time, while those with a large amount of complex, dynamic control will show a larger decrease in control time. The overhead of demon processing is not substantial.

A final example demonstrates that the results are consistent for larger applications. As described above, TEST-E maintains an executable agenda of 20 KSAs. We ran TEST-E in modes using 20, 40, and 80 KSAs, producing the total run times shown in Figure 6. The total runtime of the new implementation managing eighty KSAs is approximately two-thirds of the total runtime of the old implementation managing only twenty KSAs.

Fig. 6. TEST-E with different agenda sizes.

6 Summary

We have illustrated why a RETE-like pattern-matching network is not suitable for blackboard systems. As an alternative, we presented efficient blackboard activation, execution and rating mechanisms. Our mechanisms combine compilation techniques, a matching network, and demon-based activation and rating to achieve substantial decreases in runtime for all tested systems. We have demonstrated that, like production systems, activation in blackboard systems can be a major efficiency problem. Our efficiency mechanisms address this and reduce the time spent in activation to approximately 15% of the total runtime.
Further work includes finding suitable formulations of control knowledge that can reap the benefits of the demon-based rating mechanism.

References

Baum, L.S., Dodhiawala, R.T. and Jagannathan, V. (1989) "The Erasmus System." Blackboard Architectures and Applications, Jagannathan, V., R. Dodhiawala and L.S. Baum, editors, pp. 347-370, Academic Press.
Bisiani, R. and Forin, A. (1989) "Parallelization of Blackboard Architectures and the Agora System." Blackboard Architectures and Applications, Jagannathan, V., R. Dodhiawala and L.S. Baum, editors, pp. 137-152, Academic Press.
Carver, N. and V. Lesser (1992). The Evolution of Blackboard Control Architectures. CMPSCI Technical Report 92-71, University of Massachusetts at Amherst.
Corkill, D.D. (1989) "Design Alternatives for Parallel and Distributed Blackboard Systems." Blackboard Architectures and Applications, Jagannathan, V., R. Dodhiawala and L.S. Baum, editors, pp. 99-136, Academic Press.
Corkill, D.D., Gallagher, K.Q. and Murray, K.E. (1988) "GBB: A Generic Blackboard Development System." Blackboard Systems, Engelmore, R., and T. Morgan, editors, pp. 503-517, Addison-Wesley.
Crawford, J. (1990). Access-Limited Logic: A Language for Knowledge Representation. PhD Thesis, University of Texas at Austin, Technical Report AI90-141.
Engelmore, R. and Morgan, T., editors (1988). Blackboard Systems. Addison-Wesley.
Erman, L.D. and Lesser, V.R. (1975) A multi-level organization for problem-solving using many diverse cooperating sources of knowledge. In: Proceedings of the Fourth International Joint Conference on Artificial Intelligence (IJCAI-75), pp. 483-490.
Forgy, C. (1982) RETE: a fast algorithm for the many pattern/many object pattern match problem. Artificial Intelligence 19(1):17-37.
Hayes-Roth, B. and Hewett, M. (1988) "BB1: An Implementation of the Blackboard Control Architecture." Blackboard Systems, Engelmore, R., and T. Morgan, editors, pp. 297-313, Addison-Wesley.
Hewett, M.
and Hewett, R. (1993) A Language and Architecture for Efficient Blackboard Systems. In: Proceedings of the Ninth International IEEE Conference on AI Applications (CAIA '93), Orlando, Florida.
Laasri, H. and Maître, B. (1989) "Flexibility and Efficiency in Blackboard Systems: Studies and Achievements in ATOME." Blackboard Architectures and Applications, Jagannathan, V., R. Dodhiawala and L.S. Baum, editors, pp. 309-322, Academic Press.
Lesser, V.R. and Corkill, D.D. (1988) "The Distributed Vehicle Monitoring Testbed." Blackboard Systems, Engelmore, R., and T. Morgan, editors, pp. 353-386, Addison-Wesley.
Nii, H.P. and Aiello, N. (1979) AGE: A Knowledge-Based Program for Building Knowledge-Based Programs. In: Proceedings of the Sixth International Joint Conference on Artificial Intelligence (IJCAI-79), pp. 645-655.
Rice, J., Aiello, N., and Nii, H.P. (1989) "See How They Run..." Blackboard Architectures and Applications, Jagannathan, V., R. Dodhiawala and L.S. Baum, editors, pp. 153-178, Academic Press.
Scales, D.J. (1986) Efficient Matching Algorithms for the SOAR/OPS5 Production System, Technical Report STAN-CS-86-1124, Stanford University, Stanford, California.
A Filtering Algorithm for Constraints of Difference in CSPs

Jean-Charles Régin
GDR 1093 CNRS, LIRMM UMR 9928
Université Montpellier II / CNRS
161, rue Ada - 34392 Montpellier Cedex 5 - France
e-mail: regin@lirmm.fr

Abstract

Many real-life Constraint Satisfaction Problems (CSPs) involve some constraints similar to the alldifferent constraints. These constraints are called constraints of difference. They are defined on a subset of variables by a set of tuples for which the values occurring in the same tuple are all different. In this paper, a new filtering algorithm for these constraints is presented. It achieves the generalized arc-consistency condition for these non-binary constraints. It is based on matching theory and its complexity is low. In fact, for a constraint defined on a subset of p variables having domains of cardinality at most d, its space complexity is O(pd) and its time complexity is O(p^2 d^2). This filtering algorithm has been successfully used in the system RESYN (Vismara et al. 1992) to solve the subgraph isomorphism problem.

Introduction

The constraint satisfaction problems (CSPs) form a simple formal frame to represent and solve some problems in artificial intelligence. The problem of the existence of solutions in a CSP is NP-complete. Therefore, some methods have been developed to simplify the CSP before or during the search for solutions. The consistency techniques are the most frequently used. Several algorithms achieving arc-consistency have been proposed for binary CSPs (Mackworth 1977; Mohr & Henderson 1986; Bessière & Cordier 1993; Bessière 1994) and for n-ary CSPs (Mohr & Masini 1988a). Only limited work has been carried out on the semantics of constraints: (Mohr & Masini 1988b) have described an improvement of the algorithm AC-4 for special constraints introduced by a vision problem, and (Van Hentenryck, Deville, & Teng 1992) have studied monotonic and functional binary constraints.
In this work, we are interested in a special case of n-ary constraints: the constraints of difference, for which we propose a filtering algorithm.*

* This work was supported by SANOFI-CHIMIE.

A constraint is called constraint of difference if it is defined on a subset of variables by a set of tuples for which the values occurring in the same tuple are all different. They are present in many real-life problems. These constraints can be represented as n-ary constraints and filtered by the generalized arc-consistency algorithm GAC4 (Mohr & Masini 1988a). This filtering efficiently reduces the domains but its complexity can be expensive. In fact, it depends on the length and the number of all admissible tuples. Let us consider a constraint of difference defined on p variables, which take their values in a set of cardinality d. The number of admissible tuples then corresponds to the number of permutations of p elements selected from d elements without repetition: d!/(d-p)!. Therefore some constraint resolution systems, like CHIP (Van Hentenryck 1989), represent these n-ary constraints by sets of binary constraints. In this case, a binary constraint of difference is built for each pair of variables belonging to the same constraint of difference. But the pruning performance of arc-consistency for these constraints is poor. In fact, for a binary alldifferent constraint between two variables i and j, arc-consistency removes a value from the domain of i only when the domain of j is reduced to a single value. Let us suppose we have a CSP with 3 variables x1, x2, x3 and one constraint of difference between these variables (see Figure 1). The domains of the variables are D1 = {a, b}, D2 = {a, b} and D3 = {a, b, c}. The GAC4 filtering with the constraint of difference represented by a 3-ary constraint

Figure 1. Representation by a 3-ary constraint vs. representation by binary constraints of difference.

362 Constraint Satisfaction

From: AAAI-94 Proceedings.
Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

removes the values a and b from the domain of x3, while arc-consistency with the constraint of difference represented by binary constraints of difference does not delete any value.

In this paper we present an efficient way of implementing the generalized arc-consistency condition for the constraints of difference, in order to benefit from its pruning performance. Its space complexity is in O(pd) and its time complexity is in O(p^2 d^2).

The rest of the paper is organized as follows. Section 2 gives some preliminaries on constraint satisfaction problems and matching, and proposes a restricted definition of arc-consistency which concerns only the constraints of difference: the diff-arc-consistency. Section 3 presents a new condition to ensure the diff-arc-consistency in CSPs having constraints of difference. In Section 4 we propose an efficient implementation to achieve this condition and analyse its complexity. In Section 5, we show its performance and its interest with an example. A conclusion is given in Section 6.

Preliminaries

A finite CSP (Constraint Satisfaction Problem) P = (X, D, C) is defined as a set of n variables X = {x1, ..., xn}, a set of finite domains D = {D1, ..., Dn} where Di is the set of possible values for variable i, and a set of constraints between variables C = {C1, C2, ..., Cm}. A constraint Ci is defined on a set of variables (xi1, ..., xij) by a subset of the Cartesian product Di1 x ... x Dij. A solution is an assignment of a value to every variable which satisfies all the constraints.

We will denote by:
- D(X') the union of the domains of the variables of X' ⊆ X (i.e., D(X') = ∪ over i in X' of Di).
- Xc the set of variables on which a constraint C is defined.
- p the arity of a constraint C: p = |Xc|.
- d the maximal cardinality of the domains.
A value ai in the domain of a variable xi is consistent with a given n-ary constraint if there exist values for all the other variables in the constraint such that these values together with ai simultaneously satisfy the constraint. More generally, arc-consistency for n-ary CSPs, or generalized arc-consistency, is defined as follows (Mohr & Masini 1988a):

Definition 1 A CSP P = (X, D, C) is arc-consistent iff: for all xi in X, for all ai in Di, for every C in C constraining xi with xj, ..., xk in Xc, there exist aj, ..., ak such that C(aj, ..., ai, ..., ak) holds.

Definition 2 Given a CSP P = (X, D, C), a constraint C is called constraint of difference if it is defined on a subset of variables Xc = {xi1, ..., xik} by a set of tuples, denoted by tuples(C), such that: tuples(C) ⊆ Di1 x ... x Dik \ {(d1, ..., dk) in Di1 x ... x Dik s.t. there exist u, v with du = dv}.

From the previous definition, we propose a special arc-consistency which concerns only the constraints of difference:

Definition 3 A CSP P = (X, D, C) is diff-arc-consistent iff all of its constraints of difference are arc-consistent.

Definition 4 Given a constraint of difference C, the bipartite graph GV(C) = (Xc, D(Xc), E), where (xi, a) is in E iff a is in Di, is called the value graph of C.

Figure 2 gives an example of a constraint of difference and its value graph: X = {x1, x2, x3, x4, x5, x6} with Dx1 = {1,2}, Dx2 = {2,3}, Dx3 = {1,3}, Dx4 = {2,4}, Dx5 = {3,4,5,6}, Dx6 = {6,7}.

Figure 2: A constraint of difference defined on a set X and its value graph.

Definition 5 A subset of edges in a graph G is called a matching if no two edges have a vertex in common. A matching of maximum cardinality is called a maximum matching. A matching M covers a set X if every vertex in X is an endpoint of an edge in M. Note that a matching which covers X in a bipartite graph G = (X, Y, E) is a maximum matching.
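The value graph of Definition 4 is just a bipartite adjacency structure built from the domains, which is what gives the O(pd) space bound: one edge per (variable, value) pair. A minimal sketch, using a plain dict as a hypothetical encoding and the Figure 2 example:

```python
# Sketch of the value graph GV(C): one edge (x, a) per value a in the
# domain of x, stored as an adjacency dict. Space is O(pd).

def value_graph(domains):
    """domains: dict variable -> iterable of values; returns var -> edge set."""
    return {x: set(vals) for x, vals in domains.items()}

# The Figure 2 example
domains = {"x1": [1, 2], "x2": [2, 3], "x3": [1, 3],
           "x4": [2, 4], "x5": [3, 4, 5, 6], "x6": [6, 7]}
gv = value_graph(domains)
assert gv["x5"] == {3, 4, 5, 6}
assert sum(len(edges) for edges in gv.values()) == 14  # |E| = sum of |Di|
```

Compare this with the d!/(d-p)! admissible tuples an explicit n-ary representation would store: the value graph loses no information for this constraint class, since the tuples are exactly the systems of distinct representatives of the domains.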
From the definition of a matching and the value graph, we present in the next section a new necessary condition to ensure diff-arc-consistency in CSPs having constraints of difference.

A new condition for CSPs having constraints of difference

The following theorem establishes a link between diff-arc-consistency and the matching notion in the value graph of the constraints of difference.

Theorem 1 Given a CSP P = (X, D, C), P is diff-arc-consistent iff for each constraint of difference C of C, every edge in GV(C) belongs to a matching which covers Xc in GV(C).

Proof
=>: Let us consider a constraint of difference C and GV(C) its value graph. From each admissible tuple of C, a set of pairs can be built. A pair consists of a variable and its assigned value in the tuple.

Tractable Problems 363

The set of pairs contains a pair for each variable. This set corresponds to a set of edges, denoted by A, in GV(C). Since P is diff-arc-consistent, the values in each tuple are all different. Thus, two edges of A cannot have a vertex in common and A is a matching which covers Xc. Moreover, each value of each variable in the constraint belongs to at least one tuple. So each edge of GV(C) belongs to a matching which covers Xc.
<=: Let us consider a variable xi and a value a of its domain. For each constraint of difference C, the pair (xi, a) belongs to a matching which covers Xc in GV(C). Since in a matching no two edges have a vertex in common, there exist values for all the other variables in the constraint such that these values together simultaneously satisfy the constraint. So P is diff-arc-consistent. []

The use of matching theory is interesting because (Hopcroft & Karp 1973) have shown how to compute a matching which covers X in a bipartite graph G = (X, Y, E), with m edges, in time O(m sqrt(|X|)).¹

This theorem gives us an efficient way to represent the constraints of difference in a CSP. In fact, a constraint of difference can be represented only by its value graph, with a space complexity in O(pd). It also allows us to define a basic algorithm (Algorithm 1) to filter the domains of the variables of the set on which one constraint of difference is defined. This algorithm builds the value graph of the constraint of difference and computes a matching which covers Xc in order to delete every edge which belongs to no matching covering Xc. Figure 3 gives an application of this filtering.

Algorithm 1: DIFF-INITIALIZATION(C)
% returns false if there is no solution, otherwise true
% the function COMPUTEMAXIMUMMATCHING(G) computes a maximum matching in the graph G
begin
  1 Build G = (Xc, D(Xc), E)
  2 M(G) <- COMPUTEMAXIMUMMATCHING(G)
    if |M(G)| < |Xc| then return false
  3 REMOVEEDGESFROMG(G, M(G))
  return true
end

The complexity of step 1 is O(d|Xc| + |Xc| + |D(Xc)|). Step 2 costs O(d|Xc| sqrt(|Xc|)). And we now show that it is possible to compute step 3 in linear time. So the complexity for one constraint of difference will be O(d|Xc| sqrt(|Xc|)).

Deletion of every edge which belongs to no matching which covers X

In order to simplify the notation, we consider a bipartite graph G = (X, Y, E) rather than the bipartite graph G = (Xc, D(Xc), E), and a matching M which covers X in G.

¹ (Alt et al. 1991) give an implementation of Hopcroft and Karp's algorithm which runs in time O(|X|^1.5 sqrt(m / log |X|)). For dense graphs this is an improvement by a factor of sqrt(log |X|).

In order to understand how we can delete every edge which belongs to no matching, we present a few definitions about matching theory. For more information the reader can consult (Berge 1970) or (Lovász & Plummer 1986).

Figure 3: A value graph before and after the filtering.

Definition 6 Let M be a matching; an edge in M is a matching edge; every edge not in M is free. A vertex is matched if it is incident to a matching edge and free otherwise.
An alternating path or cycle is a simple path or cycle whose edges are alternately matching and free. The length of an alternating path or cycle is the number of edges it contains. An edge which belongs to every maximum matching is vital.

Figure 3 gives an example of a matching which covers X in a bipartite graph. The bold edges are the matching edges. Vertex 7 is free. The path (7, x6, 6, x5, 5) is an alternating path which begins at a free vertex. The cycle (1, x3, 3, x2, 2, x1, 1) is an alternating cycle. The edge (x4, 4) is vital.

Property 1 (Berge 1970) An edge belongs to some but not all maximum matchings iff, for an arbitrary maximum matching M, it belongs to either an even alternating path which begins at a free vertex, or an even alternating cycle.

From this property we can find, for an arbitrary matching M which covers X, every edge which belongs to no matching covering X. These are the edges which belong neither to M (they are not vital), nor to an even alternating path which begins at a free vertex, nor to an even alternating cycle.

Proposition 1 Given a bipartite graph G = (X, Y, E) with a matching M which covers X, and the graph G0 = (X, Y, Succ) obtained from G by orienting its edges with the function:
  for all x in X: Succ(x) = {y in Y / (x, y) in M}
  for all y in Y: Succ(y) = {x in X / (x, y) in E - M}
we have the two following properties:
1) Every directed cycle of G0 corresponds to an even alternating cycle of G, and conversely.
2) Every directed simple path of G0 which begins at a free vertex corresponds to an even alternating path of G which begins at a free vertex, and conversely.

Proof
If we ignore the parity, it is obvious that the proposition is true. In the first case, since G is bipartite it does not have any odd cycle. In the second case, we must show that every directed simple path of G0 which begins at a free vertex corresponds to an even alternating path of G which begins at a free vertex. M is a matching which covers X, so there is no free vertex in X.
Since G is bipartite and since the path begins at a free vertex in Y, every odd directed simple path ends with a vertex in X. From this vertex, we can always find a vertex in Y which does not belong to the path, because every vertex in X has one successor and because a vertex in Y has at most one predecessor. Therefore from an odd directed simple path we can always build an even directed simple path. □

From this proposition we produce a linear algorithm (Algorithm 2) that deletes every edge which does not belong to any matching which covers X.

Algorithm 2: REMOVEEDGESFROMG(G, M(G))
% RE is the set of edges removed from G
% M(G) is a matching of G which covers X
% The function returns RE
begin
1  Mark all directed edges in Go as "unused". Set RE to ∅.
2  Look for all directed edges that belong to a directed simple path which begins at a free vertex by a breadth-first search starting from the free vertices, and mark them as "used".
3  Compute the strongly connected components of Go. Mark as "used" any directed edge that joins two vertices in the same strongly connected component.
4  for each directed edge de marked as "unused" do
     set e to the corresponding edge of de
     if e ∈ M(G) then mark e as "vital"
     else
       RE ← RE ∪ {e}
       remove e from G
   return RE
end

Step 2 corresponds to point 2 of Proposition 1. Step 3 computes the strongly connected components of Go, because an edge joining two vertices in the same strongly connected component belongs to a directed cycle, and conversely. These edges belong to an even alternating cycle of G (cf. point 1 of Proposition 1). After this step the set A of all edges belonging to some but not all matchings covering X is known. The set RE of edges to remove from G is RE = E − (A ∪ M). This is done by step 4. The algorithm complexity is the same as the search for strongly connected components (Tarjan 1972), i.e., O(m + n) for a graph with m edges and n vertices.
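Algorithms 1 and 2 can be sketched in executable form. In the following Python sketch the function and variable names are mine, and Kuhn's simpler augmenting-path matching stands in for the Hopcroft–Karp algorithm that the complexity analysis assumes; the filter builds a matching covering the variables, orients the value graph as in Proposition 1, and keeps exactly the matching edges plus the edges on a path from a free vertex or inside a strongly connected component:

```python
from collections import deque

def max_matching(adj):
    """Kuhn's augmenting-path matching.  (The paper's bound assumes
    Hopcroft-Karp; this simpler method suffices for a sketch.)
    adj maps each variable to its set of values."""
    match_v = {}                                # value -> variable
    def augment(x, seen):
        for v in adj[x]:
            if v not in seen:
                seen.add(v)
                if v not in match_v or augment(match_v[v], seen):
                    match_v[v] = x
                    return True
        return False
    for x in adj:
        augment(x, set())
    return {x: v for v, x in match_v.items()}   # variable -> value

def scc(succ):
    """Kosaraju's algorithm; returns node -> component id."""
    order, seen = [], set()
    for u in succ:
        if u in seen:
            continue
        stack = [(u, iter(succ[u]))]
        seen.add(u)
        while stack:
            node, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(succ[w])))
                    break
            else:
                order.append(node)
                stack.pop()
    pred = {u: [] for u in succ}
    for u, ws in succ.items():
        for w in ws:
            pred[w].append(u)
    comp = {}
    for c, u in enumerate(reversed(order)):
        if u not in comp:
            todo = [u]
            comp[u] = c
            while todo:
                n = todo.pop()
                for w in pred[n]:
                    if w not in comp:
                        comp[w] = c
                        todo.append(w)
    return comp

def regin_filter(adj):
    """Algorithms 1 and 2 combined: returns the filtered domains,
    or None when no matching covers the variables (no solution)."""
    match = max_matching(adj)
    if len(match) < len(adj):
        return None
    values = set().union(*adj.values())
    # Orient Go as in Proposition 1: matched edges x -> v, free v -> x.
    succ = {('x', x): [('v', match[x])] for x in adj}
    for v in values:
        succ[('v', v)] = []
    for x in adj:
        for v in adj[x]:
            if v != match[x]:
                succ[('v', v)].append(('x', x))
    # Mark edges on a directed path from a free value vertex (BFS).
    used, free = set(), values - set(match.values())
    queue = deque(('v', v) for v in free)
    visited = set(queue)
    while queue:
        u = queue.popleft()
        for w in succ[u]:
            used.add((u, w))
            if w not in visited:
                visited.add(w)
                queue.append(w)
    # Mark edges joining two vertices of one strongly connected component.
    comp = scc(succ)
    for u, ws in succ.items():
        for w in ws:
            if comp[u] == comp[w]:
                used.add((u, w))
    # Keep matching edges and "used" edges; drop the rest.
    return {x: {v for v in adj[x]
                if v == match[x]
                or (('x', x), ('v', v)) in used
                or (('v', v), ('x', x)) in used}
            for x in adj}
```

On the instance x1, x2 ∈ {1, 2}, x3 ∈ {1, 2, 3}, the filter reduces the domain of x3 to {3}: values 1 and 2 of x3 belong to no matching covering {x1, x2, x3}.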
We have shown how, for one constraint of difference C, every edge which belongs to no matching which covers XC can be deleted. But a variable can be constrained by several constraints and it is necessary to propagate the deletions. In fact, let us consider a variable xi of XC; xi can be constrained by several constraints. Thus, a value of Di can be deleted for reasons independent of C. This deletion involves the removal of one edge from GV(C). So it is necessary to study the consequences of this modification of the GV(C) structure.

Propagation of deletions

The deletion of values for one constraint of difference can involve some modifications for the other constraints. And for the other constraints of difference we can do better than repeat the first algorithm, by using the fact that before the deletion a matching which covers X is known.

The propagation algorithm we propose has two sets as parameters. The first one represents the set of edges to remove from the bipartite graph, and the second the set of edges that will be deleted by the filtering. The algorithm needs a function, denoted by MATCHINGCOVERINGX(G, M1, M2), which computes a matching M2, which covers X, from a matching M1 which is not maximum. It returns true if M2 exists and false otherwise. The new filtering is represented by Algorithm 3.

Algorithm 3: DIFF-PROPAGATION(G, M(G), ER, RE)
% the function returns false if there is no solution
% G is a value graph
% M(G) is a matching which covers XC
% ER is the set of edges to remove from G
% RE is the set of edges that will be deleted by the filtering
begin
1  computeMatching ← false
   for each e ∈ ER do
     if e ∈ M(G) then
       M(G) ← M(G) − {e}
       if e is marked as "vital" then return false
       else computeMatching ← true
     remove e from G
2  if computeMatching then
     if ¬ MATCHINGCOVERINGX(G, M(G), M′) then return false
     else M(G) ← M′
3  RE ← REMOVEEDGESFROMG(G, M(G))
   return true
end

It is divided into three parts.
First, it removes edges from the bipartite graph. Second, if necessary, it computes a new matching which covers XC. Third, it deletes the edges which do not belong to any matching covering XC. The algorithm returns false if ER contains a vital edge or if there does not exist a matching which covers XC.

Now, let us compute its complexity. Let m be the number of edges of G, and n be the number of vertices. Let us suppose that we must remove k edges from G (|ER| = k). The complexity of step 1 is in O(k). Step 2 involves, in the worst case, the computation of a matching covering XC from a matching of cardinality |M| − k. This computation has cost O(√k · m) (see Theorem 3 of (Hopcroft & Karp 1973)). The complexity of step 3 is in O(m). In the worst case, the edges of G are deleted one by one. Then the previous function will be called m times. So the global complexity is in O(m²). If p = |XC| and d is the maximum cardinality of the domains of the variables of XC, then the complexity is in O(p²d²) for one constraint of difference.

Tractable Problems 365

An example:
1. There are five houses, each of a different color and inhabited by men of different nationalities, with different pets, drinks and cigarettes.
2. The Englishman lives in the red house.
3. The Spaniard owns a dog.
4. Coffee is drunk in the green house.
5. The Ukrainian drinks tea.
6. The green house is immediately to the right of the ivory house.
7. The Old-Gold smoker owns snails.
8. Kools are being smoked in the yellow house.
9. Milk is drunk in the middle house.
10. The Norwegian lives in the first house on the left.
11. The Chesterfield smoker lives next to the fox owner.
12. Kools are smoked in the house next to the house where the horse is kept.
13. The Lucky-Strike smoker drinks orange juice.
14. The Japanese smokes Parliament.
15. The Norwegian lives next to the blue house.
The query is: Who drinks water and who owns the zebra?
This problem can be represented as a constraint network involving 25 variables, one for each of the five colors, drinks, nationalities, cigarettes and pets:

C1 red     B1 coffee   N1 Englishman   T1 Old-Gold       A1 dog
C2 green   B2 tea      N2 Spaniard     T2 Chesterfield   A2 snails
C3 ivory   B3 milk     N3 Ukrainian    T3 Kools          A3 fox
C4 yellow  B4 orange   N4 Norwegian    T4 Lucky-Strike   A4 horse
C5 blue    B5 water    N5 Japanese     T5 Parliament     A5 zebra

Each of the variables has domain values {1, 2, 3, 4, 5}, each number corresponding to a house position (e.g. assigning the value 2 to the variable horse means that the horse owner lives in the second house) (Dechter 1990). The assertions 2 to 15 are translated into unary and binary constraints. In addition, there are three ways of representing the first assertion, which means that the variables in the same cluster must take different values:

1. A binary constraint is built between any pair of variables of the same cluster ensuring that they are not assigned the same value. In this case we have a binary CSP.
2. Five 5-ary constraints of difference are built (one for each of the clusters). And the CSP is not binary.
3. The five 5-ary constraints of difference are represented by their value graphs. The space complexity of one constraint is in O(pd).

The first representation is generally used to solve the problem (Dechter 1990; Bessiere & Cordier 1993). From these three representations we can study the different results obtained from arc-consistency. They are given in Figures 4 and 5. The constraints corresponding to the assertions 2 to 15 are represented in extension. The constraints of difference among the variables of each cluster are omitted for clarity. For the first representation, the result of the filtering by arc-consistency is given in Figure 4.

Figure 4.

For the second representation, the filtering algorithm employed is generalized arc-consistency. Figure 5 shows the new results.
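To see concretely why representations 2 and 3 can prune more than representation 1, here is a small illustration (the instance and the brute-force algorithms are mine, not from the paper) comparing arc-consistency on the binary decomposition with generalized arc-consistency on the global constraint of difference:

```python
from itertools import product

def pairwise_ac(domains):
    """Arc-consistency on the pairwise x != y decomposition
    (representation 1): a value survives if, for every other
    variable, some different value exists in its domain."""
    doms = {v: set(d) for v, d in domains.items()}
    changed = True
    while changed:
        changed = False
        for v in doms:
            for w in doms:
                if v == w:
                    continue
                keep = {a for a in doms[v] if any(b != a for b in doms[w])}
                if keep != doms[v]:
                    doms[v] = keep
                    changed = True
    return doms

def global_alldiff(domains):
    """Generalized arc-consistency on one constraint of difference
    (what representations 2 and 3 compute), by brute force: a value
    survives only if it extends to a full injective assignment."""
    vs = list(domains)
    doms = {v: set() for v in vs}
    for tup in product(*(domains[v] for v in vs)):
        if len(set(tup)) == len(tup):            # all values distinct
            for v, a in zip(vs, tup):
                doms[v].add(a)
    return doms

domains = {'x': {1, 2}, 'y': {1, 2}, 'z': {1, 2, 3}}
print(pairwise_ac(domains))     # the binary decomposition prunes nothing
print(global_alldiff(domains))  # the global view reduces z to {3}
```

The binary decomposition leaves every domain untouched, while the global constraint removes values 1 and 2 from z, exactly the difference in pruning power observed between Figure 4 and Figure 5.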
It has pruned more values than the previous one. For the third representation, the filtering algorithm employed is arc-consistency for the binary constraints combined with the new filtering for the constraints of difference. The results obtained are the same as with the second method.

Figure 5.

Let us denote by a the number of binary constraints corresponding to the assertions 2 to 15, p the size of a cluster, c the number of clusters, d the number of values in a domain, and O(ed²) the complexity of arc-consistency² in binary CSPs. Let us compute the complexity for the three methods:

1. For the first representation, the number of binary constraints of difference added is in O(cp²). So the filtering complexity is O((a + cp²)d²).
2. In the second case, we can consider that the complexity is the sum of the lengths of all admissible tuples for the five 5-ary constraints. It is in O(p·d^p).
3. For the third method arc-consistency is in O(ad²) and the filtering for the constraints of difference is in O(cp²d²). The total complexity is in O(ad²) + O(cp²d²). It is equivalent to the first one.

The second filtering eliminates more values than the first one, but its complexity is higher. The representation and the algorithm proposed in this paper give pruning results equivalent to the second approach with the same complexity as the first one. So we can conclude that the new filtering is good for problems looking like the zebra problem.

Conclusion

In this paper we have presented a filtering algorithm for constraints of difference in CSPs. This algorithm can be viewed as an efficient way of implementing the generalized arc-consistency condition for a special type of constraint: the constraints of difference. It allows us to benefit from the pruning performance of the previous condition with a low complexity.
In fact, its space complexity is in O(pd) and its time complexity is in O(p²d²) for one constraint defined on a subset of p variables having domains of cardinality at most d. It has been shown to be very efficient for the zebra problem. And it has been successfully used to solve the subgraph isomorphism problem in the system RESYN (Vismara et al. 1992), a computer-aided design of complex organic synthesis plans.

Acknowledgments

We would like to thank particularly Christian Bessiere and also Marie-Catherine Vilarem, Tibor Kökény and the anonymous reviewers for their comments which helped improve this paper.

References

Alt, H.; Blum, N.; Mehlhorn, K.; and Paul, M. 1991. Computing a maximum cardinality matching in a bipartite graph in time O(n^1.5 √(m/log n)). Information Processing Letters 37:237-240.

Berge, C. 1970. Graphes et Hypergraphes. Paris: Dunod.

Bessiere, C., and Cordier, M. 1993. Arc-consistency and arc-consistency again. In Proceedings AAAI, 108-113.

Bessiere, C. 1994. Arc-consistency and arc-consistency again. Artificial Intelligence 65(1):179-190.

Dechter, R. 1990. Enhancement schemes for constraint processing: Backjumping, learning, and cutset decomposition. Artificial Intelligence 41:273-312.

Hopcroft, J., and Karp, R. 1973. An n^{5/2} algorithm for maximum matchings in bipartite graphs. SIAM Journal on Computing 2:225-231.

Lovász, L., and Plummer, M. 1986. Matching Theory. North Holland Mathematics Studies 121.

Mackworth, A. 1977. Consistency in networks of relations. Artificial Intelligence 8:99-118.

Mohr, R., and Henderson, T. 1986. Arc and path consistency revisited. Artificial Intelligence 28:225-233.

Mohr, R., and Masini, G. 1988a. Good old discrete relaxation. In Proceedings ECAI, 651-656.

Mohr, R., and Masini, G. 1988b. Running efficiently arc consistency. Syntactic and Structural Pattern Recognition F45:217-231.

Tarjan, R. 1972. Depth-first search and linear graph algorithms. SIAM Journal on Computing 1:146-160.
Van Hentenryck, P.; Deville, Y.; and Teng, C. 1992. A generic arc-consistency algorithm and its specializations. Artificial Intelligence 57:291-321.

Van Hentenryck, P. 1989. Constraint Satisfaction in Logic Programming. M.I.T. Press.

Vismara, P.; Regin, J.-C.; Quinqueton, J.; Py, M.; Laurenco, C.; and Lapied, L. 1992. RESYN: Un système d'aide à la conception de plans de synthèse en chimie organique. In Proceedings 12th International Conference Avignon'92, volume 1, 305-318. Avignon: EC2.

²(Mohr & Masini 1988b) reduce this complexity to O(ed) for binary alldifferent constraints.
On the Inherent Level of Local Consistency in Constraint Networks

Peter van Beek
Department of Computing Science
University of Alberta
Edmonton, Alberta, Canada T6G 2H1
vanbeek@cs.ualberta.ca

Abstract

We present a new property called constraint looseness and show how it can be used to estimate the level of local consistency of a binary constraint network. Specifically, we present a relationship between the looseness of the constraints, the size of the domains, and the inherent level of local consistency of a constraint network. The results we present are useful in two ways. First, a common method for finding solutions to a constraint network is to first preprocess the network by enforcing local consistency conditions, and then perform a backtracking search. Here, our results can be used in deciding which low-order local consistency techniques will not change a given constraint network and thus are not useful for preprocessing the network. Second, much previous work has identified conditions for when a certain level of local consistency is sufficient to guarantee a network is backtrack-free. Here, our results can be used in deciding which local consistency conditions, if any, still need to be enforced to achieve the specified level of local consistency. As well, we use the looseness property to develop an algorithm that can sometimes find an ordering of the variables such that a network is backtrack-free.

Introduction

Constraint networks are a simple representation and reasoning framework. A problem is represented as a set of variables, a domain of values for each variable, and a set of constraints between the variables. A central reasoning task is then to find an instantiation of the variables that satisfies the constraints.
Examples of tasks that can be formulated as constraint networks include graph coloring (Montanari 1974), scene labeling (Waltz 1975), natural language parsing (Maruyama 1990), temporal reasoning (Allen 1983), and answering conjunctive queries in relational databases.

In general, what makes constraint networks hard to solve is that they can contain many local inconsistencies. A local inconsistency is a consistent instantiation of k − 1 of the variables that cannot be extended to a kth variable and so cannot be part of any global solution. If we are using a backtracking search to find a solution, such an inconsistency can lead to a dead end in the search. This insight has led to the definition of conditions that characterize the level of local consistency of a network (Freuder 1985; Mackworth 1977; Montanari 1974) and to the development of algorithms for enforcing local consistency conditions by removing local inconsistencies (e.g., (Cooper 1989; Dechter & Pearl 1988; Freuder 1978; Mackworth 1977; Montanari 1974; Waltz 1975)). However, the definitions, or necessary and sufficient conditions, for all but low-order local consistency are expensive to verify or enforce, as the optimal algorithms are O(n^k), where k is the level of local consistency (Cooper 1989; Seidel 1983).

In this paper, we present a simple, sufficient condition, based on the size of the domains of the variables and on a new property called constraint looseness, that gives a lower bound on the inherent level of local consistency of a binary constraint network. The bound is tight for some constraint networks but not for others. Specifically, in any constraint network where the domains are of size d or less, and the constraints have looseness of m or greater, the network is strongly (⌈d/(d − m)⌉)-consistent¹.
Informally, a constraint network is strongly k-consistent if a solution can always be found for any subnetwork of size k in a backtrack-free manner. The parameter m can be viewed as a lower bound on the number of instantiations of a variable that satisfy the constraints. We also use the looseness property to develop an algorithm that can sometimes find an ordering of the variables such that all solutions of a network can be found in a backtrack-free manner.

¹⌈x⌉, the ceiling of x, is the smallest integer greater than or equal to x.

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

The condition we present is useful in two ways. First, a common method for finding solutions to a constraint network is to first preprocess the network by enforcing local consistency conditions, and then perform a backtracking search. The preprocessing step can reduce the number of dead ends reached by the backtracking algorithm in the search for a solution. With a similar aim, local consistency techniques can be interleaved with backtracking search. The effectiveness of using local consistency techniques in these two ways has been studied empirically (e.g., (Dechter & Meiri 1989; Gaschnig 1978; Ginsberg et al. 1990; Haralick & Elliott 1980; Prosser 1993)). In this setting, our results can be used in deciding which low-order local consistency techniques will not change the network and thus are not useful for processing a given constraint network. For example, we use our results to show that the n-queens problem, a widely used test-bed for comparing backtracking algorithms, has a high level of inherent local consistency. As a consequence, it is generally fruitless to preprocess such a network.
Second, much previous work has identified conditions for when a certain level of local consistency is sufficient to guarantee a solution can be found in a backtrack-free manner (e.g., (Dechter 1992; Dechter & Pearl 1988; Freuder 1982; 1985; Montanari 1974; van Beek 1992)). These conditions are important in applications where constraint networks are used for knowledge base maintenance and there will be many queries against the knowledge base. Here, the cost of preprocessing will be amortized over the many queries. In this setting, our results can be used in deciding which local consistency conditions, if any, still need to be enforced to achieve the specified level of local consistency.

Background

We begin with some needed definitions.

Definition 1 (binary constraint networks; Montanari (1974)) A binary constraint network consists of a set X of n variables {x1, x2, …, xn}, a domain Di of possible values for each variable, and a set of binary constraints between variables. A binary constraint or relation, Rij, between variables xi and xj, is any subset of the product of their domains (i.e., Rij ⊆ Di × Dj). An instantiation of the variables in X is an n-tuple (X1, X2, …, Xn), representing an assignment of Xi ∈ Di to xi. A consistent instantiation of a network is an instantiation of the variables such that the constraints between variables are satisfied. A consistent instantiation is also called a solution.

Mackworth (1977; 1987) defines three properties of networks that characterize local consistency of networks: node, arc, and path consistency. Freuder (1978) generalizes this to k-consistency.

Definition 2 (strong k-consistency; Freuder (1978; 1982)) A network is k-consistent if and only if given any instantiation of any k − 1 variables satisfying all the direct relations among those variables, there exists an instantiation of any kth variable such that the k values taken together satisfy all the relations among the k variables.
A network is strongly k-consistent if and only if it is j-consistent for all j ≤ k.

Node, arc, and path consistency correspond to strongly one-, two-, and three-consistent, respectively. A strongly n-consistent network is called globally consistent. Globally consistent networks have the property that any consistent instantiation of a subset of the variables can be extended to a consistent instantiation of all the variables without backtracking (Dechter 1992).

Following Montanari (1974), a binary relation Rij between variables xi and xj is represented as a (0,1)-matrix with |Di| rows and |Dj| columns by imposing an ordering on the domains of the variables. A zero entry at row a, column b means that the pair consisting of the ath element of Di and the bth element of Dj is not permitted; a one entry means the pair is permitted. A concept central to this paper is the looseness of constraints.

Definition 3 (m-loose) A binary constraint is m-loose if every row and every column of the (0,1)-matrix that defines the constraint has at least m ones, where 0 ≤ m ≤ |D| − 1. A binary constraint network is m-loose if all its binary constraints are m-loose.

Figure 1: (a) not 3-consistent; (b) not 4-consistent

Example 1. We illustrate some of the definitions using the well-known n-queens problem. The problem is to find all ways to place n queens on an n × n chess board, one queen per column, so that each pair of queens does not attack each other. One possible constraint network formulation of the problem is as follows: there is a variable for each column of the chess board, x1, …, xn; the domains of the variables are the possible row positions, Di = {1, …, n}; and the binary constraints are that two queens should not attack each other. The (0,1)-matrix representation of the constraints between two variables xi and xj is given by,

R_{ij,ab} = 1 if a ≠ b ∧ |a − b| ≠ |i − j|, and 0 otherwise,

for a, b = 1, …, n.
For example, consider the constraint network for the 4-queens problem. The constraint R12 between x1 and x2 is given by,

      0 0 1 1
R12 = 0 0 0 1
      1 0 0 0
      1 1 0 0

Entry R12,43 is 0, which states that putting a queen in column 1, row 4 and a queen in column 2, row 3 is not allowed by the constraint since the queens attack each other. It can be seen that the network for the 4-queens problem is 2-consistent since, given that we have placed a single queen on the board, we can always place a second queen such that the queens do not attack each other. However, the network is not 3-consistent. For example, given the consistent placement of two queens shown in Figure 1a, there is no way to place a queen in the third column that is consistent with the previously placed queens. Similarly the network is not 4-consistent (see Figure 1b). Finally, every row and every column of the (0,1)-matrices that define the constraints has at least 1 one. Hence, the network is 1-loose.

A Sufficient Condition for Local Consistency

In this section, we present a simple condition that estimates the inherent level of strong k-consistency of a binary constraint network. The condition is a sufficient but not necessary condition for local consistency.

It is known that some classes of constraint networks already possess a certain level of local consistency and therefore algorithms that enforce this level of local consistency will have no effect on these networks. For example, Nadel (1989) observes that an arc consistency algorithm never changes a constraint network formulation of the n-queens problem, for n > 3. Dechter (1992) observes that constraint networks that arise from the graph k-coloring problem are inherently strongly k-consistent. The following theorem characterizes what it is about the structure of the constraints in these networks that makes these statements true.
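Example 1's constraint matrices and looseness can be checked mechanically. In this sketch the function names `queens_constraint` and `looseness` are mine; the matrices follow directly from the (0,1)-matrix formula given above:

```python
def queens_constraint(i, j, n):
    """(0,1)-matrix of the constraint R_ij for the n-queens network:
    entry (a, b) is 1 iff a != b and |a - b| != |i - j|."""
    return [[1 if a != b and abs(a - b) != abs(i - j) else 0
             for b in range(1, n + 1)]
            for a in range(1, n + 1)]

def looseness(matrix):
    """Largest m such that the constraint is m-loose: the minimum
    number of ones over all rows and columns of the matrix."""
    return min([sum(row) for row in matrix] +
               [sum(col) for col in zip(*matrix)])

r12 = queens_constraint(1, 2, 4)
print(r12[3][2])   # entry R12_{4,3} is 0: the two queens attack
print(min(looseness(queens_constraint(i, j, 4))
          for i in range(1, 4) for j in range(i + 1, 5)))   # 1-loose
```

The minimum over all constraints of the 4-queens network is 1, confirming that the network is 1-loose: the tight rows come from the constraints between adjacent columns.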
Theorem 1 If a binary constraint network, R, is m-loose and all domains are of size |D| or less, then the network is strongly (⌈|D|/(|D| − m)⌉)-consistent.

Proof. We show that the network is k-consistent for all k ≤ ⌈|D|/(|D| − m)⌉. Suppose that variables x1, …, x_{k−1} can be consistently instantiated with values X1, …, X_{k−1}. To show that the network is k-consistent, we must show that there exists at least one instantiation Xk of variable xk that satisfies all the constraints,

(Xi, Xk) ∈ Rik,  i = 1, …, k − 1,

simultaneously. We do so as follows. The instantiations X1, …, X_{k−1} restrict the allowed instantiations of xk. Let vi be the (0,1)-vector given by row Xi of the (0,1)-matrix Rik, i = 1, …, k − 1. Let pos(vi) be the positions of the zeros in vector vi. The zero entries in the vi are the forbidden instantiations of xk, given the instantiations X1, …, X_{k−1}. No consistent instantiation of xk exists if and only if pos(v1) ∪ … ∪ pos(v_{k−1}) = {1, …, |D|}. Now, the key to the proof is that all the vi contain at least m ones. In other words, each vi contains at most |D| − m zeros. Thus, if (k − 1)(|D| − m) < |D|, it cannot be the case that pos(v1) ∪ … ∪ pos(v_{k−1}) = {1, …, |D|}. (To see that this is true, consider the "worst case" where the positions of the zeros in any vector do not overlap with those of any other vector. That is, pos(vi) ∩ pos(vj) = ∅, i ≠ j.) Thus, if

k ≤ ⌈|D| / (|D| − m)⌉,

all the constraints must have a non-zero entry in common and there exists at least one instantiation of xk that satisfies all the constraints simultaneously. Hence, the network is k-consistent. □

Theorem 1 always specifies a level of local consistency that is less than or equal to the actual level of inherent local consistency of a constraint network. That is, the theorem provides a lower bound.
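The bound of Theorem 1 is a one-line computation. This sketch (the function name is mine) reproduces the ⌈n/3⌉ prediction for n-queens networks, where every row and column of a constraint has at least n − 3 ones, and the exact level k for graph k-coloring networks:

```python
from math import ceil

def consistency_lower_bound(d, m):
    """Theorem 1: an m-loose network with domains of size at most d
    is strongly ceil(d / (d - m))-consistent (assumes m < d)."""
    return ceil(d / (d - m))

# n-queens: looseness is at least n - 3, so the predicted level
# is ceil(n / 3).
for n in (4, 7, 10):
    print(n, consistency_lower_bound(n, n - 3))

# graph k-coloring: d = k and m = k - 1 give exactly level k.
print(consistency_lower_bound(5, 4))
```

For n = 4, 7, 10 this prints levels 2, 3 and 4, matching the predicted values discussed for the n-queens networks, and level 5 for graph 5-coloring.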
Graph coloring problems provide examples where the theorem is exact, whereas n-queens problems provide examples where the theorem underestimates the true level of local consistency.

Example 2. Consider again the well-known n-queens problem discussed in Example 1. The problem is of historical interest but also of theoretical interest due to its importance as a test problem in empirical evaluations of backtracking algorithms and heuristic repair schemes for finding solutions to constraint networks (e.g., (Gaschnig 1978; Haralick & Elliott 1980; Minton et al. 1990; Nadel 1989)). For n-queens networks, each row and column of the constraints has |D| − 3 ≤ m ≤ |D| − 1 ones, where |D| = n. Hence, Theorem 1 predicts that n-queens networks are inherently strongly (⌈n/3⌉)-consistent. Thus, an n-queens constraint network is inherently arc-consistent for n ≥ 4, inherently path consistent for n ≥ 7, and so on, and we can predict where it is fruitless to apply a low order consistency algorithm in an attempt to simplify the network (see Table 1). The actual level of inherent consistency is ⌊n/2⌋ for n ≥ 7. Thus, for the n-queens problem, the theorem underestimates the true level of local consistency.

Table 1: Predicted (⌈n/3⌉) and actual (⌊n/2⌋, for n ≥ 7) level of strong local consistency for n-queens networks

n       4  5  6  7  8  9  10  11  12
pred.   2  2  2  3  3  3  4   4   4
actual  2  2  2  3  4  4  5   5   6

The reason Theorem 1 is not exact in general and, in particular, for n-queens networks, is that the proof of the theorem considers the "worst case" where the positions of the zeros in any row of the constraints
However, given only the looseness of the constraints and the size of the domains, Theorem 1 gives as strong an estimation of the inherent level of local consistency as possible as examples can be given where the theorem is exact. Example 3. Graph k-colorability provides exam- ples where Theorem 1 is exact in its estimation of the inherent level of strong k-consistency. The constraint network formulation of graph coloring is straightfor- ward: there is a variable for each node in the graph; the domains of the variables are the possible colors, D = {l,...,k); and the binary constraints are that two adjacent nodes must be assigned different colors. As Dechter (1992) states, graph coloring networks are inherently strongly k-consistent but are not guaranteed to be strongly (k + l)- consistent. Each row and column of the constraints has m = IDI - 1 ones, where IDI = k. Hence, Theorem 1 predicts that graph k-colorability networks are inherently strongly k-consistent. Example 4. We can also construct examples, for all m < IDI - 1, where Theorem 1 is exact. For example, consider the network where, n = 5, the domains are D = {1,...,5}, and the binary constraints are given bY3 01111 00111 Rij = I 1 10011 , Ili<j<n, 11001 11100 and Rji = Rg, for j < i. The network is S-loose and therefore strongly S-consistent by Theorem 1. This is exact, as the network is not 4-consistent. We conclude this section with some discussion on what Theorem 1 contributes to our intuitions about hard classes of problems (in the spirit of, for exam- ple, (Cheeseman, Kanefsky, & Taylor 1991; Williams & Hogg 1992)). Hard constraint networks are in- stances which give rise to search spaces with many dead ends. The hardest networks are those where many dead ends occur deep in the search tree. Dead ends, of course, correspond to partial solutions that cannot be extended to full solutions. 
Thus, networks where the constraints are loose are good candidates to be hard problems, since loose networks have a high level of inherent strong consistency, and strong k-consistency means that all partial solutions are of at least size k.

Computational experiments we performed on random problems provide evidence that loose networks can be hard. Random problems were generated with n = 50, |D| = 5, …, 10, and p, q = 1, …, 100, where p/100 is the probability that there is a binary constraint between two variables, and q/100 is the probability that a pair in the Cartesian product of the domains is in the constraint. The time to find one solution was measured. In the experiments we discovered that, given that the number of variables and the domain size were fixed, the hardest problems were found when the constraints were as loose as possible without degenerating into the trivial constraint where all tuples are allowed. That networks with loose constraints would turn out to be the hardest of these random problems is somewhat counter-intuitive, as individually the constraints are easy to satisfy. These experimental results run counter to Tsang's (1993, p.50) intuition that a single solution of a loosely constrained problem "can easily be found by simple backtracking, hence such problems are easy," and that tightly constrained problems are "harder compared with loose problems." As well, these hard loosely-constrained problems are not amenable to preprocessing by low-order local consistency algorithms since, as Theorem 1 states, they possess a high level of inherent local consistency. This runs counter to Williams and Hogg's (1992, p.476) speculation that preprocessing will have the most dramatic effect in the region where the problems are the hardest.
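The random model used in the experiments can be sketched as follows; the paper does not specify the generator, so the function name, the seed handling, and the use of probabilities rather than integer percentages are my assumptions:

```python
import random

def random_network(n, d, p, q, seed=0):
    """Random binary network in the model of the experiments: a
    constraint between each pair of variables with probability p,
    and each value pair allowed (a one entry) with probability q.
    The paper's parameters p, q in 1..100 correspond to p/100 and
    q/100 here."""
    rng = random.Random(seed)
    constraints = {}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                constraints[(i, j)] = [[1 if rng.random() < q else 0
                                        for _ in range(d)]
                                       for _ in range(d)]
    return constraints

# Loose constraints (q close to 1) give a high Theorem-1 bound,
# which is one reason such instances resist preprocessing.
net = random_network(n=50, d=5, p=0.5, q=0.9)
print(len(net), "constraints generated")
```

Varying q from near 0 to near 1 sweeps the instances from tight to loose, which is the axis along which the reported hardness peak was observed.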
Backtrack-free Networks

Given an ordering of the variables in a constraint network, backtracking search works by successively instantiating the next variable in the ordering, and backtracking to try different instantiations for previous variables when no consistent instantiation can be given to the current variable. Previous work has identified conditions for when a certain level of local consistency is sufficient to ensure a solution can be found in a backtrack-free manner (e.g., (Dechter 1992; Dechter & Pearl 1988; Freuder 1982; 1985; Montanari 1974; van Beek 1992)). Sometimes the level of inherent strong k-consistency guaranteed by Theorem 1 is sufficient, in conjunction with these previously derived conditions, to guarantee that the network is globally consistent and therefore a solution can be found in a backtrack-free manner. Otherwise, the estimate provided by the theorem gives a starting point for applying local consistency algorithms.

In this section, we use constraint looseness to identify new classes of backtrack-free networks. First, we give a condition for a network to be inherently globally consistent. Second, we give a condition, based on a directional version of the looseness property, for an ordering to be backtrack-free. We also give an efficient algorithm for finding an ordering that satisfies the condition, should it exist. We begin with a corollary of Theorem 1.

Corollary 1 If a binary constraint network, R, is m-loose, all domains are of size |D| or less, and m > ((n − 2)/(n − 1))|D|, the network is globally consistent.

Proof. By Theorem 1, the network is strongly n-consistent if ⌈|D|/(|D| − m)⌉ ≥ n. This is equivalent to |D|/(|D| − m) > n − 1, and rearranging for m gives the result. □

As one example, consider a constraint network with n = 5 variables that has domains of at most size |D| = 10 and constraints that are 8-loose.
The network is globally consistent and, as a consequence, a solution can be found in a backtrack-free manner. Another example is networks with n = 5, domain sizes of |D| = 5, and constraints that are 4-loose. Global consistency implies that all orderings of the variables are backtrack-free orderings. Sometimes, however, there exists a backtrack-free ordering when only much weaker local consistency conditions hold. Freuder (1982) identifies a relationship between the width of an ordering of the variables and the level of local consistency sufficient to ensure an ordering is backtrack-free.

FINDORDER(R, n)
1. I ← {1, 2, ..., n}
2. for p ← n downto 1 do
3.   find a j ∈ I such that ⌈|D|/(|D| - m_ij)⌉ > w_j for each R_ij, i ∈ I and i ≠ j, where w_j is the number of constraints R_ij, i ∈ I and i ≠ j, and m_ij is the directional m-looseness of R_ij (if no such j exists, report failure and halt)
4.   put variable x_j at position p in the ordering
5.   I ← I - {j}

Definition 4 (width; Freuder (1982)) Let o = (x_1, ..., x_n) be an ordering of the variables in a binary constraint network. The width of a variable, x_i, is the number of binary constraints between x_i and variables previous to x_i in the ordering. The width of an ordering is the maximum width of all variables.

Theorem 2 (Freuder (1982)) An ordering of the variables in a binary constraint network is backtrack-free if the level of strong k-consistency of the network is greater than the width of the ordering.

Dechter and Pearl (1988) define a weaker version of k-consistency, called directional k-consistency, and show that Theorem 2 still holds. Both versions of k-consistency are, in general, expensive to verify, however. Dechter and Pearl also give an algorithm, called adaptive consistency, that does not enforce a uniform level of local consistency throughout the network but, rather, enforces the needed level of local consistency as determined on a variable by variable basis.
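The FINDORDER procedure above translates directly into code. This is a sketch under our reading of the partially garbled step-3 test (⌈|D|/(|D| - m_ij)⌉ > w_j, with directional m-looseness taken over the rows of R_ij); the dictionary-of-matrices encoding and the function names are our own:

```python
import math

def directional_looseness(matrix):
    # Minimum number of ones over the rows of the (0,1)-matrix.
    return min(sum(row) for row in matrix)

def find_order(n, d, constraints):
    """constraints maps (i, j) with i < j to a d x d (0,1)-matrix
    whose rows are indexed by values of x_i; R_ji is the transpose
    of R_ij.  Returns an ordering (list of variable indices) that
    satisfies the FINDORDER condition, or None on failure."""
    def matrix_for(i, j):  # R_ij with rows indexed by x_i's values
        if (i, j) in constraints:
            return constraints[(i, j)]
        if (j, i) in constraints:
            m = constraints[(j, i)]
            return [[m[b][a] for b in range(d)] for a in range(d)]
        return None
    order = [None] * n
    remaining = set(range(n))
    for p in range(n - 1, -1, -1):
        for j in sorted(remaining):  # break ties by lowest index
            mats = [matrix_for(i, j) for i in remaining
                    if i != j and matrix_for(i, j) is not None]
            w_j = len(mats)
            ok = True
            for m in mats:
                ml = directional_looseness(m)
                lvl = float("inf") if ml >= d else math.ceil(d / (d - ml))
                if lvl <= w_j:
                    ok = False
                    break
            if ok:
                order[p] = j
                remaining.discard(j)
                break
        else:
            return None  # step 3 failed: no such j exists
    return order
```

For instance, three variables with pairwise not-equal constraints over a 3-value domain (2-loose, so level 3) admit an ordering, while pairwise equality constraints (1-loose, level 2) make step 3 fail.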
We adapt these two insights, directionality and not requiring uniform levels of local consistency, to a condition for an ordering to be backtrack-free.

Example 5. Consider the network in Figure 2. The network is 2-consistent, but not 3-consistent and not 4-consistent. Freuder (1982), in connection with Theorem 2, gives an algorithm for finding an ordering which has the minimum width of all orderings of the network. Assuming that the algorithms break ties by choosing the variable with the lowest index, the minimal width ordering found is (x5, x4, x3, x2, x1), which has width 3. Thus, the condition of Theorem 2 does not hold. In fact, this ordering is not backtrack-free. For example, the partial solution x5 ← 1, x4 ← 3, and x3 ← 5 is a dead end, as there is no instantiation for x2. The ordering found by procedure FINDORDER is (x4, x3, x2, x1, x5), which has width 4. It can be verified that the condition of Theorem 3 holds. For example, w1, the width at variable x1, is 2, and the constraints R41 and R31 are both 3-loose, so ⌈|D|/(|D| - m_1)⌉ = ⌈5/(5 - 3)⌉ = 3 > w1 = 2. Therefore all solutions of the network can be found with no backtracking along this ordering.

R_ij =
  0 0 1 1 1
  1 0 0 1 1
  1 1 0 0 1
  1 1 1 0 0
  1 1 1 1 0 ,   i = 1, 2;  j = 3, 4

R_45 = ...

Definition 5 (directionally m-loose) A binary constraint is directionally m-loose if every row of the (0,1)-matrix that defines the constraint has at least m ones, where 0 ≤ m ≤ |D| - 1.

Theorem 3 An ordering of the variables, o = (x_1, ..., x_n), in a binary constraint network, R, is backtrack-free if ⌈|D|/(|D| - m_j)⌉ > w_j, 1 ≤ j ≤ n, where w_j is the width of variable x_j in the ordering, and m_j is the minimum of the directional looseness of the (non-trivial) constraints R_ij, 1 ≤ i < j.

Proof. Similar to the proof of Theorem 1.
□

R_ji = R_ij^T, j < i

Figure 2: Constraint network for which a backtrack-free ordering exists

A straightforward algorithm, FINDORDER, for finding such a backtrack-free ordering of the variables, should it exist, is given above.

Conclusions and Future Work

Local consistency has proven to be an important concept in the theory and practice of constraint networks. However, the definitions, or necessary and sufficient conditions, for all but low-order local consistency are expensive to verify or enforce. We presented a sufficient condition for local consistency, based on a new property called constraint looseness, that is straightforward and inexpensive to determine. The condition can be used to estimate the level of strong local consistency of a network. This in turn can be used in (i) deciding whether it would be useful to preprocess the network before a backtracking search, and (ii) deciding which local consistency conditions, if any, still need to be enforced if we want to ensure that a solution can be found in a backtrack-free manner. Finally, the looseness property was used to identify new classes of "easy" constraint networks.

A property of constraints proposed by Nudel (1983) which is related to constraint looseness counts the number of ones in the entire constraint. Nudel uses this count, called a compatibility count, in an effective variable ordering heuristic for backtracking search. We plan to examine whether m-looseness can be used to develop even more effective domain and variable ordering heuristics. We also plan to examine how the looseness property can be used to improve the average case efficiency of local consistency algorithms. The idea is to predict whether small subnetworks already possess some specified level of local consistency, thus potentially avoiding the computations needed to enforce local consistency on those parts of the network.

Acknowledgements.
Financial assistance was received from the Natural Sciences and Engineering Research Council of Canada.

References

Allen, J. F. 1983. Maintaining knowledge about temporal intervals. Comm. ACM 26:832-843.

Cheeseman, P.; Kanefsky, B.; and Taylor, W. M. 1991. Where the really hard problems are. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, 331-337.

Cooper, M. C. 1989. An optimal k-consistency algorithm. Artificial Intelligence 41:89-95.

Dechter, R., and Meiri, I. 1989. Experimental evaluation of preprocessing techniques in constraint satisfaction problems. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 271-277.

Dechter, R., and Pearl, J. 1988. Network-based heuristics for constraint satisfaction problems. Artificial Intelligence 34:1-38.

Dechter, R. 1992. From local to global consistency. Artificial Intelligence 55:87-107.

Freuder, E. C. 1978. Synthesizing constraint expressions. Comm. ACM 21:958-966.

Freuder, E. C. 1982. A sufficient condition for backtrack-free search. J. ACM 29:24-32.

Freuder, E. C. 1985. A sufficient condition for backtrack-bounded search. J. ACM 32:755-761.

Gaschnig, J. 1978. Experimental case studies of backtrack vs. Waltz-type vs. new algorithms for satisficing assignment problems. In Proceedings of the Second Canadian Conference on Artificial Intelligence, 268-277.

Ginsberg, M. L.; Frank, M.; Halpin, M. P.; and Torrance, M. C. 1990. Search lessons learned from crossword puzzles. In Proceedings of the Eighth National Conference on Artificial Intelligence, 210-215.

Haralick, R. M., and Elliott, G. L. 1980. Increasing tree search efficiency for constraint satisfaction problems. Artificial Intelligence 14:263-313.

Mackworth, A. K. 1977. Consistency in networks of relations. Artificial Intelligence 8:99-118.

Mackworth, A. K. 1987. Constraint satisfaction. In Shapiro, S. C., ed., Encyclopedia of Artificial Intelligence. John Wiley & Sons.
Maruyama, H. 1990. Structural disambiguation with constraint propagation. In Proceedings of the 28th Conference of the Association for Computational Linguistics, 31-38.

Minton, S.; Johnston, M. D.; Philips, A. B.; and Laird, P. 1990. Solving large-scale constraint satisfaction and scheduling problems using a heuristic repair method. In Proceedings of the Eighth National Conference on Artificial Intelligence, 17-24.

Montanari, U. 1974. Networks of constraints: Fundamental properties and applications to picture processing. Inform. Sci. 7:95-132.

Nadel, B. A. 1989. Constraint satisfaction algorithms. Computational Intelligence 5:188-224.

Nudel, B. 1983. Consistent-labeling problems and their algorithms: Expected-complexities and theory-based heuristics. Artificial Intelligence 21:135-178.

Prosser, P. 1993. Hybrid algorithms for the constraint satisfaction problem. Computational Intelligence 9:268-299.

Seidel, R. 1983. On the complexity of achieving k-consistency. Department of Computer Science Technical Report 83-4, University of British Columbia. Cited in: A. K. Mackworth 1987.

Tsang, E. 1993. Foundations of Constraint Satisfaction. Academic Press.

van Beek, P. 1992. On the minimality and decomposability of constraint networks. In Proceedings of the Tenth National Conference on Artificial Intelligence, 447-452.

Waltz, D. 1975. Understanding line drawings of scenes with shadows. In Winston, P. H., ed., The Psychology of Computer Vision. McGraw-Hill. 19-91.

Williams, C. P., and Hogg, T. 1992. Using deep structure to locate hard problems. In Proceedings of the Tenth National Conference on Artificial Intelligence, 472-477.
Divide and Conquer in Multi-agent Planning

Eithan Ephrati
Computer Science Department
University of Pittsburgh
Pittsburgh, PA
tantush@cs.pitt.edu

Jeffrey S. Rosenschein
Institute of Computer Science
The Hebrew University
Jerusalem, Israel
jeff@cs.huji.ac.il

Abstract

In this paper, we suggest an approach to multi-agent planning that contains heuristic elements. Our method makes use of subgoals, and derived sub-plans, to construct a global plan. Agents solve their individual sub-plans, which are then merged into a global plan. The suggested approach may reduce overall planning time and derives a plan that approximates the optimal global plan that would have been derived by a central planner, given those original subgoals. We consider two different scenarios. The first involves a group of agents with a common goal. The second considers how agents can interleave planning and execution when planning towards a common, though dynamic, goal.

Decomposition: Reducing Complexity

The complexity of a planning process is measured by the time (and space) consumed. Let b be the branching factor of the planning problem (the average number of new states that can be generated from a given state by applying a single operator), and let d denote the depth of the problem (the length of the optimal path from the initial state to the goal state). The time complexity of the planning problem is then O(b^d) (Korf 1987). In a multi-agent environment, where each agent can carry out each of the possible operators (possibly with differing costs), the complexity may be even worse. A centralized planner should consider assigning each operator to each one of the n agents. Thus, finding an optimal plan becomes O((n × b)^d). However, if the global goal can be decomposed into n subgoals (g_1, ..., g_n), the time complexity may be reduced significantly. Let b_i and d_i denote respectively the branching factor and depth of the optimal plan that achieves g_i. Then, as shown by Korf in (Korf 1987), if the subgoals are independent or serializable,¹ the central multi-agent planning time complexity can be reduced to Σ_i((n × b_i)^{d_i}), where b_i ≈ b/n and d_i ≈ d/n.

¹A set of subgoals is said to be independent if the plans that achieve them do not interact. If the subgoals are serializable then there exists an ordering among them such that achieving any subgoal in the series does not violate any of its preceding subgoals.

This phenomenon of reduced complexity due to the division of the search space can be exploited most naturally in a multi-agent environment. The underlying idea is to assign to each agent a subgoal and let that agent construct the plan that achieves it. Since agents plan in parallel, planning time is further reduced to max_i((n × b_i)^{d_i}). Moreover, if each agent is to generate its plan according to its own view (assuming that the available operators are common knowledge), then the complexity becomes max_i(b_i)^{d_i}. The global plan can then be constructed out of local plans that are based upon the agents' local knowledge. Unfortunately, unless the subgoals are independent or serializable, the plans that achieve the set of subgoals interfere, and conflicts (or redundant actions) may arise and need to be resolved.

In this paper we suggest a heuristic approach to multi-agent planning that exploits this phenomenon of decomposed search space. The essential idea is that the individual sub-plans serve to derive a heuristic function that is used to guide the search for the global plan. This global search is then done in the space of world states, which is pruned using the A* algorithm. Our method makes use of pre-existing subgoals. These subgoals are not necessarily independent, nor are they necessarily serializable.
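The gap between central multi-agent planning and decomposed planning is easy to feel numerically. The values below are purely illustrative, and we conservatively take b_i = b rather than the smaller b/n:

```python
# Illustrative values only: branching factor b, depth d, n agents.
n, b, d = 3, 4, 12

central = (n * b) ** d           # O((n*b)^d): central planner, full goal
d_i = d // n                     # subgoal depth d_i ~ d/n (clean decomposition)
decomposed = n * (n * b) ** d_i  # sum over n subgoals of (n*b_i)^{d_i}, b_i = b
parallel = b ** d_i              # max_i b_i^{d_i}: each agent plans its own subgoal

print(central, decomposed, parallel)
```

Even at these tiny sizes the decomposed costs are smaller by many orders of magnitude, which is the phenomenon the rest of the paper exploits.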
The separate agents’ sub-plans, each derived separately and in parallel, are ultimately merged into a unified, valid global plan. The suggested approach may reduce overall planning time while deriving the optimal global plan that would have been derived, given those original subgoals. In multi-agent environments this ap- proach also removes the need for a central planner that has global knowledge of the domain and of the agents involved. Our scenario involves a group 4. = (al, . . . , a,} of n agents. These agents are to achieve a global goal G. The global goal, G, has been divided into n subgoals W 1,***9 G,}), and formulated as a subgoal planning problem (i.e., the interrelationship among subgoals has been specified). The agents communicate as they con- struct a global plan. A Simple Example Consider a scenario in the slotted blocks world. As de- scribed in Figure 1 there are three agents (al, o2,u3) Collaboration 375 From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved. and 4 blocks (a,b,c,d) with lengths of 1,2,2, and 3 respectively. The world may be described by the fol- lowing relations: Clear(b)(-there is no object on b); On(b, x, V/H)(-b is located on block/location x ei- ther vertically (V) or horizontally (H)); At(x, Zoc)(- the left edge of object x (agent or block) is at Zoc). The functions r(b) and Z(b) return the region of b’s left edge, .and the length of b, respectively. We will use only the first letter of a predicate to denote it. Figure 1: An Arch in the Blocks World The available operators (described in a STRIPS-like fashion) are: Takei(b, x, y)- Agent i takes b from region/block x to region/bloc y: [cost: IZoc(=) - Zoc(y)l x Z(b), pre: C(b),C(y),A(e, a), del: C(Y)P+, x)4(4 x),O(h 2, Z)r add: C(x)90(b, Y9 z)4(%s Y),A(b~ Y)l Rotatei -i rotates b by *F: [cost: Z2(b), pre: C(b), A(ai, r(b)), del: O(b, X, z), add: O(b, X, z)] (.I? 
denotes V and vice versa)

Move_i(x, y): agent i goes from x to y: [cost: |x - y|, pre: A(a_i, x), del: A(a_i, x), add: A(a_i, y)]

The initial state is described in the left side of Figure 1. The agents are to construct an arch such as the one pictured in the right side of the figure. A straightforward division into subgoals is to first construct left and right columns (appropriately distant and aligned) and then put up a top. Given this a priori breakdown into subgoals, our agents are to go through a planning process that will result in satisfying the original goal.

Assumptions and Definitions

• The global goal, G, is a set of predicates, possibly including uninstantiated variables. g denotes any grounded instance of G (a set of grounded predicates that specifies a set of states). We assume that G is divided into n abstract subgoals (G_1, G_2, ..., G_n), such that there exists a consistent set of instances of these subgoals that satisfies G (∪_i g_i ⊨ G).

• In accordance with the possibly required (partial) order of subgoal achievement, we denote the preconditions for any plan, p_i, that achieves g_i by g_i^0 (which for most subgoals is simply the initial state).

• Each p_i is expressed by the set of the essential propositions that enable any sequence of operators that construct it. These propositions are partially ordered according to their temporal order in p_i.²

²Using SNLP (McAllester & Rosenblitt 1991) terminology, these are the propositions in the causal links that construct the "nonlinear abstraction" of p_i, partially ordered according to their safety conditions.

• For the merging/composition process to find the (optimal) global plan, it will, in general, be necessary to generate more than just one plan for some subgoals. d_i denotes the depth (radius) of the search that is needed so as to generate the sufficient number of sub-plans that achieve g_i. We assume that d_i is known ahead of time (for each g_i).³ P_i denotes the (sufficient) set of plans that is generated within this d_i-depth search.
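The Move_i schema above (the most cleanly legible of the three operators) can be written out as data plus a generic applier. A sketch only: the tuple encoding of ground atoms and the function names are ours, not the paper's:

```python
def move(agent, x, y):
    """Move_i(x, y) from the text: agent i goes from x to y."""
    return {
        "cost": abs(x - y),
        "pre":  {("A", agent, x)},
        "del":  {("A", agent, x)},
        "add":  {("A", agent, y)},
    }

def apply_op(state, op):
    """STRIPS-style application: the preconditions must hold in the
    state; then delete-list atoms are removed and add-list atoms
    asserted."""
    if not op["pre"] <= state:
        raise ValueError("preconditions unmet")
    return (state - op["del"]) | op["add"]

state = {("A", "a1", 3), ("C", "b")}
op = move("a1", 3, 1)
state = apply_op(state, op)
assert ("A", "a1", 1) in state and ("A", "a1", 3) not in state
```

Take_i and Rotate_i fit the same shape, with costs |loc(x) - loc(y)| × l(b) and l²(b) respectively.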
• Each agent has a cost function over the domain's operators. The cost of a_j's plan, c_j(p_j), is Σ_k c_j(op_k).

• Given that the set of propositions E holds (in some world state), F¹_allow(E) is defined to be the set of all propositions that can be satisfied by invoking at most one operator at that state (F¹_allow(E) = {l | ∃op[op(E) ⊨ l]}, where op(E) denotes the invocation of op at a state that satisfies E). Similarly, F²_allow(E) is the set of propositions that can be achieved by invoking at most two operators simultaneously (by two agents) given E, and Fⁿ_allow(E) is the set that can be achieved by at most n simultaneous actions.

The Process

At the beginning of the planning process each agent, i, is assigned (for the purposes of the planning process) one subgoal g_i. Given that subgoal, the agent derives P_i, the (sufficient) set of sub-plans that achieves it given some initial configuration g_i^0.

The significant savings in time and space complexity of the search is established by the decomposition of the search space and by the parallelism of the search. However, the primary phase of the subgoal technique is the process of merging the sub-plans that achieve the given subgoals. A sub-plan is constructed by an agent with only a local view of the overall problem. Therefore, conflicts may exist among agents' sub-plans, and redundant actions may also have been generated. Given the set of sub-plans, we are looking for a method to inexpensively merge them into an optimal global plan.

To do so we employ an iterative search. The underlying idea is the dynamic generation of alternatives that identifies the optimal global plan. At each step, all agents state additional information about the sub-plan of which they are in charge. The current set of candidate global plans is then expanded to comprise the new set of candidate global plans. The process continues
The process continues according to their safety conditions, and stated as prerequi- sites (preconditions in UCPOP’s terminology (Penberthy & Weld 1992)) (e.g., if step w has prerequisite On(z, B) and step 8 enables it by establishing On(A,B), the essential proposition is On(x, B) rather than On( A, B)). 3This unrealistic assumption is needed only for the com- pleteness of the planning process. However, using domain dependent knowledge, the corresponding d;‘s may be as- sessed heuristically. In general, the more the sub-plans will tend to interact (and the closer to optimal the solution needs to be) the deeper the d;‘s that are needed. 376 Distributed AI until the optimal plan is found. Plans are represented by the partially ordered sets of the essential proposi- tions that enable them. These sets of propositions are aggregated throughout the process. We use ordered propositions instead of sequences of operators for the following reasons. First, the con- structed sub-plans serve only to guide the heuristic search for the actual global multi-agent plan. The cat- t& multi-agent action to establish a proposition is de- termined only during the merging process itself. This is essential for the efficiency of the resulting multi-agent plan.4 Second, the propositions encode all the infor- mation needed for the heuristic evaluation. And third, by dealing with propositions we achieve more flexibil- ity (least commitment) in the merging process, both in the choice of operators and in their bindings. Note that the essential search method is similar to the search employed by progressive world-state plan- ners. In general, the search through the space of states is inferior to the search, as conducted by POCL plan- ners, through the space of plans (Minton, Bresina, & Drummond 1991). 
The reason is that any (nondeterministic) choice of action within the first method also enforces the timing of that action (and thus, a greater breadth of search is needed to ensure completeness, e.g., in the Sussman anomaly). However, given the individual sub-plans, our merging procedure need consider only a small number of optional expansions, among which the heuristic evaluation "foresees" most commitments that may result in backtracking. Thus, it becomes possible and worthwhile to avoid the causal-link-protection step of the POCL planners.

⁴For example, it might be the case that towards the achievement of his assigned subgoal a_i planned to perform op_a in order to establish proposition P, but in the multi-agent plan P will actually be established by a_j performing op_b. Therefore, what counts for the global plan is what is established, rather than how it is established.

To achieve that, the search method employs an A* algorithm where each path represents one optional global multi-agent plan. The heuristic function (f′ = g + h′) that guides the search is dynamically determined by the agents during the process. g is the actual cost of the partial path (multi-agent plan) that has already been constructed. h′ is the sum of the approximate remaining costs, h_i, that each agent assigns to that path, based on its own generated sub-plan. Since it is based upon an actually constructed plan, each individual estimate, h_i, is absolutely accurate in isolation. Thus, if the subgoals are independent, then the global heuristic function (Σ_i h_i) will be accurate, and the merging process will choose the correct (optimal) candidate for further expansion at each step of the process.

Unfortunately, since in general sub-plans will tend to interfere with one another, h′ is an underestimate (the individual estimates will turn out to be too optimistic). An underestimated heuristic evaluation is also desirable, since it will make the entire A* search
admissible, meaning that once a path to the global goal has been found, it is guaranteed to be the optimal one. However, due to overlapping constraints ("favor relations" (Martial 1990), or "positive" interactions), the global heuristic evaluation might sometimes be an overestimate. Therefore, the A* search for the optimal path would (in those cases) have to continue until the potential effect of misjudgment in the global heuristic evaluation fades away. In general, the more overlap appears in the individual sub-plans, the more additional search steps are needed.

More specifically, the agents go through the following search loop:⁵

1. At step k one aggregated set (of propositions), A^k_+, is chosen from all sets with minimal heuristic value, f′. This set (with its corresponding multi-agent plan) is the path currently being considered.

2. Each agent declares the maximal set of propositions, E^k_i, such that:
(a) These propositions represent some possible sequence of consecutive operators in the agent's private sub-plan, and all their necessary predecessors hold at the current node.
(b) The declaration is "feasible," i.e., it can be achieved by having each of the agents perform at most one action simultaneously with one another (E^k_i ⊆ Fⁿ_allow(A^k_+)).

3. All (set-theoretic) maximal feasible expansions of A^k_+ with elements of the agents' declarations are generated. [Each expansion, Ex(A^k_+), is one of the fixed points {I | (I ⊆ ∪_i E^k_i) ∧ (I ∪ Ex(A^k_+) ∈ Fⁿ_allow(A^k_+))}. Note that this is only a subset of the expansions that a "blind planner" would generate.] At this stage, based on the extended set of propositions, the agents construct additions to the ongoing candidate plans. Each expansion that was generated in the previous step induces a sequence of operations that achieves it. The generation of these sequences is discussed below.
4. All expansions are evaluated, in a central manner, so as to direct the search (i.e., to find the value, f′ = g + h′, of the A* evaluation function):
(a) The g component of each expansion is simply taken to be the cost of deriving it (the cost of the plan that is induced by the current path plus the additional cost of the multi-agent plan that derives the expansion).
(b) To determine the heuristic component, h′, each agent declares h_i, the estimate it associates with each newly-formed set of aggregated propositions. This is the cost it associates with completing its "private" sub-plan, given that the expansion is established. The h′ value is then taken to be the sum of these estimates (Σ_i h_i(Ex_j(A^k_+))).

⁵The set of all aggregated sets of propositions at step k is denoted by A^k (its constituent sets are denoted by A^k_j, where j is simply an index over those sets). A^k_+ denotes the set that has the best value according to the heuristic function at step k.
Fourth, the heuristic function is calculated only for maximally “feasible” alternatives (infeasible alternatives need not be considered). Theorem 1 Given Pi, the sub-plans that achieve each subgoal (gi), the merging algorithm will find the opti- mal multi-agent plan that achieves these subgoals. The process will end within O(q x d) steps where d is the length of the optimal plan, and q is a measure of the positive interactions between overlapping propositions. In comparison to planning by a central planner, the overall complezity of the planning process, O((nx b)d), is reduced to O(mq b? + bxnxqxd), where bfi M ( i) fk. Proof: The formal proofs of the theorems in this paper appear in (Ephrati 1993). Construction of the Global The multi-agent plan is constructed throughout the process (Step 3 of the algorithm). At this step, all the optimal sequences of operators are determined. We re- quire that the actual plan be constructed dynamically in order to determine the g value of each alternative. The construction of the new segments of plans is de- termined by the cost that agents assign to each of the required actions; each agent bids for each action that each expansion implies. The bid that each agent gives takes into consideration the actions that the agent has been assigned so far. Thus, the global minimal cost sequence can be determined. An important aspect of the process is that each ex- pansion of the set of propositions belongs to the FzIow of the already achieved set. Therefore, it is straightfor- ward to detect actions that can be performed in par- 378 Distributed AI allel. Thus the plan that is constructed is not just cost-efficient, but also time-efficient. There are several important tradeoffs to be made here in the algorithm, and the decision of the system designer will affect the optima&y of the resulting plan. 
First, it would be possible to use Best First Search instead of A* so as to first determine the entire set of propositions, and only then construct the induced plan. Employing such a technique would still be less time-consuming than global planning. Second, when the agents add on the next steps of the global plan, they could consider the (developing) global plan from its beginning to the current point when deciding on the least expensive sequence of additional steps. This will (eventually) result in a globally optimal plan, but at the cost of continually reevaluating the developing plan along the way. Alternatively, it is possible to save all possible combinations of the actions that achieve any Ex_j(A^k_+), and thus have a set of plans correspond to each expansion. Third, each agent declares E^k_i ⊆ Fⁿ_allow(A^k_+) to ensure maximal parallelism in the resulting global plan. However, agents may relate just to F¹_allow(A^k_+) and establish parallelism only after the global plan is fully constructed.

Back to the Example

Consider again the example. Assume that the agents' subgoals are (respectively): g_1 = {A(c, 1), O(c, 1, V), C(c)}, g_2 = {A(b, 3), O(b, 3, V), C(b)}, and g_3 = {A(d, 1), O(d, c, H)}. To simplify things we will use throughout this example F¹_allow(A^k_+) instead of Fⁿ_allow(A^k_+). The resulting multi-agent plan is illustrated in Figure 2.

Given these subgoals, the agents will generate the following sets of propositions:⁶

P_1 = {[C(a), A(c, 1), A(a_i, r(a))]^[0] ∪ [C(c)]^[1] ∪ [A(a_i, r(c))]^[2] ∪ [O(c, 1, V)]^[4]} (this ordered set corresponds to the plan (T_1(a, 2, 3), M_1(3, 1), R_1(c))).
Notice that there exists a positive relation between ~2% and ~3’s sets of propositions (both would need block b to be removed from on top of block d), but there is a possible conflict, slot 3, between their plans and al’s plan. At the first iteration, there is only one candidate set for expansion-the empty set. The aggregated set 6We underline propositions that, once satisfied, must stay valid throughout the process (e.g., propositions that construct the final subgoal). The region b denotes any re- gion besides r(b). W e use only the first letter of operators and predicates to denote them. The additional estimated cost of satisfying a subset of propositions appears in the superscript brackets. of declared DroDositions is: [A(c, l), C(O), e(b), C(3), A(ai, r(a)), A(uj, +)I- The (sole) expansion is fully satisfied by the initial state; therefore, g(A’) = 0, and f’(A’) is equal to its h’ value (that is, the sum of the individual estimate costs, which is 23, i.e., = 7 + 2 + 14, a heuristic overestimate of 4). t the second iteration, ~1 declares [C(c)], u2 de- -_- clares [O(b, 3, V)], and u3 may already declare [C(d)]. All declarations are in F&,(.,4’). Thus, Ex(A1) = [C(c), O(b, 3, V), C(d)]). These propositions can be achieved by Ti (a, 2,0) and Tj(b, 4,3). The bids that czl,a2 and u3 give to these actions are respectively [2,5], [4,2], and [6,4]. Therefore, u1 is “assigned” to block u and (~2 is assigned to block b while us remains unemployed. ‘l!he constructed plan is ((Tl(a, 2,0), T2(b, 4,3))) (where both agents perform in parallel), yielding a g value of 4. 
At the third iteration (A²_+ = [C(c), O(b, 3, V), C(d)]), a_1 declares [A(a_i, r(c))] and a_3 declares [A(a_k, r(d))]. According to the agents' bids, this expansion can best be achieved by ({M_1(0, 1), M_2(3, 4)}).

At the fourth iteration, only a_1 has a feasible expansion to the current best set, that is [O(c, 1, V)] (note that a_3 may not declare his final subgoal before a_2's and a_1's assigned subgoals are satisfied). The corresponding segment of the multi-agent plan is (R_1(c)).

Finally, at the fifth iteration, only a_3's assigned goal is not satisfied, and he declares [O(d, c, H)]. This last expansion is best satisfied by (T_2(d, 4, 1)). Thus, the overall cost is 19. Notice that the final goal is achieved without any physical contribution on the part of a_3.

Figure 2: The resulting multi-agent plan

Interleaved Planning and Execution

The multi-agent planning procedure is based on the incremental process of merging sub-plans. This attribute of the process makes it very suitable for scenarios where the execution of the actual plan is urgent. In such scenarios it is important that, in parallel to the planning process, the agents actually execute segments of the plan that has been constructed so far (Dean & Boddy 1988; Durfee 1990). We assume that there is some look-ahead factor, l, that specifies the number of planning steps that should precede the actual execution step(s). We also assume that each agent can construct the first l optimal steps (in terms of propositions) of its own sub-plan.
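The interleaved regime can be sketched on a toy domain. Below, each "sub-plan" is just a sequence of unit moves toward a per-agent numeric target, so that the plan-l-steps, execute, replan cycle is visible; the entire domain, names included, is invented for illustration and stands in for the real sub-planners and the merging procedure:

```python
def plan_prefix(current, target, l):
    """First l unit steps of a (toy) sub-plan toward target."""
    steps, c = [], current
    while c != target and len(steps) < l:
        step = 1 if target > c else -1
        steps.append(step)
        c += step
    return steps

def interleaved(state, goal, l):
    """Each round: every agent plans only the first l steps of its
    own sub-plan; the merged round is executed; then all replan.
    A dynamic goal would simply be re-read at the top of the loop."""
    rounds = 0
    while any(state[v] != goal[v] for v in goal):
        prefixes = {v: plan_prefix(state[v], goal[v], l) for v in goal}
        for v, steps in prefixes.items():  # execute the merged round
            state[v] += sum(steps)
        rounds += 1
    return state, rounds

final, rounds = interleaved({"x": 0, "y": 5}, {"x": 3, "y": 2}, l=2)
```

Because only the first l steps of each sub-plan are ever demanded, a change of goal between rounds invalidates at most the affected agents' prefixes, mirroring the modular replanning discussed in the text.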
Moreover, the process is flexible in response to such global changes, since they may be handled through the division of the new goal into subgoals. Thus, a change in the global goal may be reflected only in changes to several subgoals, and plan revision is needed only in several sub-plans. We can therefore use the planning algorithm in scenarios where planning and execution are interleaved, and goals may dynamically change. As in the previous scenario, the key element of the approach is a cost-driven merging process that results in a coherent global plan (of which the first l sets of simultaneous operators are most relevant), given the sub-plans. At each time step t, each agent i is assigned (for the purposes of the planning process) one task and derives (the first l steps of) p_i^t, the sub-plan that achieves it. Note that once i has been assigned g_i^t at any given t, the plan it derives to accomplish the subgoal stays valid (for the use of the algorithm) as long as g_i remains the same. That is, for any time t + k such that g_i^{t+k} = g_i^t, it holds that p_i^{t+k} = p_i^t. Thus, re-planning is modularized among agents; one agent may have to re-plan, but the others can remain with their previous plans. As in the previous scenario, at each step, all agents state additional information about the sub-plan of which they are in charge. The next l optimal steps are then determined, and the current configuration of the world, s^t, is changed to s^{t+1}. The process continues until all tasks have been accomplished (the global goal as of that specific time has been achieved). Since steps of the plan are executed in parallel to the planning process, the smaller the look-ahead factor is, the smaller the weight of the g component of the evaluation function becomes (and the more the employed search method resembles hill climbing). Therefore, the resulting multi-agent plan may only approximate the actual optimal global plan.
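The interleaved loop just described can be sketched as follows. This is a toy illustration under assumed interfaces: derive, merge, and execute are hypothetical stand-ins for the paper's sub-plan derivation, cost-driven merging, and parallel execution, and the counter "world" is invented for the demo:

```python
def interleaved_planning(agents, derive, merge, execute, done, world, l):
    """Interleave planning and execution with look-ahead factor l.

    Each round: (re)derive a sub-plan only for agents whose subgoal changed
    or whose plan ran out, merge the first l steps, execute them in parallel.
    """
    remaining = {}   # agent -> (subgoal, unexecuted remainder of its sub-plan)
    trace = []
    while not done(world):
        for a in agents:
            g = world["goals"][a]
            if a not in remaining or remaining[a][0] != g or not remaining[a][1]:
                remaining[a] = (g, derive(a, g, world))   # re-plan only if needed
        prefixes = {a: plan[:l] for a, (g, plan) in remaining.items()}
        steps = merge(prefixes, l)
        world = execute(world, steps)
        executed = {a: sum(1 for step in steps for act in step if act[1] == a)
                    for a in agents}
        remaining = {a: (g, plan[executed[a]:])           # consume executed steps
                     for a, (g, plan) in remaining.items()}
        trace.append(steps)
    return trace

# Toy instantiation: each agent must raise its counter to a goal value.
def derive(a, g, world):
    return [("inc", a)] * max(0, g - world["vals"][a])

def merge(prefixes, l):
    joint = []
    for i in range(l):                 # up to l joint (parallel) steps
        step = [p[i] for p in prefixes.values() if i < len(p)]
        if step:
            joint.append(step)
    return joint

def execute(world, joint_steps):
    vals = dict(world["vals"])
    for step in joint_steps:
        for _, a in step:
            vals[a] += 1
    return {"vals": vals, "goals": world["goals"]}

def done(world):
    return all(world["vals"][a] >= g for a, g in world["goals"].items())

world0 = {"vals": {"a1": 0, "a2": 0}, "goals": {"a1": 3, "a2": 1}}
trace = interleaved_planning(["a1", "a2"], derive, merge, execute, done, world0, l=2)
```

Note how the caching mirrors the paper's observation: since a2's goal never changes, its sub-plan is derived once and reused, while only the consumed prefix is dropped each round.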
Theorem 2 Let the cost effect of "positive" interactions among members of some subset, p, of the set of sub-plans, P^t, that achieves G^t be denoted by δ_p^+, and let the cost effect of "negative" interactions among these sub-plans be denoted by δ_p^-. Accordingly, let δ = max over subsets p of P^t of |δ_p^+ − δ_p^-|.⁷ We say that the multi-agent plan that achieves G^t is δ-optimal if it diverges from the optimal plan by at most δ. Then, at any time step t, employing the merging algorithm, the agents will follow a δ-optimal multi-agent plan that achieves G^t.

⁷The effect of heuristic overestimate (due to positive future interaction between individual plans) and the effect of heuristic underestimate (due to interference between individual plans) offset one another.

Collaboration 379

Conclusions and Related Work

In this paper, we presented a heuristic multi-agent planning framework. The procedure relies on an a priori division of the global goal into subgoals. Agents solve local subgoals, and then merge them into a global plan. By making use of the computational power of multiple agents working in parallel, the process is able to reduce the total elapsed time for planning as compared to a central planner. The optimality of the procedure depends on several heuristic aspects, but in general increased effort on the part of the planners can result in superior global plans. An approach similar to our own is taken in (Nau, Yang, & Hendler 1990) to find an optimal plan. It is shown there how planning for multiple goals can be done by first generating several plans for each subgoal and then merging these plans. The basic idea there is to try to build a global plan by repeatedly merging complete plans that achieve the separate subgoals and satisfy several restrictions. In our approach, there are no prior restrictions, the global plan is created incrementally, and agents do the merging in parallel.
In (Foulser, Li, & Yang 1992) it is shown how to handle positive interactions efficiently among different parts of a given plan. The merging process looks for redundant operators (as opposed to aggregating propositions) within the same grounded linear plan in a dynamic fashion. In (Yang 1992), on the other hand, it is shown how to handle conflicts efficiently among different parts of a given plan. Conflicts are resolved by transforming the planning search space into a constraint satisfaction problem. The transformation and resolution of conflicts is done using a backtracking algorithm that takes cubic time. In our framework, both positive and negative interactions are addressed simultaneously. Our approach also resembles the GEMPLAN planning system (Lansky & Fogelsong 1987; Lansky 1990). There, the search space is divided into "regions" of activity. Planning in each region is done separately, but an important part of the planning process within a region is the updating of its overlapping regions (while the planning process freezes). Our planning framework also relates to the approach suggested in (Wellman 1987). There too the planning process is viewed as a process of incremental constraint posting. A method is suggested for assigning preferences to sets of constraints (propositions in our terminology) that will direct the planner. However, the evaluation and comparison between alternatives is done according to the global view of the single planner, and is based on pre-defined dominance relations.

Acknowledgments

This work has been supported in part by the Air Force Office of Scientific Research (Contract F49620-92-J-0422), by the Rome Laboratory (RL) of the Air Force Material Command and the Defense Advanced Research Projects Agency (Contract F30602-93-C-0038), by an NSF Young Investigator's Award (IRI-9258392) to Prof. Martha Pollack, and in part by the Israeli Ministry of Science and Technology (Grant 032-8284).
References

Dean, T., and Boddy, M. 1988. An analysis of time-dependent planning. In Proceedings of the Seventh National Conference on Artificial Intelligence, 49-54.

Durfee, E. H. 1990. A cooperative approach to planning for real-time control. In Proceedings of the Workshop on Innovative Approaches to Planning, Scheduling and Control, 277-283.

Ephrati, E. 1993. Planning and Consensus among Autonomous Agents. Ph.D. Dissertation, The Hebrew University of Jerusalem, Jerusalem, Israel.

Foulser, D. E.; Li, M.; and Yang, Q. 1992. Theory and algorithms for plan merging. Artificial Intelligence 57:143-181.

Korf, R. E. 1987. Planning as search: A quantitative approach. Artificial Intelligence 33:65-88.

Lansky, A. L., and Fogelsong, D. S. 1987. Localized representation and planning methods for parallel domains. In Proceedings of the Sixth National Conference on Artificial Intelligence, 240-245.

Lansky, A. L. 1990. Localized search for controlling automated reasoning. In Proceedings of the Workshop on Innovative Approaches to Planning, Scheduling and Control, 115-125.

Martial, F. von. 1990. Coordination of plans in multiagent worlds by taking advantage of the favor relation. In Proceedings of the Tenth International Workshop on Distributed Artificial Intelligence.

McAllester, D., and Rosenblitt, D. 1991. Systematic nonlinear planning. In Proceedings of the Ninth National Conference on Artificial Intelligence, 634-639.

Minton, S.; Bresina, J.; and Drummond, M. 1991. Commitment strategies in planning: A comparative analysis. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, 259-265.

Nau, D. S.; Yang, Q.; and Hendler, J. 1990. Optimization of multiple-goal plans with limited interaction. In Proceedings of the Workshop on Innovative Approaches to Planning, Scheduling and Control, 160-165.

Penberthy, J., and Weld, D. 1992. UCPOP: A sound, complete, partial order planner for ADL.
In Proceedings of the Third International Conference on Knowledge Representation and Reasoning, 103-114.

Wellman, M. P. 1987. Dominance and subsumption in constraint-posting planning. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence, 884-889.

Yang, Q. 1992. A theory of conflict resolution in planning. Artificial Intelligence 58(1-3):361-393.

380 Distributed AI
Progressive Negotiation Among Distributed Heterogeneous Cooperating Agents

Taha Khedro
Department of Civil Engineering
Stanford University
Stanford, CA 94305-4020
khedro@cive.stanford.edu

Abstract

Progressive negotiation is a strategy for resolving conflicts among distributed heterogeneous cooperating agents. This strategy aims at minimizing backtracking to previous solutions and provably ensures consistency of agents' distributed solutions and convergence on a globally-satisfiable solution. The progressive negotiation strategy is enforced by a task-independent agent called Facilitator, which coordinates and controls the interaction of cooperating agents. The interaction of cooperating agents includes the communication of messages, the identification of conflicts, and the negotiation of conflicts as a way to resolve them. In this paper, we formally present our conceptualization of cooperating agents and their interaction via the facilitator. We next discuss the conflict types identified by agents and then present the progressive negotiation strategy for resolving conflicts. We then present two theorems that discuss the consistency and convergence of distributed solutions ensured by the strategy. Finally, we conclude with a summary of this paper and remarks about the strategy.

Introduction

The topic of negotiation has been a subject of central interest in Distributed Artificial Intelligence (DAI) (Zlotkin & Rosenschein 1993). The word has been used in a variety of ways, although it generally refers to communication mechanisms that improve coordination (Kuwabara & Lesser 1989; Conry, Meyer, & Lesser 1988). Negotiation procedures have included the exchange of partial global plans (Durfee 1988), the communication of information intended to alter other agents' goals (Sycara 1989), and the use of incremental suggestions leading to joint plans of action (Kraus & Wilkenfeld 1991).
Michael R. Genesereth
Department of Computer Science
Stanford University
Stanford, CA 94305-2140
genesereth@cs.stanford.edu

In this paper, we propose a negotiation strategy called Progressive Negotiation for resolving conflicts among distributed heterogeneous agents. The strategy aims at minimizing backtracking to previous solutions while agents are cooperating to reach a globally-consistent, satisfiable solution. We assume that cooperating agents have disparate knowledge and interact via a task-independent agent called Facilitator by sending and receiving messages, which are assertions and retractions of predicate logic sentences (Genesereth 1992). Each agent has a theory, which involves a vocabulary of predicate symbols, function symbols, and constant symbols; a set of predicate-logic axioms expressing the agent's task-specific knowledge; and another set of predicate-logic axioms expressing the agent's criteria constraints, which can be relaxed. Because of the nature of the tasks performed by cooperating agents, their theories overlap and subsets of the vocabularies are shared among them. Also, cooperating agents are allowed to have part of their vocabulary not shared with other agents. In addition, agents are allowed to have vocabularies that are related by a set of predicate logic axioms, provided in the facilitator. In this paper, we first formally describe cooperating agents and their interaction via the facilitator. We then formalize conflict types and introduce the strategy of progressive negotiation for resolving conflicts. Then, we present theorems that discuss the consistency of distributed solutions and the solution convergence of the progressive negotiation strategy. Finally, the paper concludes with a summary and a few remarks about the strategy.
Cooperating Agents

We consider that a cooperating agent a has knowledge K^a as a set of predicate logic axioms, a set of criteria constraints C^a as predicate logic axioms, and a database D^a as a set of ground predicate logic atoms, all expressed over a vocabulary X^a consisting of a set of predicate, function, and constant symbols. A cooperating agent a has authority to make a final decision over a vocabulary Y^a ⊆ X^a. Final decisions are those decisions that conclude a disagreement between agents. The goal of every cooperating agent a is to find a complete local solution G^a that is consistent with both its knowledge K^a and constraints C^a. Formally, this can be expressed as follows:

Collaboration 381
From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

G^a ∪ K^a ∪ C^a is consistent. (1)

At time t, agent a maintains a partial local solution, which consists of a set of predicate-logic atoms D_t^a. In the process of finding a solution, agent a generates a set of assertions and retractions of atoms V_t^a that is consistent with its knowledge K_t^a and constraints C_t^a, and updates its partial local solution to D_t^a. Agent a also updates its solution when it receives a set of messages reflecting assertions and retractions of predicate logic sentences. Formally, the solution update step can be expressed as follows:

∀ assertion(v) ∈ V_t^a : D_t^a = D_{t-1}^a ∪ v, and
∀ retraction(v) ∈ V_t^a : D_t^a = D_{t-1}^a − v. (2)

Agent Interaction

Cooperating agents interact via a task-independent agent called facilitator, which coordinates and controls the exchange of messages. The facilitator captures the interests of agents and performs various functions aimed at facilitating the exchange of assertions and retractions of predicate-logic sentences. In the event of receiving messages, the facilitator determines the appropriate recipient agents of the messages and forwards them accordingly.
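The solution update step (2) amounts to applying each assertion and retraction, in order, to the current atom set. A minimal sketch (the string representation of atoms and the ("assert"/"retract", atom) message form are assumptions for illustration, not the paper's message syntax):

```python
def update_solution(solution, messages):
    """Apply assertion/retraction messages to a partial local solution.

    solution : set of ground atoms (each atom represented as a string here)
    messages : list of ("assert", atom) or ("retract", atom) pairs
    Returns the updated solution; the input set is not modified.
    """
    updated = set(solution)
    for kind, atom in messages:
        if kind == "assert":
            updated.add(atom)          # D_t = D_{t-1} ∪ {v}
        elif kind == "retract":
            updated.discard(atom)      # D_t = D_{t-1} − {v}
        else:
            raise ValueError("unknown message kind: %r" % kind)
    return updated

D = {"on(a,b)", "clear(c)"}
D2 = update_solution(D, [("retract", "clear(c)"), ("assert", "on(c,a)")])
```

The same routine serves both cases in the text: applying an agent's own generated messages V and applying messages W received from the facilitator.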
In addition, it translates between vocabularies used by different agents in their exchange of sentences. The translation is achieved through a set of predicate-logic axioms R^φ defined over a subset of all agents' vocabularies, X^φ ⊆ X. Consider the group of agents Γ = {α, β, ..., ξ} that are cooperating on solving a problem defined by the agents' knowledge K^α, K^β, ..., K^ξ over a vocabulary X. The group of agents interact via the facilitator φ, which captures the agents' interests, expressed as sets of predicate logic axioms I^α, I^β, ..., I^ξ, and a set of translation axioms R^φ over X^φ. This can formally be expressed as follows:

X^φ ⊆ X = X^α ∪ X^β ∪ ... ∪ X^ξ,
K = K^α ∪ K^β ∪ ... ∪ K^ξ ∪ R^φ, and
I = I^α ∪ I^β ∪ ... ∪ I^ξ. (3)

At time t, when an agent ξ ∈ Γ generates messages V_t^ξ, it updates its current local solution to D_t^ξ and then communicates V_t^ξ to the facilitator. When the facilitator receives the set of messages V_t^ξ, it first deduces additional sentences based on the axioms R^φ. The result of this translation step is the set of sentences U_t^ξ, whose number is typically greater than that in V_t^ξ. This translation step can formally be expressed as follows:

U_t^ξ = sentences(V_t^ξ ∪ R^φ). (4)

Then the facilitator checks for the agents that are interested in the communicated sentences, U_t^ξ. For every δ ∈ Γ − {ξ}, if I^δ ∪ U_t^ξ is consistent, agent δ is interested and is added to the set of interested agents Δ. For every interested agent δ ∈ Δ, the facilitator forwards an appropriate set of messages W_t^δ. When agent δ receives the set of messages W_t^δ, it updates its current local solution to D_t^δ.

Conflict Types

Agent δ checks the consistency of the updated local solution D_t^δ with respect to its knowledge K_t^δ and constraints C_t^δ. If

D_t^δ ∪ K_t^δ ∪ C_t^δ is consistent, (5)

there is no conflict and agent δ accepts the messages. Conversely, if D_t^δ ∪ K_t^δ
∪ C_t^δ is inconsistent, (6)

there is a conflict and agent δ identifies the conflict as one of three types: critical conflict, non-critical conflict with authority, and non-critical conflict without authority. In this section, the three conflict types are formally discussed.

• Critical Conflict: a critical conflict is a conflict in which the updated solution based on the received messages is inconsistent with the agent's knowledge K_t^δ. Formally, a critical conflict can be expressed as follows:

D_t^δ ∪ K_t^δ is inconsistent. (7)

• Non-Critical Conflict with Authority: a non-critical conflict with authority is a conflict in which the updated solution is consistent with the agent's knowledge K_t^δ but inconsistent with the agent's constraints C_t^δ, and the vocabulary of each sentence in W_t^δ belongs to Y^δ, over which agent δ has authority. Formally, a non-critical conflict with agent δ having authority over the vocabulary can be expressed as follows:

D_t^δ ∪ K_t^δ is consistent,
D_t^δ ∪ C_t^δ is inconsistent, and
∀ message(w) ∈ W_t^δ : vocabulary(w) ∈ Y^δ. (8)

• Non-Critical Conflict without Authority: a non-critical conflict without authority is a conflict in which the updated solution is consistent with the agent's knowledge K_t^δ but inconsistent with the agent's constraints C_t^δ, and the vocabularies of the messages in W_t^δ do not belong to Y^δ, over which agent δ has authority. Formally, this can be expressed as follows:

D_t^δ ∪ K_t^δ is consistent,
D_t^δ ∪ C_t^δ is inconsistent, and
∀ message(w) ∈ W_t^δ : vocabulary(w) ∉ Y^δ. (9)

Progressive Negotiation: A Conflict Resolution Strategy

Conflict resolution is an essential requirement for cooperation among autonomous, intelligent, interacting agents (Adler et al. 1989).
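The three-way classification above, equations (5) through (9), can be sketched directly. The consistency check below is a deliberately toy stand-in (literal sets with a '~' negation prefix); a real implementation would call a theorem prover, and the treatment of a mixed-authority message set as "without authority" is a simplifying assumption:

```python
def consistent(*literal_sets):
    """Toy consistency check: a set of ground literals is inconsistent
    iff it contains both an atom and its negation ('~' prefix)."""
    pool = set().union(*literal_sets)
    return not any(("~" + lit) in pool for lit in pool if not lit.startswith("~"))

def classify_conflict(D, K, C, W, authority, vocab_of):
    """Classify an updated solution per equations (5)-(9).

    D, K, C   : solution atoms, knowledge axioms, criteria constraints
    W         : messages just received
    authority : vocabulary symbols the agent has authority over
    vocab_of  : maps a message to its vocabulary symbol
    """
    if consistent(D, K, C):
        return None                                   # (5): no conflict
    if not consistent(D, K):
        return "critical"                             # (7)
    if all(vocab_of(w) in authority for w in W):
        return "non-critical with authority"          # (8)
    return "non-critical without authority"           # (9), mixed cases included

vocab_of = lambda lit: lit.lstrip("~").split("(")[0]
kind = classify_conflict({"p(a)", "~q(b)"}, {"q(b)"}, set(), ["q(b)"],
                         {"q"}, vocab_of)
```

Here the received atom q(b) contradicts the agent's own ~q(b) in the knowledge-bearing part of the state, so the conflict is classified as critical.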
In conflict resolution, the role of negotiation has been emphasized as the focal point for conflict resolution in distributed problem solving for different domains (Durfee & Lesser 1987; Laasri, Laasri, & Lesser 1990; Lander & Lesser 1989). Our research has focused on developing a strategy called progressive negotiation for resolving conflicts that aims at minimizing backtracking to previous solutions. In this strategy, conflict resolution is carried out by agents. Depending on the type of conflict, negotiation takes place in an attempt to resolve the conflict. In this strategy, critical conflicts are always resolved because they result from the agent's knowledge, which must be satisfied in order to have a satisfiable solution. Non-critical conflicts are resolved by getting one of the agents to relax some of its violated criteria constraints in order for an agreement to be reached. The following is a formal treatment of how conflicts are resolved for the three types outlined in the previous section.

Critical Conflicts

When a critical conflict is identified, agent δ determines a set of axioms Q_t^δ ⊆ K_t^δ that caused the conflict and sends it to the facilitator. The facilitator in turn forwards appropriate axioms to the sending agent ξ and other agents in Δ interested in Q_t^δ. Formally, the violated axioms can be expressed as follows:

Q_t^δ = {q | q ∈ K_t^δ such that D_t^δ ∪ q is inconsistent}. (10)

Once the sending agent ξ receives the set of axioms Q_{t'}^δ, at time t', forwarded by the facilitator, it checks the consistency of the axioms Q_{t'}^δ with its knowledge K_{t'}^ξ. If Q_{t'}^δ ∪ K_{t'}^ξ is inconsistent, there is no solution that is consistent with the knowledge of both agents. If Q_{t'}^δ ∪ K_{t'}^ξ is consistent, however, then there could be a solution that is consistent with the knowledge of both agents. In this case, the agent's constraints are updated to C_{t'}^ξ, ensuring that C_{t'}^ξ ∪ Q_{t'}^δ is consistent. This update may involve relaxation of some constraints in C_t^ξ.
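Equation (10) selects exactly the knowledge axioms that the updated solution contradicts, and the receiving agent then relaxes any constraints that clash with them. Using a toy representation in which axioms are ground literals and inconsistency means containing an atom and its '~'-prefixed negation (an assumption for illustration; the paper works with full predicate logic), this can be sketched as:

```python
def inconsistent(literals):
    """A set of ground literals is inconsistent iff it contains an atom
    and its negation (written with a '~' prefix)."""
    s = set(literals)
    return any(("~" + l) in s for l in s if not l.startswith("~"))

def violated_axioms(D, K):
    """Equation (10): knowledge axioms that the updated solution contradicts."""
    return {q for q in K if inconsistent(D | {q})}

def relax_constraints(C, Q):
    """Drop constraints that contradict the violated axioms Q, so that the
    updated constraint set is consistent with Q (one simple relaxation policy)."""
    return {c for c in C if not inconsistent(Q | {c})}

# Hypothetical example literals, invented for the demo.
D = {"route(r1)", "~clearance(r1)"}
K = {"clearance(r1)", "budget(ok)"}
Q = violated_axioms(D, K)
C = {"~clearance(r1)", "cost(low)"}
C_relaxed = relax_constraints(C, Q)
```

Only clearance(r1) is contradicted by the solution, so it is the single violated axiom, and the one constraint clashing with it is relaxed away.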
After updating its constraints, agent ξ generates new messages V_{t'}^ξ and updates its local solution to D_{t'}^ξ in a way that ensures the consistency of the updated solution with its knowledge and updated constraints. Formally, we can write:

D_{t'}^ξ ∪ K_{t'}^ξ ∪ C_{t'}^ξ is consistent. (11)

The messages V_{t'}^ξ are then sent to the facilitator, which forwards appropriate messages to all interested agents. At time t'', agent δ receives the new messages W_{t''}^δ and updates its local solution to D_{t''}^δ. The new local solution, at time t'', for agent δ will provably be consistent with the agent's knowledge since K_{t''}^δ = K_t^δ. Formally, we can write:

D_{t''}^δ ∪ K_{t''}^δ is consistent. (12)

Non-Critical Conflicts with Authority

When a non-critical conflict is identified, agent δ determines a set of axioms P_t^δ ⊆ C_t^δ that caused the conflict and sends it to the facilitator. The facilitator in turn forwards appropriate axioms to the sending agent ξ and other agents in Δ interested in P_t^δ. Formally, the violated axioms can be expressed as follows:

P_t^δ = {p | p ∈ C_t^δ such that D_t^δ ∪ p is inconsistent}. (13)

Once the sending agent ξ receives the set of axioms P_{t'}^δ at time t', forwarded by the facilitator, it checks the consistency of the axioms P_{t'}^δ with its knowledge K_{t'}^ξ, knowing that K_{t'}^ξ = K_t^ξ. If P_{t'}^δ ∪ K_{t'}^ξ is inconsistent (i.e., the set of violated axioms and the agent's knowledge do not lead to a solution), agent ξ rejects the received set of axioms P_{t'}^δ, and they are sent back to agent δ, which relaxes its set of constraints to C_{t''}^δ such that D_{t''}^δ ∪ C_{t''}^δ is consistent, since D_{t''}^δ = D_t^δ. If P_{t'}^δ ∪ K_{t'}^ξ is consistent, however, then agent ξ relaxes some of its constraints to C_{t'}^ξ, if necessary, such that P_{t'}^δ ∪ C_{t'}^ξ is consistent. Then agent ξ generates new messages V_{t'}^ξ and updates its local solution to D_{t'}^ξ in a way that ensures the consistency of the updated solution with the agent's knowledge and constraints:

D_{t'}^ξ ∪ K_{t'}^ξ ∪ C_{t'}^ξ is consistent. (14)

The messages V_{t'}^ξ are then sent to the facilitator, which forwards them to all interested agents.
Agent δ receives the new messages W_{t''}^δ at time t'' and updates its local solution to D_{t''}^δ. The new local partial solution for agent δ will provably be consistent with the agent's knowledge and constraints, since K_{t''}^δ = K_t^δ and C_{t''}^δ = C_t^δ:

D_{t''}^δ ∪ K_{t''}^δ ∪ C_{t''}^δ is consistent. (15)

Non-Critical Conflicts without Authority

This case is similar to the case of non-critical conflicts with authority. The only difference is that when agent ξ receives the axioms P_{t'}^δ forwarded by the facilitator, it checks P_{t'}^δ ∪ K_{t'}^ξ ∪ C_{t'}^ξ, knowing that K_{t'}^ξ = K_t^ξ and C_{t'}^ξ = C_t^ξ. If P_{t'}^δ ∪ K_{t'}^ξ ∪ C_{t'}^ξ is inconsistent, agent ξ rejects the set of axioms, and they are sent back to agent δ, which relaxes its set of constraints to C_{t''}^δ such that D_{t''}^δ ∪ C_{t''}^δ is consistent, since D_{t''}^δ = D_t^δ. If P_{t'}^δ ∪ K_{t'}^ξ ∪ C_{t'}^ξ is consistent, however, then agent ξ generates new messages V_{t'}^ξ and updates its local solution to D_{t'}^ξ in a way that ensures the consistency of its knowledge and constraints:

D_{t'}^ξ ∪ P_{t'}^δ ∪ K_{t'}^ξ ∪ C_{t'}^ξ is consistent. (16)

Again, the messages V_{t'}^ξ are then sent to the facilitator, which forwards them to all interested agents. For agent δ, the new messages W_{t''}^δ are received at time t'', and the local solution is updated to D_{t''}^δ. The new local solution for agent δ will provably satisfy the agent's knowledge since K_{t''}^δ = K_t^δ and C_{t''}^δ = C_t^δ:

D_{t''}^δ ∪ K_{t''}^δ ∪ C_{t''}^δ is consistent. (17)

Solution Consistency and Convergence

In this section, we present two theorems for proving the consistency of distributed local solutions and the convergence on a common globally satisfiable solution under certain conditions for the progressive negotiation strategy. Before presenting these two theorems, we give a number of definitions that are used in the proofs.

Definition 1: Let ξ denote an agent from the group of agents, ξ ∈ Γ. The interests of agent ξ are said to be complete if and only if:

∀ x ∈ X^ξ, ∃ s ∈ I^ξ expressing interest in x. (18)

Definition 2: Let K^α, K^β, ..., K^ξ denote the knowledge for the group of agents Γ = {α, β, ...,
ξ}, and R^φ denote the translation axioms captured in the facilitator φ. The initial sets of axioms expressing agents' knowledge are said to be consistent if and only if:

K = K^α ∪ K^β ∪ ... ∪ K^ξ ∪ R^φ is consistent. (19)

Definition 3: Let C^α, C^β, ..., C^ξ denote the initial sets of constraints for the group of agents Γ = {α, β, ..., ξ}. The initial sets of constraints are said to be consistent if and only if:

C = C^α ∪ C^β ∪ ... ∪ C^ξ is consistent. (20)

Definition 4: Let Y^α, Y^β, ..., Y^ξ respectively denote the authorities of the group of agents Γ = {α, β, ..., ξ}. The authorities of the group of agents Γ are said to be exhaustive if and only if:

Y^α ∪ Y^β ∪ ... ∪ Y^ξ = X. (21)

Definition 5: Let Y^α, Y^β, ..., Y^ξ respectively denote the authorities of the group of agents Γ = {α, β, ..., ξ}. The authorities of the group of agents Γ are said to be disjoint (i.e., authorities over sets of interrelated vocabularies do not overlap) if and only if:

∀ ξ, ω ∈ Γ with Y^ξ ⊆ X^ξ and Y^ω ⊆ X^ω : Y^ξ ∩ Y^ω = ∅, and
∀ r ∈ R^φ over Z^φ ⊆ X^φ, ∀ ξ, ω ∈ Γ : Y^ξ ∩ Z^φ ≠ ∅ ⇒ Y^ω ∩ Z^φ = ∅. (22)

Consistency of Distributed Solutions

In this section, we present a theorem that states the consistency of distributed local solutions for the progressive negotiation strategy.

Theorem 1: For the group of agents Γ, if the interests of the agents are complete (which is usually the case), the progressive negotiation strategy guarantees the consistency of the distributed local solutions generated after every exchange among agents. Formally, the distributed local solutions after an exchange at time t_n can be expressed as follows:

D_{t_n} = D_{t_n}^α ∪ D_{t_n}^β ∪ ... ∪ D_{t_n}^ξ is consistent. (23)

Proof: Suppose that D_{t_n} = D_{t_n}^α ∪ D_{t_n}^β ∪ ... ∪ D_{t_n}^ξ is inconsistent. Then there must exist at least two local solutions D_{t_n}^α and D_{t_n}^β such that
did not receive a message about it or update its solution. This is not possible because it contradicts the solution update step of agent interaction (2). Therefore, there cannot be any two partial solutions such that DE u 9 is inconsistent. Thus, D,,, = DE u Df, u . ..u Dm is ‘p always consistent according to the progressive negotiation strategy. Convergence on a Gammon Solution In this section, we present a theorem that states the convergence conditions of the progressive negotiation strategy. Theorem 2: For the group of agents r, if the interests are complete (which is usually the case), and the sets of axioms expressing agents’ knowledge are consistent, whereas the set of initial axioms expressing agents’ constraints is not necessarily consistent, and the authorities are exhaustive and disjoint, then the progressive negotiation strategy guarantees convergence on a common solution that satisfies all agents’ knowledge, and a relaxed version of the initial constraints. Formally, the following must be proven: Dm u Km” u . . . v Ki u R@ is consistent. (2% Proof: Suppose that Dm u Km” u . . . u Ki u Rs is inconsistent. Thus, one can write: D,au...uD~uK~u...uK~uR”is inconsistent. (26) Given that Km” u . . . u K,$ u R$ is consistent as stated in the theorem conditions, and having proved that DE u . ..u Di is always consistent in Theorem 1, one can conclude that equation (26) can hold if and only if either of two possibilities holds: * There exists at least one agent 5 E r such that its local solution is inconsistent with the facilitator’s translation axioms: Di u R@ , or 0 There exist at least two agents c, w E r such that their local solutions are inconsistent with their knowledge: Di u Dl u Ki u Kz. Case 1: In order for Df, u R0 to be inconsistent, there must be sent or received messages (respectively Vi or Wi> that are inconsistent with R” since the local solution atoms are typically updates of atoms in Vi or Wi. 
Formally, one of the following must hold: V5 u R# is inconsistent, or Wm u R” is inconsistent. “5: (27) This is not possible because of the facilitator’s translation step expressed in equation (4) and the completeness of agents’ interests. Therefore, D$ u R” is consistent and there cannot be any agent whose solution does not satisfy the facilitator’s translation axioms. Case 2: Di u Dl u Ki u Kz can never be inconsistent because Di u DE is consistent from Theorem 1, K$, u Kl is consistent from the theorem conditions, and Di u Ki is consistent and Dz u Klr is consistent from equations (I 1, 12, 14, 15, 16, and 17) and other equations of the progressive negotiation strategy. Thus, there cannot be any agent whose local solution is inconsistent with other agents’ knowledge. Therefore, Dm u Km” u . . . u Ki u R@ is always consistent and the solution convergence of the progressive negitiation strategy is guaranteed. Summary and Concluding In this paper, we formally presented a conflict resolution strategy called Progressive Negotiation that guarantees the consistency of distributed solutions and convergence on a globally-satisfiable solution. The strategy involves communication of predicate-logic axioms to alter distributed agents’ local solutions incrementally to reach a globally- consistent satisfiable solution. It aims at minimizing backtracking to previous solutions by getting agents to communicate their violated axioms and thus to inform all agents involved in a conflict about those axioms, which ensures they will not commit that violation again. The strategy assumes that interacting agents exchange their messages via a task- independent agent called facilitator, which controls the exchange of messages in a way that ensures the satisfaction of agents’ knowledge and constraints with respect to their current local solutions. 
The theorems presented for proving convergence of the progressive negotiation strategy show the conditions under which a group of distributed cooperating heterogeneous agents is guaranteed to reach an agreement in the course of negotiation. It is possible that an agreement among agents can be reached in some cases even if they do not comply with all the convergence conditions. However, if these conditions are not followed, convergence is not guaranteed in every exchange.

References

Adler, M. R.; Davis, A. B.; Weihmayer, R.; and Worrest, R. W. 1989. Conflict-Resolution Strategies for Non-hierarchical Distributed Agents. Research Notes in Artificial Intelligence, Distributed Artificial Intelligence, Morgan Kaufmann Publishers, Inc., pages 139-161.

Conry, S.; Meyer, R.; and Lesser, V. 1988. Multiagent Negotiation in Distributed Planning. In Readings in Distributed Artificial Intelligence, A. Bond and L. Gasser, editors, pages 367-384. Morgan Kaufmann Publishers, Inc.

Durfee, E. H. and Lesser, V. R. 1987. Using Partial Global Plans to Coordinate Distributed Problem Solvers. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence, pages 857-883.

Durfee, E. H. 1988. Coordination of Distributed Problem Solvers. Kluwer Academic Publishers.

Genesereth, M. R. 1992. An Agent-Based Framework for Software Interoperability. In Proceedings of the DARPA Software Technology Conference, pages 359-366.

Kraus, S. and Wilkenfeld, J. 1991. Negotiations over Time in a Multi Agent Environment: Preliminary Report. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, pages 56-61.

Kuwabara, K. and Lesser, V. R. 1989. Extended Protocol for Multistage Negotiation. In Proceedings of the Ninth Workshop on Distributed Artificial Intelligence, pages 129-161.

Laasri, B.; Laasri, H.; and Lesser, V. R. 1990. Negotiation and Its Role in Cooperative Distributed Problem Solving.
In Proceedings of the Tenth AAAI International Workshop on Distributed Artificial Intelligence.

Lander, S. and Lesser, V. R. 1989. A Framework for the Integration of Cooperative Knowledge-Based Systems. In Proceedings of the IEEE International Symposium on Intelligent Control, Albany, NY.

Sycara, K. P. 1989. Argumentation: Planning Other Agents' Plans. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 517-523.

Zlotkin, G. and Rosenschein, J. S. 1993. A Domain Theory for Task Oriented Negotiation. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, pages 416-422.
A Collaborative Parametric Design Agent

Daniel Kuokka and Brian Livezey
Lockheed Palo Alto Research Laboratories
Orgn 96-20, Bld 254F
3251 Hanover Street
Palo Alto, CA 94304
kuokka@aic.lockheed.com, livezey@aic.lockheed.com

Abstract

ParMan combines the use of agent communication protocols, constraint logic programming, and a graphical presentation interface to yield an intelligent parametric design tool supporting collaborative engineering. This provides one of the first complete, end-to-end applications of distributed knowledge-level communication among engineering tools, as envisioned by PACT (Cutkosky et al. 1993). In addition, it represents a significant extension of parametric design to a distributed setting. This paper describes the underlying technologies of ParMan, based on CLP(R), the Knowledge Query and Manipulation Language, and knowledge-based facilitation agents.

Introduction

Parametric design is a common and important class of design. Numerous systems and approaches have been developed to address this problem (Bouchard 1992; Frayman & Mittal 1987; Kolb 1989), but they have largely ignored a fundamental issue: the constraints typically come from multiple sources, making parametric design a collaborative task. ParMan is a distributed parameter management system that addresses this basic omission by coupling a parametric design tool, based on constraint logic programming (Jaffar & Lassez 1987; Jaffar et al. 1992), with an agent-based collaborative engineering infrastructure (Cutkosky et al. 1993; McGuire et al. 1993). The merged functionality is presented to the user in a highly intuitive fashion via a specialized graphical user interface. The result is an intelligent parametric design tool supporting collaborative engineering. Ironically, just as ParMan introduces collaboration into parametric design, it proposes a much needed process for the agent-based engineering community.
Work on agent infrastructures for engineering has brought previously isolated tools on-line, allowing a high degree of knowledge sharing among design tools. However, the technology for controlling such highly integrated tools has not kept pace with the integration technology. When should changes be transferred to the network of agents: as they are made, at some intermediate check point, upon file saving, or upon version update? ParMan provides an interface for controlling when various aspects of the local design are made public, in addition to providing a means of specifying and testing parametric constraints. Thus, ParMan provides one of the first systems to support the design process by utilizing distributed yet highly integrated tools. There is a growing body of work in collaborative engineering design systems and distributed AI, such as (Birmingham et al. 1993; Pan & Tenenbaum 1991; Petrie 1993; Saad & Maher 1993; Sriram 1993; Weber et al. 1992; Werkman 1992). ParMan is notable in that it emphasizes parametric design as the single formal model for collaboration. Work on intelligent software agents, such as (Dent et al. 1992; Maes & Kozierok 1993), is similar in spirit, but these systems tend to emphasize autonomy and learning instead of communication among agents. In this respect, the area of computer supported cooperative work (Reeves & Shipman 1992; Stefik et al. 1987) is very relevant. Throughout this paper, the term "agent" is used to refer to a semi-autonomous participant in a distributed design scenario. Agents typically are not stand-alone software systems, but consist of a tool and a human user. Agents are semi-autonomous in that they act spontaneously and dynamically according to local goals, yet they must contribute to the joint problem solving effort, so they cannot act without consideration of other agents. The remainder of this paper is organized as follows: the next section gives a user view of ParMan, focusing on an example.
Next, a system view of ParMan is presented, covering algorithms and implementation issues. Finally, we evaluate ParMan based on experiments in several domains.

User View

An engineer uses ParMan to define and analyze constraints over a set of parameters. As parameters and constraints are entered, ParMan interacts with other agents, maintaining a distributed constraint set that is presented back to the local user.

Collaboration 387
From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Figure 1: Each participant in the design task uses ParMan to communicate constraints on specific parameters (the figure shows parameters such as bus max force, payload mass, payload cost, payload slew rate, and bus length).

Other agents are usually ParMan users as well, but the underlying protocols are quite general, allowing any conformant software system to participate. Thus, ParMan may interact with a broad range of CAD tools, either autonomous or human guided, as long as they can communicate parameter constraints. ParMan is illustrated via an example from the Lockheed FSAT (Frugal Satellite) program, an effort to design a simple, inexpensive, reusable satellite. The satellite consists of a simple tubular bus with a payload. Imagine an Integrated Product Design (IPD) team working on the conceptual design of the satellite. The team consists of several participants: a systems engineer, a bus designer, and a payload designer. Each participant has specific interests, shown in Figure 1 (the Matchmaker, described in the Agent Interface section, is used by ParMan to route messages). ParMan aids in the conceptual design of the satellite as follows. First, the engineer uses the Project Selector to choose from a list of all projects that are of interest to members of the design team (see Figure 2).
The list is populated dynamically by the distributed set of ParMan users, consistency being maintained by the exchange of KQML messages such as advertise, subscribe, tell, and ask (see the Agent Interface section). Thus, all users are kept apprised of all projects and can collaborate on any or all of them. In the example at hand, each team member selects the FSAT project. Once a project is selected, each team member uses the Parameter Graph Editor to define a set of parameters of specific interest (also in Figure 2). The graphical structure allows the user to define components and associate parameters with those components. When a parameter or component is placed in the graph, ParMan asserts its existence to the other agents (users may also define private parameters). Upon receipt of a parameter assertion, ParMan adds it to the Parameter Graph Editor, using a different color from that used for parameters specified by the local user. This allows the user to identify forgotten or inconsistently-named parameters. From the Parameter Graph Editor, each user selects a subset of the parameters to be displayed in the Parameter Table, a worksheet of parameters relevant to his aspect of the design (see Figure 3). Notice that the hierarchical namespace implicit in the Parameter Graph Editor is carried over to the Parameter Table. The user can now begin defining constraints over the parameters in the Parameter Table. These constraints are expressed in ParMan's Constraint Language, which includes standard infix arithmetic constraints and an extensible set of predicates. Constraints are entered via the Constraint Editor (also shown in Figure 3), which is brought up by selecting a Parameter Table cell. Equality constraints (e.g., bus.length = 3.71) appear as entries directly in the Parameter Table; otherwise, "***" appears.
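The cell-display convention just described can be illustrated with a minimal Python sketch. All names and data here are invented for exposition; this is not ParMan's actual (Tcl/Tk) code.

```python
# Hypothetical illustration of the Parameter Table display rule described
# above; names and data are invented, this is not ParMan's actual code.

def cell_text(constraints, param):
    """Return the table-cell text for `param`.

    constraints: list of (parameter, operator, value) triples.
    An equality constraint shows its value directly; any other
    constraint on the parameter is summarized as "***".
    """
    relevant = [(op, value) for p, op, value in constraints if p == param]
    for op, value in relevant:
        if op == "=":
            return str(value)
    return "***" if relevant else ""

table = [("bus.length", "=", 3.71), ("bus.mass", "<", 100)]
print(cell_text(table, "bus.length"))   # -> 3.71
print(cell_text(table, "bus.mass"))     # -> ***
```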
As constraints are entered, ParMan looks for inconsistencies among the constraints and indicates them by displaying the associated cells in red. Constraints entered by the user are reflected in the center column, which represents the user's local constraints. There are also right and left columns (the left constraint column is collapsed in the figure), which represent constraints shared with other agents (those "to the right" and "to the left," as described below). The arrow columns between constraint columns control when and how constraints are propagated between columns. The arrow columns on the far right and left similarly control when and how constraints are advertised, subscribed, asked, and told to other agents. At any point, the user may choose to share his constraints with other agents. He can also ask to be kept informed about any constraints that other agents place on selected parameters. This is a two-step process: first the user turns on advertisement and subscription via the arrows in the right-hand column, which expresses ParMan's commitment to answer questions and receive updates about parameter constraints. Next, the user controls which constraints are actually shared by propagating (via an interior column of arrows) the constraints to the right-hand column. If advertisement is turned on, those constraints present in the right-hand value column are sent to the network of agents. If subscription is turned on, constraints arrive from other agents and are added to the set of constraints in the right-hand column, and the consistency of the expanded set of constraints is tested.

Figure 2: The Project Selector and Parameter Graph Editor permit users to collaboratively define projects and parameter spaces.

If there is an inconsistency among this distributed set of constraints, ParMan displays the cell for each parameter involved in the inconsistency in yellow (recall that if a local conflict
is detected, the cells turn red, since local conflicts are considered more serious). The entire set of constraints, those defined locally and those defined by other agents, can be observed by expanding the "right constraints" region in the Constraint Editor (see Figure 3). ParMan provides several tools to aid conflict resolution. The Constraint Grapher, shown in Figure 4, presents a graphical display of the parameters and constraints, with links connecting related parameters and constraints. Constraint nodes are presented in green if the constraint is satisfied. Red is used to indicate constraints that are in direct conflict. Yellow is used to indicate constraints that participate in a conflict, but cannot be directly implicated. In Figure 4, the shaded nodes would appear in red. The Constraint Grapher is valuable in isolating problems and finding solutions, even across multiple users. In our example, the bus designer notices that there is a problem centered around the strength of the bus (the red shading draws his attention to those constraints). He might be tempted to try different materials to overcome the strength limitation, but the bus.material.type node is not highlighted, indicating that choice of material has no bearing on the problem at hand. (The constraint checker determined that all other materials have problems as well: composite construction is too expensive and alloy construction is too heavy.) Noticing that a constraint on bus.length, supplied by the payload designer, does participate in the problem, the bus designer calls the payload designer to see if the constraint can be loosened. This is acceptable to the payload designer, so she loosens the constraint on the bus length, which is automatically communicated between the ParMan agents. The bus designer's ParMan checks the new constraint set, determines that there is no longer a conflict, and removes all the highlighting in the Parameter Table.
In this manner, ParMan not only provides automated constraint analysis, it also serves as an excellent visualization and collaboration tool. Thus far in our example, parameters have only been propagated to and from the "right." ParMan also includes a symmetrical "left" side, which is typically used for communication with a CAD tool. In our example, the user could have a rigid-body dynamics tool that, given the payload mass and slew rate, computes the force exerted on the mounting. In this case, the user would advertise mass and slew rate to the left, and subscribe to force from the left. (Since our example does not include such a dynamics tool, we approximate force via the simple formula shown.) ParMan includes several other facilities not illustrated in this example: a constraint solver, which attempts to find a closed form solution to the set of constraints; a Constraint Tester, which allows the user to drag individual constraints into a test region to isolate problems; a clique finder, which separates the constraints into independent sets; and a units converter, which is used implicitly in the example above, and can also be called explicitly. From a more global perspective, there are several modes of use spanning two dimensions: use with or without a tool, and with or without an external agent network. As illustrated above, ParMan can be used without a tool if the user has constraints for, or manually computes, all parameters. The addition of a tool simply automates aspects of this process. ParMan is useful in the absence of other agents if the user simply wants to trade off local parameters under local constraints. The addition of other agents makes the parametric design problem distributed and collaborative.
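The column-based sharing model described above — local constraints in a center column, shared constraints in right and left columns gated by propagation and advertisement arrows — can be sketched as follows. This is a simplified Python illustration showing only the right side (the left side is symmetric); all class and attribute names are invented for exposition.

```python
# Simplified sketch of the column-based constraint-sharing model described
# above; all names are hypothetical, not ParMan's actual code.

class ParamRow:
    def __init__(self):
        self.local = set()            # center column: the user's own constraints
        self.right = set()            # right column: constraints shared with others
        self.propagate_right = False  # interior arrow: copy local -> right
        self.advertise = False        # far-right arrow: publish the right column

    def receive(self, constraint):
        # A subscribed constraint arriving from another agent joins the
        # right-hand column, where consistency would then be re-tested.
        self.right.add(constraint)

    def update(self, outbox):
        if self.propagate_right:
            self.right |= self.local
        if self.advertise:
            outbox.extend(sorted(self.right))  # sent to the agent network

row = ParamRow()
row.local = {"bus.length < 4"}
row.propagate_right = True
row.advertise = True
out = []
row.update(out)
print(out)   # -> ['bus.length < 4']
```

Note that nothing leaves the agent until both the propagation and the advertisement arrows are on, mirroring the two-step sharing process described in the text.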
Figure 3: Parameters and simple constraints are displayed on the Parameter Table; more complicated constraints for each parameter are entered and viewed via the Constraint Editor.

ParMan may, in fact, be used in several modes throughout the life of a single project. In the early stages of a project, a single user might experiment locally with various parameter settings. As he becomes confident about some of his choices, he may advertise his parameters and begin to collaborate and negotiate with other designers with ParMan's support. As the design evolves, initial rough estimates may be refined by incorporating design tools into the process.

System View

ParMan is implemented in Tcl/Tk, utilizing CLP(R) for constraint computation, and the SHADE agent infrastructure for communications. Each of these is described below.

Agent Interface

The collaborative aspect of ParMan is built on research underway in the area of agent-based, knowledge-level engineering communications (Cutkosky et al. 1993; McGuire et al. 1993). Much of this work has focused on four basic problems: developing an adequate knowledge representation to serve as an interlingua; developing a vocabulary, or ontology, that defines the terms used in communication; developing an agent speech act protocol; and developing a set of facilitation agents that improve communication among end-user agents. ParMan is designed to work with one set of solutions to these problems, corresponding to the SHADE architecture (Kuokka et al. 1993). Specifically, ParMan uses KIF (Knowledge Interchange Format (Genesereth & Fikes 1992)) as its interlingua, although other representations such as Step/Express, ISO 10303, are being explored. KQML (Knowledge Query and Manipulation Language (Finin et al. 1992)) is used as the speech act language, which carries the embedded KIF sentence. Furthermore, ParMan assumes the existence of a Matchmaker (Kuokka et al.
1993), a facilitator that matches and routes advertisements and subscriptions among the set of cooperating agents. Even though ParMan assumes a very basic ontology for communicating parameter existence and constraints, it does not depend on the existence of an ontology for the engineering domain. Instead, the Project Selector and Parameter Graph Editor allow a distributed set of users to define a vocabulary of projects, components, and parameters interactively. The Constraint Language allows the users to define relationships among the components and parameters. This capability illustrates an incomplete but practical solution to the semantic unification problem (Petrie 1992), and is critical to successful knowledge-level communication among agents. Previous solutions have been centered around the definition of common data models (e.g., STEP) and formal shared ontologies (Gruber 1993). Both of these approaches require a high degree of a priori agreement among the participants, even before the specifics of the problem are known. By contrast, ParMan's Parameter Graph Editor provides a very dynamic, albeit human-aided, approach. Looking more closely at the messages exchanged among ParMan agents in the example, when the user advertises his willingness to supply constraints on a particular parameter (e.g., bus.mass), the following messages are sent:

(advertise :content (stream-about :language kif :content (mass (bus fsat))))

(advertise :content (subscribe :content (stream-about :language kif :content (mass (bus fsat)))))

Figure 4: The Constraint Grapher aids in the visualization of relationships among parameters and constraints.

Conversely, when the user wants to hear about constraints on a particular parameter, ParMan seeks to recruit all agents that might assert constraints on that parameter.
This is achieved via the following messages:

(recruit-all :content (stream-about :language kif :content (mass (bus fsat))))

(recruit-all :content (subscribe :content (stream-about :language kif :content (mass (bus fsat)))))

In each case, the first message (with top-level content stream-about) indicates a query/response modality. Stream-about is essentially a one-time query with the stipulation that replies are in the form of a stream. The second message (with top-level content subscribe) indicates a continuous update modality. These messages depend on a Matchmaker, which matches subscribes to stream-abouts and recruits to advertises. Upon finding a match, the Matchmaker sends the appropriate subscription or query on to the advertising agent. By depending on the Matchmaker, ParMan need not worry about sending the right message to the right place; it need only send its capabilities, interests, and assertions to the Matchmaker, which performs the appropriate message routing. Once a connection is made between an information producer and information supplier, specific constraints are then sent via messages of the form:

(tell :content (< (mass (bus fsat)) (* 100 kg)))

Constraints are withdrawn via messages of the form:

(untell :content (< (mass (bus fsat)) (* 100 kg)))

ParMan depends heavily upon the agent communications infrastructure being developed by projects such as SHADE. However, in return, it provides one of the first complete agent-based engineering tools to make extensive use of the infrastructure, providing valuable insights about useful protocols and necessary extensions.

Constraint Solver

The constraint checking of ParMan uses the CLP(R) language (Jaffar et al. 1992), which is an instance of the Constraint Logic Programming scheme defined by Jaffar and Lassez (Jaffar & Lassez 1987).
ParMan applies the basic CLP(R) engine in several different ways to implement the clique finding, conflict detection, and constraint simplification functions. Each of these is described below. Since engineering domains typically intermingle a number of different unit systems, all constraints are converted to SI units before being passed to any of the constraint solving functions. This ensures that values are related accurately. The clique finder may be used by the user to simplify the parameter space, and it is used by the conflict detection algorithm as a preprocessor. A clique is a group of all parameters such that each parameter is linked to each other parameter in the group, directly or transitively, by one or more constraints. Cliques are detected by simply traversing the graph formed by taking the parameters as nodes and the constraints as links. Conflict detection is invoked whenever a constraint is added, deleted, or modified. ParMan first forms two sets of potentially affected constraints. A constraint is said to be potentially affected by a modified constraint if it contains parameters that are in the same clique as one or more of the parameters that occur in the modified constraint. The first set of potentially affected constraints consists solely of local constraints. The second set is the union of the first set and the potentially affected right and left constraints. Once the two constraint sets have been formed, each is passed separately to the constraint solver. Given a constraint set, CLP(R) successively adds the constraints to a set of collected constraints, at each step determining whether the set of constraints has a solution. If there is no solution, the system backtracks, trying alternative solutions to some of the previous constraints. If no solution can be found in the first set of constraints, the corresponding parameters are marked as locally inconsistent (and appear in red in the Parameter Table).
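The clique finder just described amounts to computing connected components of the graph whose nodes are parameters and whose links are constraints. The following is a minimal Python sketch under that reading (hypothetical data structures, not ParMan's CLP(R)-based code).

```python
# Sketch of the clique finder as described in the text: parameters are nodes,
# constraints are links, and a "clique" here is a connected component found
# by graph traversal. Illustrative only; names are invented.
from collections import defaultdict

def find_cliques(constraints):
    """constraints: iterable of sets of parameter names linked by one constraint."""
    adj = defaultdict(set)
    for params in constraints:
        for p in params:
            adj[p] |= params - {p}
    seen, cliques = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                      # depth-first traversal
            p = stack.pop()
            if p in comp:
                continue
            comp.add(p)
            stack.extend(adj[p] - comp)
        seen |= comp
        cliques.append(comp)
    return cliques

cs = [{"bus.length", "bus.mass"}, {"bus.mass", "payload.mass"}, {"payload.cost"}]
cliques = find_cliques(cs)
# two components: {bus.length, bus.mass, payload.mass} and {payload.cost}
```

Restricting conflict detection to the clique of the modified constraint, as the text describes, keeps each consistency check local to one such component.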
If the first set is found to be consistent, the second set is tested by an identical method. If no solution can be found, the corresponding parameters are marked as globally inconsistent (and appear in yellow in the Parameter Table). When an inconsistent set of constraints is displayed in the Constraint Grapher or in the Constraint Tester, a slight variant of the above procedure is used to determine the status of each constraint. A single set of constraints is formed. Initially, all constraints are labeled as exonerated. An exoneration of a constraint is lifted if removal of that constraint from any inconsistent subset leaves the subset consistent. Next, all constraints whose exonerations have been lifted are labeled as indicted. An indictment is lifted if removal of that constraint from one of the inconsistent subsets leaves that subset inconsistent. All constraints that remain unlabeled are now labeled as accused. Exonerated constraints are displayed in green, accused constraints in yellow, and indicted constraints in red. Finally, constraint simplification proceeds in a manner similar to conflict detection. However, if the constraint set is found to be consistent, the CLP(R) dump predicate is used to produce the simplified set of constraints.

Evaluation and Conclusions

ParMan has been used on example problems in several domains: the FSAT domain, a bicycle design domain, and a meeting scheduling domain. In all cases, the users were geographically distributed (in separate offices). In each test, it is striking how the apparent complexity of the global problem was significantly reduced when distributed among multiple people. Distribution allows each user to focus on and ensure satisfaction of those constraints representing a single perspective. Once each user assumed his role, ParMan proved to be a very natural system for encoding and solving distributed constraint satisfaction problems.
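The exonerate/indict/accuse labeling described in the Constraint Solver section can be sketched as follows. This Python illustration substitutes a toy consistency test over single-variable bounds for ParMan's CLP(R) engine and enumerates inconsistent subsets by brute force; it is meant only to show the labeling logic, and all names are invented.

```python
# Toy sketch of the exonerate/indict/accuse labeling; ParMan uses CLP(R)
# rather than this brute-force enumeration over simple bounds constraints.
from itertools import combinations

def consistent(cs):
    """cs: list of (var, op, value) with op in {">=", "<="}."""
    lo, hi = {}, {}
    for var, op, v in cs:
        if op == ">=":
            lo[var] = max(lo.get(var, v), v)
        else:
            hi[var] = min(hi.get(var, v), v)
    return all(lo.get(k, hi[k]) <= hi[k] for k in hi)

def label(cs):
    # All inconsistent subsets (exponential; fine for a toy example).
    subsets = [set(s) for r in range(1, len(cs) + 1)
               for s in combinations(range(len(cs)), r)
               if not consistent([cs[i] for i in s])]
    labels = {}
    for i in range(len(cs)):
        fixes = [s for s in subsets
                 if i in s and consistent([cs[j] for j in s - {i}])]
        breaks = [s for s in subsets
                  if i in s and not consistent([cs[j] for j in s - {i}])]
        if not fixes:
            labels[i] = "exonerated"  # removal never restores consistency (green)
        elif breaks:
            labels[i] = "accused"     # participates, not directly implicated (yellow)
        else:
            labels[i] = "indicted"    # directly in conflict (red)
    return labels

cs = [("x", ">=", 5), ("x", "<=", 3), ("y", "<=", 10)]
print(label(cs))  # -> {0: 'indicted', 1: 'indicted', 2: 'exonerated'}
```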
Each user's constraints were easily entered, and their propagation to other users as external constraints was simple and understandable. Flagging of conflicts by color coding also proved to convey the essential information without complexity. Finally, the facility to find a solution proved quite beneficial. In fact, in early tests without this facility, the users expressed concern that, even though the system indicated that there were no conflicts, the solution was not readily apparent. Another problem was encountered with an earlier prototype which allowed only numerical parameters and algebraic constraints. This proved to be extremely limiting, as every domain had significant symbolic parameters (e.g., material type) and extrinsic constraints (e.g., tables of material cost). As a result, ParMan's constraint language was extended to permit symbolic constants and user-defined predicates. In many domains, ParMan's clique finder divides constraints into subsets of manageable computational complexity. However, in domains where these subsets become large, performance of constraint testing and satisfaction may become a problem. The practical limitations of ParMan, and techniques to improve constraint satisfaction efficiency, are under investigation. The underlying agent communication infrastructure used by ParMan has been invaluable to the overall system. KQML provides the language by which agents coordinate the exchange of knowledge, and the Matchmaker allows ParMan to route messages by content rather than by name of a responsible agent. Even though ParMan, in its current form, is quite promising, there are still several areas for improvement. The user interface does not yet allow direct editing of the constraint graph, and the interface to extrinsic constraints is still under development. Finally, significant user testing by engineers is needed to further gauge the applicability of the basic parametric design paradigm.
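The content-based routing just credited to the Matchmaker can be made concrete with a small sketch: interests are paired with capabilities by content, so agents never address each other by name. This is an illustrative abstraction, not the SHADE Matchmaker's real implementation; all names are hypothetical.

```python
# Illustrative abstraction of content-based matchmaking (hypothetical names;
# not the SHADE Matchmaker's actual implementation).

class Matchmaker:
    def __init__(self):
        self.advertised = {}   # content pattern -> advertising agent

    def advertise(self, agent, content):
        self.advertised[content] = agent

    def recruit(self, requester, content, forward):
        """Match a recruit against prior advertisements and forward the
        subscription/query on to the advertising agent."""
        provider = self.advertised.get(content)
        if provider is not None:
            forward(provider, requester, content)

routes = []
mm = Matchmaker()
mm.advertise("bus-designer", "(mass (bus fsat))")
mm.recruit("systems-engineer", "(mass (bus fsat))",
           lambda provider, requester, content:
               routes.append((provider, requester, content)))
print(routes)
# -> [('bus-designer', 'systems-engineer', '(mass (bus fsat))')]
```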
ParMan represents an integrated, cross-disciplinary contribution to three fields: artificial intelligence, engineering design, and human-computer interaction. From an AI perspective, it demonstrates and extends the use of CLP(R). Also, it represents one of the most significant applications of emerging agent communication techniques. In fact, the development of ParMan has resulted in several extensions to and clarifications of KQML and matchmaking. From an engineering perspective, it represents a highly interactive, intelligent concurrent engineering paradigm, with an emphasis on practicality as a key requirement. And from a human-computer interaction perspective, ParMan represents a general tool for knowledge-level computer-supported collaboration, with an emphasis on an intuitive interface to a complex task.

Acknowledgments

The authors would like to acknowledge the contributions of Larry Harada for his implementation of the Matchmaker and feedback on the ParMan interface, and Bill Mark for his insights and support of this work.

References

Birmingham, W.; Darr, T.; Durfee, E.; Ward, A.; and Wellman, M. 1993. Supporting mechatronic design via a distributed network of intelligent agents. In Proceedings of the AAAI Workshop on AI in Collaborative Design.

Bouchard, E. 1992. Concepts for a future aircraft design environment. In Proceedings of the Aerospace Design Conference.

Cutkosky, M.; Engelmore, R.; Fikes, R.; Gruber, T.; Genesereth, M.; Mark, W.; Tenenbaum, J.; and Weber, J. 1993. PACT: An experiment in integrating concurrent engineering systems. IEEE Computer 26(1).

Dent, L.; Boticario, J.; McDermott, J.; Mitchell, T.; and Zabowski, D. 1992. A personal learning apprentice. In Proceedings of the National Conference on Artificial Intelligence. AAAI Press.

Finin, T.; Weber, J.; Wiederhold, G.; Genesereth, M.; Fritzson, R.; McGuire, J.; McKay, D.; Shapiro, S.; Pelavin, R.; and Beck, C. 1992.
Specification of the KQML agent communication language. Official document of the DARPA Knowledge Sharing Initiative's External Interfaces Working Group. Technical Report 92-04, Enterprise Integration Technologies, Inc.

Frayman, F., and Mittal, S. 1987. Cossack: A constraints-based expert system for configuration tasks. In Knowledge Based Expert Systems in Engineering: Planning and Design. D. Sriram and R. Adey (eds.).

Genesereth, M., and Fikes, R. 1992. Knowledge Interchange Format, version 3.0 reference manual. Technical Report Logic-92-1, Computer Science Department, Stanford University.

Gruber, T. 1993. A translation approach to portable ontology specifications. Knowledge Acquisition 5(2).

Jaffar, J., and Lassez, J. 1987. Constraint logic programming. In Proceedings of the 14th ACM Symposium on Principles of Programming Languages. Association for Computing Machinery.

Jaffar, J.; Michaylov, S.; Stuckey, P.; and Yap, R. 1992. The CLP(R) language and system. ACM Transactions on Programming Languages and Systems (TOPLAS) 14(3).

Kolb, M. 1989. Investigation of constraint-based component modeling for knowledge representation in computer aided conceptual design. Technical report, Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics.

Kuokka, D.; McGuire, J.; Weber, J.; Tenenbaum, J.; Gruber, T.; and Olsen, G. 1993. SHADE: Knowledge-based technology for the re-engineering problem; annual report. Technical report, Lockheed Artificial Intelligence Center.

Maes, P., and Kozierok, R. 1993. Learning interface agents. In Proceedings of the National Conference on Artificial Intelligence. AAAI Press.

McGuire, J.; Kuokka, D.; Weber, J.; Tenenbaum, J.; Gruber, T.; and Olsen, G. 1993. SHADE: Technology for knowledge-based collaborative engineering. Concurrent Engineering: Research and Applications (1).

Pan, J., and Tenenbaum, J. 1991. Toward an intelligent agent framework for enterprise integration.
In Proceedings of the National Conference on Artificial Intelligence. AAAI Press.

Petrie, C. 1992. Introduction. In Enterprise Integration Modeling. MIT Press.

Petrie, C. 1993. The Redux' server. In Proceedings of the Intl. Conf. on Intelligent and Cooperative Information Systems.

Reeves, B., and Shipman, F. 1992. Supporting communication between designers with artifact-centered evolving information spaces. In CSCW.

Saad, M., and Maher, M. 1993. A computational model for synchronous collaborative design. In Proceedings of the AAAI Workshop on AI in Collaborative Design.

Sriram, D. 1993. Computer supported collaborative engineering. In Proceedings of the AAAI Workshop on AI in Collaborative Design.

Stefik, M.; Foster, G.; Bobrow, D.; Kahn, K.; Lanning, S.; and Suchman, L. 1987. Beyond the chalkboard: Computer support for collaboration and problem solving in meetings. Communications of the ACM 30(1).

Weber, J.; Livezey, B.; McGuire, J.; and Pelavin, R. 1992. Spreadsheet-like design through knowledge-based tool integration. International Journal of Expert Systems: Research and Applications 5(1).

Werkman, K. 1992. Multiple agent cooperative design evaluation using negotiation. In Artificial Intelligence in Design.
Exploiting Meta-Level Information in a Distributed Scheduling System*

Daniel E. Neiman, David W. Hildum, Victor R. Lesser and Tuomas W. Sandholm
Computer Science Department
University of Massachusetts
Amherst, MA 01003
DANN@CS.UMASS.EDU

Abstract

In this paper, we study the problem of achieving efficient interaction in a distributed scheduling system whose scheduling agents may borrow resources from one another. Specifically, we expand on Sycara's use of resource texture measures in a distributed scheduling system with a central resource monitor for each resource type and apply it to the decentralized case. We show how analysis of the abstracted resource requirements of remote agents can guide an agent's choice of local scheduling activities, not only in determining local constraint tightness, but also in identifying activities that reduce global uncertainty. We also exploit meta-level information to allow the scheduling agents to make reasoned decisions about when to attempt to solve impasses locally through backtracking and constraint relaxation and when to request resources from remote agents. Finally, we describe the current state of negotiation in our system and discuss plans for integrating a more sophisticated cost model into the negotiation protocol. This work is presented in the context of the Distributed Airport Resource Management System, a multi-agent system for solving airport ground service scheduling problems.

Introduction

The problem of scheduling resources and activities is known to be extremely challenging (Garey & Johnson 1979; Fox 1983; Smith, Fox, & Ow 1986; Sadeh 1991). The complexity increases when the scheduling process becomes dependent upon the activities of other concurrent schedulers. Such interactions between scheduling agents arise when, for example, agents must borrow resources from other agents in order to resolve local impasses or improve the quality of a local solution.
Distributed scheduling applications are not uncommon: for example, the classic meeting planning problem (Sen & Durfee 1993) can be considered as a distributed scheduling problem; the airport ground service scheduling (AGSS) problem we address in this paper is another; and similar problems may arise in factory floor manufacturing domains.

*This work was partly supported by DARPA contract N00014-92-J-1698 and NSF contracts CDA-8922572 and IRI-9208920. The content of this paper does not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred.

In distributed scheduling systems, problem-solving costs will likely increase because of the interaction among agents caused by the lending of resources. One method of increasing the quality of solutions developed by such multi-agent schedulers and minimizing the costs of backtracking is to allow agents to communicate abstracted versions of their resource requirements and capabilities to other agents. The use of this meta-level information allows the scheduling agents to develop models of potential interactions between their scheduling processes and those of other agents, where an interaction is defined as a time window in which the borrowing or lending of a resource might occur. We show how the identification of interactions affects the choice of scheduling heuristics, communication, and negotiation policies in a distributed scheduling system. We discuss our heuristics in the context of a specific testbed application, the Distributed Airport Resource Management System (DIS-ARM).

Related Work

The use of meta-level information to define the interactions between agents has been studied extensively by Durfee and Lesser via the use of partial global plans (Durfee & Lesser 1991). This work has been extended by Decker and Lesser (Decker & Lesser 1992; 1993) to incorporate more sophisticated coordination relationships.
According to this framework, we can view our detection of potential loan requests using texture measures as an identification of facilitating relationships, and our modification of the scheduling algorithm as an attempt to exploit this perceived relationship. The formulation of distributed constraint satisfaction problems as distributed AI was described by Yokoo (Yokoo, Ishida, & Kuwabara 1990); however, this work concentrated more on the problem of distributed backtracking than on coordinating agents. The problem of coordinating distributed schedulers has been studied extensively by Sycara and colleagues (Sycara et al. 1991). They describe a mechanism for transmitting abstractions of resource requirements (textures) between agents. Each agent uses these texture measures to form a model of the aggregate system demand for resources. This model is used to allocate resources using various heuristics. For example, a least-constraining-value heuristic is used to allocate resources based on the minimization of the probability that the reservation would conflict with any other. For each type of resource, one agent is assigned the task of coordinating allocations and determining whether requests can be satisfied. All resources of a given type are considered interchangeable and the centralized resource monitor does not need to perform significant planning to choose the most suitable resource; instead, its role is simply to ensure that each resource is allocated to no more agents than can be served by that resource during any given time period. We investigate a similar use of abstracted resource demands for a case in which centralized resource monitors are not possible, since resources of the same type may possess unique characteristics and agents possess proprietary information about local resources (such as current location and readiness).
[From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.]
Agents may respond to a request for a resource either by immediately satisfying it with a reservation, by denying it, or by performing local problem-solving actions to attempt to produce a suitable reservation. In our domain, we have found that Sycara's texture measures alone do not convey sufficient information to allow satisfactory scheduling. Their texture measures consist of a demand profile for each resource which represents, for each time interval, the sum of probabilities that resource requests will overlap that interval. These probabilities are based on the assumption that reservations can occur at any time within the requested interval. Assignment of resources is then performed using these probabilities to implement a least-constraining-value heuristic. These texture measures do not capture sufficient information regarding time-shift preferences of resource assignments within the specified interval. In our domain, resources may legally be assigned at any time within the interval between the earliest start time and the latest finish time, but for some activities there exist strong preferences as to which end of the interval the assignment should be biased toward. For example, when scheduling ground services for an airport, once a flight arrives, it is important to unload baggage as early as possible so that necessary transfers can be made to connecting flights. The shift preference can be determined by the assigning agent using domain knowledge, provided that it knows the nature of the task generating the request. Because this information is not captured in the texture measures, the heuristic described by Sycara et al. is likely to lead to poor schedules within the airport ground service scheduling domain.
Overview: The Distributed Dynamic Scheduling System

In order to test our approach to solving distributed resource-constrained scheduling problems (RCSPs), we have designed a distributed version of a reactive, knowledge-based scheduling system called DSS (the Dynamic Scheduling System) (Hildum 1994). DSS provides a foundation for representing a wide variety of real-world RCSPs. Its flexible scheduling approach is capable of reactively producing quality schedules within dynamic environments that exhibit unpredictable resource and order behavior. Additionally, DSS is equipped to manage the scheduling of shared tasks connecting otherwise separate orders, and to handle RCSPs that involve mobile resources with significant travel requirements. DSS is implemented as an agenda-based blackboard system (Erman et al. 1980; Carver & Lesser 1994) using GBB (the Generic Blackboard System) (Corkill, Gallagher, & Murray 1986). It maintains a blackboard structure upon which a developing schedule is constructed, and where the sets of orders and resources for a particular RCSP are stored. A group of knowledge sources is provided for securing the necessary resource reservations. These knowledge sources are triggered as the result of developments on the blackboard, namely the creation and modification of the service goals attached to all resource-requiring tasks. Triggered knowledge sources are placed onto an agenda and executed in the order of their priority. The Distributed Dynamic Scheduling System (DIS-DSS) maintains separate blackboard structures for each agent and provides communication utilities for transmitting requests and meta-level information between agents. Remote analogues of service goals, task structures, and other scheduling entities are created as needed to model the state of other agents.
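As an illustrative sketch (not DIS-DSS code; the class and method names are invented for this example), the agenda-based control loop described above, in which triggered knowledge sources run in priority order, might look like:

```python
import heapq

class Agenda:
    """Minimal agenda: triggered knowledge sources run highest-priority first."""

    def __init__(self):
        self._heap = []
        self._count = 0  # tie-breaker that preserves trigger order

    def trigger(self, priority, knowledge_source):
        # heapq is a min-heap, so negate the priority to pop highest first
        heapq.heappush(self._heap, (-priority, self._count, knowledge_source))
        self._count += 1

    def run(self):
        """Execute all queued knowledge sources; return their results in order."""
        executed = []
        while self._heap:
            _, _, ks = heapq.heappop(self._heap)
            executed.append(ks())
        return executed
```

In a blackboard system, `trigger` would be called as goals are created or modified on the blackboard; here it is exposed directly for simplicity.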
The information about other agents' schedules and commitments is incomplete and is limited to the content of goals, meta-level information, and those parts of the schedule to which the local agent itself has contributed. The approach we have taken towards distributing DSS is to view each agent as representing an autonomous organization possessing its own resources. It is this autonomous nature of the organizations that is the rationale for distributing the resource allocation problem. Although a centralized architecture might produce more efficient solutions, real-world considerations such as cost and ownership often lead to confederations in which information transfer regarding commitments and capabilities is limited. In this model, the primary relationship between agents is a commitment to exchange resources as needed and a willingness to negotiate with other agents to resolve impasses. This model of a decentralized group of agents performing independent tasks in a resource-constrained environment is similar to the architecture of Moehlman's Distributed Fireboss (Moehlman, Lesser, & Buteau 1992). We distinguish our work from Moehlman's by our use of meta-level information to control the decision process by which agents choose to resolve impasses locally, through backtracking and constraint relaxation, or through requests to remote agents. Because resources are owned by specific agents and possess unique characteristics regarding location and travel times that are known only to the owning agent, we cannot define central resource monitors responsible for allocating each type of resource. This, again, distinguishes our approach from that of Sycara et al. (Sycara et al. 1991). Agents requiring a resource must communicate directly with the agent owning a resource of that type and negotiate for its loan.
Collaboration 395
This architecture provides a rich domain for the study of agent coordination issues in a distributed environment; agents must be able to model the interactions of their tasks with those of neighboring agents closely enough to be able to determine which agents will be most likely to provide the desired resources at the lowest cost to both agents. This coordination requires local reasoning on the part of agents in order to determine how to cooperate efficiently with an acceptable level of communication and redundant computation.

Assumptions

In our work with DIS-DSS, we have made a number of assumptions about the nature of agents, schedules, and communication overheads.
- Agents are cooperative and will lend a resource if it is available.
- Agents will only request a resource from one agent at a time; this is to avoid the possibility of redundant computation and communication if multiple agents attempt to provide the resource, cf. (Sandholm 1993).
- Once agents have lent a resource to another agent, they will never renege on this agreement. This limits the ability of the system to perform global backtracking; we intend to eliminate this restriction in the next version of the system.
- Communication is asynchronous and can occur at any point during the construction of a local schedule; therefore requests may arrive before an agent has completely determined its own requirements for resources in the time window of interest.
- The cost of messages is largely in the processing and in the inherent delay caused by transmission; the amount of data within the message may be large, within limits.

Communication of Abstract Resource Profiles

Without information regarding other agents' abilities to supply missing resources, an agent may be unable to complete a solution, or may be forced to compromise the quality of its solution.
To allow agents to construct a model of global system constraints and capabilities, we have developed a protocol for the exchange and updating of resource profiles: summarizations of the agent's committed resources, available resources, and estimated future demand. Upon startup, each agent in DIS-DSS receives a set of orders to be processed. The agents examine these orders and generate an abstract description of their resource requirements for the scheduling period. This bottleneck-status-list consists of a list of intervals, with each interval annotated by a triple: resources in use, resources requested, and resources available. The request field of this triple represents an abstraction of the agent's true resource requirements. Certain aspects of a reservation, such as mobile resource travel times to the objects to be serviced, cannot be easily estimated in advance. The time intervals specified for each resource request are pessimistic, consisting of the earliest possible start time and latest possible finish time for the activity requesting that resource. The true duration of the task can be estimated by the scheduling agent using its domain knowledge regarding the typical time required to perform a task. We define the average demand for a resource r performing task T in the interval (t_j, t_k) to be:

avg-demand(T, r, t_j, t_k) = duration(T, r) / (t_k - t_j)

Once resource abstractions have been developed for each resource type required (or possessed) by the agent, it transmits its abstractions to all other agents. Likewise, it receives abstractions from all agents. Once the agent has received communications from all other agents, it prepares a map of global resource requirements and uses it to generate a set of data structures called lending possibilities. Each lending possibility represents an interval in which some agent appears to have a shortfall in a resource.
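The average-demand measure and the interval triples of the bottleneck-status-list can be sketched as follows. This is a hypothetical illustration; the function names and data shapes are chosen for the example, not taken from DIS-DSS.

```python
def avg_demand(duration, t_j, t_k):
    """Expected fraction of the window (t_j, t_k) that a task of the given
    duration occupies, assuming its reservation may fall anywhere within
    the window: duration / (t_k - t_j)."""
    assert t_k > t_j
    return duration / (t_k - t_j)

def interval_triple(in_use, requested_demands, available):
    """One bottleneck-status-list entry for an interval:
    (resources in use, summed requested demand, resources available)."""
    return (in_use, sum(requested_demands), available)

# A task taking 30 minutes somewhere in a 120-minute window contributes 0.25
demand = avg_demand(30, 0, 120)  # -> 0.25
```

Summing these per-task demands over an interval gives the "requested" field of the triple that an agent broadcasts to its peers.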
For each lending possibility, the agent generates a list of possible lenders for that resource, based on the global resource map and its knowledge of its own resource requirements. These lending possibility structures are used to predict when remote agents may request resources and when the local agent may need to borrow resources. This information guides the agent's decision-making process in determining both when to process local goals and when and from whom to request resources.

The Distributed Airport Resource Management System

The Distributed Airport Resource Management System testbed was constructed using DIS-DSS to study the roles of coordination and negotiation in a distributed problem-solver. DIS-ARM solves distributed AGSS problems where the function of each scheduling agent is to ensure that each flight for which it is responsible receives the ground servicing (gate assignment, baggage handling, catering, fuel, cleaning, etc.) that it requires in time to meet its arrival and departure deadlines. The supplying of a resource is usually a multi-step task consisting of setup, travel, and servicing actions. Each resource task is a subtask of the airplane servicing supertask. There is considerable parallelism in the task structure: many tasks can be done simultaneously. However, the choice of certain resource assignments can often constrain the start and end times of other tasks. For example, selection of a specific arrival gate for a plane may limit the choice of servicing vehicles due to transit time from their previous servicing locations and may limit refueling options due to the presence or lack of underground fuel tanks at that gate. For this reason, all resources of a specific type cannot be considered interchangeable in the AGSS domain. Only the agent that owns the resource can identify all the current constraints on that resource and decide whether or not it can be allocated to meet a specific demand.
396 Distributed AI

Exploiting Meta-Level Information in DIS-DSS

In this section, we examine three areas in which meta-level abstractions of global resource requirements are exploited in DIS-DSS. We show how the goal rating scheme of an agent's blackboard-based scheduler is modified to satisfy the twin aims of scheduling based on global constraints and of planning activities in order to reduce uncertainty about agent interactions. We describe how communication of resource abstractions is based on models of agents' interests and the manner in which agents choose between local and remote methods of satisfying a request.

Scheduling using Texture Measures

Many scheduling systems divide processing into the categories of variable selection, the choice of the next activity to schedule, and value selection, the selection of a resource and time slot for that activity. In DIS-DSS, variable selection corresponds to the satisfaction of a particular resource request. Value selection is handled in DSS by a collection of opportunistic scheduling heuristics. We focus here on the problem of coordinating resource requests so that local variable-selection heuristics possess sufficient information to make informed decisions. In many knowledge-based scheduling systems, the object of control is to arrange scheduling activities so that the most tightly constrained activities are scheduled first in order to reduce the need for backtracking. In a distributed system, we have an additional criterion: to schedule problem-solving activities in such a way that global uncertainty about certain tasks is reduced before decisions regarding those tasks are made. A scheduler may be uncertain of whether other agents will request a resource in a tightly constrained time period and whether other agents will be able to supply a needed resource. While the resource abstractions may indicate a loan request is likely, the duration of the loan and details of the resource's destination can only be determined once the request has been received. Likewise, details of the precise timing and duration of a loan can only be determined upon receipt of a remote reservation. We have added coordination heuristics to the agenda scheduler of DIS-DSS whose purpose is to promote problematic activities in each agent's scheduling queue so that their early execution will reduce uncertainty about global system requirements.

In the DIS-DSS blackboard-based architecture, tasks which require resources generate service-goals. Requests received from remote agents generate remote-service-goals. Each goal stimulates knowledge sources that act to secure an appropriate resource. The order of execution of knowledge sources depends on the rating of the stimulating goals. Goals are rated using a basic 'most-tightly-constrained-first' opportunistic heuristic. The goals are then stratified according to the following scheme, with the uppermost levels receiving the highest priority and contention within each level being resolved according to the basic rating heuristic.
1. Tightly constrained goals that may not be satisfiable locally or that can only be satisfied by a borrowing event, and remote requests that do not overlap any local request.
2. Tightly constrained goals that can only be satisfied locally.
3. Goals representing requests from remote agents that overlap local goals.
4. Unconstrained or loosely constrained tasks.
5. Goals that potentially overlap with tasks of remote agents.
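The stratified goal rating above can be sketched as a two-part key: a stratum number plus the basic tightness rating as a tie-breaker. This is an illustrative reading of the scheme, not DIS-DSS code; the goal fields ("tight", "remote", "rating", and so on) are assumptions made for the example.

```python
def goal_level(g):
    """Return the priority stratum for a goal (1 = highest priority)."""
    if g["remote"] and not g["overlaps_local"]:
        return 1  # remote request that conflicts with nothing local
    if g["tight"] and (g["needs_borrow"] or not g["satisfiable_locally"]):
        return 1  # tightly constrained goal that needs outside help
    if g["tight"] and g["satisfiable_locally"]:
        return 2
    if g["remote"] and g["overlaps_local"]:
        return 3
    if not g["overlaps_remote"]:
        return 4  # unconstrained or loosely constrained, purely local
    return 5      # potentially overlaps tasks of remote agents (deferred)

def order_goals(goals):
    """Sort by stratum; break ties with the basic tightness rating."""
    return sorted(goals, key=lambda g: (goal_level(g), -g["rating"]))
```

Within a stratum, the `-g["rating"]` term preserves the underlying most-tightly-constrained-first ordering.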
A goal g is considered to be tightly constrained in the interval (t_j, t_k) if, for each resource type r that can satisfy g, there exists a time within that interval at which the number of unreserved resources is less than the sum of the average demand for all outstanding goals:

∀ r s.t. Sat(g, r), ∃ t ∈ (t_j, t_k) s.t. N_available(r, t) < Σ_{g'} avg-demand(task(g'), r, t_j, t_k)

A goal potentially overlaps with a task of a remote agent if there exists a lending-possibility data structure for that remote agent describing a potential shortfall within the time interval spanned by that goal for some resource type that could satisfy the goal. The rationale for this goal ordering is as follows. Goals that cannot be satisfied locally must be transmitted to remote agents. The transmission of a goal conveys considerably more information than is available in the resource texture profiles. The potential lending agent will therefore have more accurate information regarding the interval for which the resource is desired and the preferred shift preference for the reservation in that interval (early or late). Once it has received the goal, it will be able to make more informed decisions about the tightness of constraints for both the local and remote goals. If the agent is able to satisfy the remote goal, it will be able to update its resource demand curve and transmit it to other agents who may also have been potential lenders of that resource. For all these reasons, early transmittal and satisfaction of remote service goals is desirable. Tightly constrained goals that potentially overlap remote requests are deferred until some overlapping goal arrives, or until a resource update arrives indicating that the remote agent no longer requires that resource, or until no other work is available for the agent to perform.
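The tight-constraint test reads directly as an all/any nesting over resource types and times. The sketch below is illustrative; the table shapes (dicts keyed by resource type and by time) and the pre-summed demand figures are assumptions made for the example.

```python
def tightly_constrained(satisfying_types, available, total_avg_demand, times):
    """A goal is tightly constrained if, for EVERY resource type that can
    satisfy it, SOME time in the window has fewer unreserved resources
    than the summed average demand of the outstanding goals.

    available:        {resource_type: {time: unreserved count}}
    total_avg_demand: {resource_type: sum of avg-demand over outstanding goals}
    """
    return all(
        any(available[r][t] < total_avg_demand[r] for t in times)
        for r in satisfying_types
    )
```

In a fuller implementation the demand sum would itself be recomputed per interval from the texture profiles; here it is passed in precomputed to keep the quantifier structure visible.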
By deferring goals until more information about interactions is available, the system can avoid making premature decisions while at the same time working on unrelated or less constrained tasks. Once a request arrives, conflicts for resources can be arbitrated according to which goal is most pressing and least conducive to backtracking and/or constraint relaxation. There are a number of competing requirements for the rating and processing of remote service goals. One would like to process a remote service goal as soon as possible in order to return information to the requesting agent. At the same time, both local and remote service goals requesting the same type of resource should be rated according to the same constraint-tightness heuristics. The goal rating function in DIS-DSS attempts to satisfy these requirements by prioritizing those remote service goals that do not overlap any local service goals and by mapping overlapping remote service goals onto the same priority level as those local goals that they overlap. Note that the "overlapping" relationship is transitive: if the priority of a goal is reduced while waiting for a remote request, any lower-rated goal that overlaps that goal's time interval must also wait, even though it may not directly overlap the interval of the potential remote request.

Guiding Communication using Texture Measures

Reducing communication costs is an important issue in distributed systems. For this reason, DIS-DSS agents use the lending possibility models of agent interactions to guide communication activities. When its resource requirements change, an agent transmits the information about the resource type only to those agents who, based on its local information, would be interested in receiving updates concerning that resource type.
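One way such interest-based filtering of updates might look, as a sketch: the agent records (with their surplus and shortfall tables) are hypothetical stand-ins invented for this example.

```python
def interested_agents(agents, resource_type, demand_increased):
    """Pick the agents worth notifying about a change in our demand profile.

    agents: list of {"name": ..., "surplus": {type: n}, "shortfall": {type: n}}
    """
    if demand_increased:
        # an increase in our demand only matters to agents holding a surplus
        # of that resource type, since only they could lend it
        return [a["name"] for a in agents
                if a["surplus"].get(resource_type, 0) > 0]
    # a decrease in our demand only matters to agents with a shortfall,
    # since it may free up a resource they could borrow
    return [a["name"] for a in agents
            if a["shortfall"].get(resource_type, 0) > 0]
```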
An agent with no surplus resources of a given type may not be interested if the local agent increases its need for a particular resource; likewise, an agent with a surplus of a particular resource may not need to be notified if an agent reduces its demand for that resource type. However, agents who possess shortfalls in a time interval for a particular type of resource will receive updates during processing whenever an agent increases the precision of its resource abstractions by securing or releasing a resource. The use of local knowledge to guide communication episodes may lead to agents' knowledge of the global state of the system becoming increasingly out of date. The degree to which this should be allowed to happen is dependent upon the acceptable level of uncertainty in the system and the accuracy with which resource abstractions can be made.

Ordering Methods for Achieving Resource Assignments

In DSS, the process of securing a resource is achieved through a series of increasingly costly methods: assignment, preemption, and right shifting. These correspond roughly to request satisfaction, backtracking, and constraint relaxation. Preemption is a conservative form of backtracking in which existing reservations are preempted in favor of a more constrained task. Right shifting satisfies otherwise intractable requests by shifting the time interval of the reservation downstream (later) until a suitable resource becomes available. Because this method relaxes the latest-finish-time constraint, it has the potential to seriously decrease the quality of a solution. In the AGSS domain, for example, right shifting a reservation may result in late departures. In DSS, methods are ordered according to increasing cost. In the distributed version of the system, the choice and ordering of methods is more complex.
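The single-agent ordering of methods by increasing cost can be sketched as a simple first-success loop. The method functions here are hypothetical placeholders; in DSS each would be a knowledge source, and each returns a reservation or nothing.

```python
def secure_resource(goal, methods):
    """Try resource-securing methods in increasing-cost order (assignment,
    then preemption, then right shifting); stop at the first success.

    methods: ordered list of (name, fn), where fn(goal) returns a
    reservation or None.
    """
    for name, method in methods:
        reservation = method(goal)
        if reservation is not None:
            return name, reservation
    return None
```

The distributed version, as the text goes on to describe, replaces this fixed ordering with a dynamic choice between local methods and remote requests.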
When an agent cannot immediately acquire a resource locally, it faces a decision: should it perform backtracking or constraint relaxation locally, communicating only when it has exhausted all local alternatives, or should it immediately attempt to borrow the resource from another agent? The decision-making process becomes even more difficult if we allow requests from remote agents to take precedence over local requirements, such that agents may have to perform backtracking or constraint relaxation in order to satisfy a remote request. We consider this last decision process a form of negotiation, because it involves determining which of two agents should bear the cost of reduced solution quality and/or increased problem-solving effort. In DIS-DSS, we use the lending possibility data structures to dynamically generate plans for achieving each resource assignment. When it appears that a remote agent will have surplus resources at the necessary time, the agent will generate a request as soon as it becomes clear that the resource cannot be secured locally. If, however, it appears that the resource is tightly constrained globally, the agent will choose to perform backtracking and/or constraint relaxation operations locally rather than engage in communication episodes that will probably prove futile.
One use of meta-information occurs during the planning for constraint relaxation. The scheduling agent attempts to minimize the magnitude of the right shift in order to reduce the effect of the constraint relaxation on the quality of the solution. To do this, the agent must determine whether the minimum right shift can be achieved locally or remotely. However, requiring agents to submit bids detailing their earliest reservations for a given resource would be a costly process. Instead, the agent uses the abstractions of remote resource availability to generate a threshold value for the right shift delay.
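One way such a threshold-driven choice between local and remote right shifting might be realized is sketched below. The names and signatures are invented for the example; in particular, the per-agent delays are modeled as a static table, whereas in the real protocol each would come back as a reply to a sequential request.

```python
def negotiate_right_shift(local_delay, threshold, remote_offers):
    """Choose between local right shifting and borrowing remotely.

    local_delay:   delay achievable by right shifting locally
    threshold:     estimated remote delay from the resource abstractions
    remote_offers: {agent: delay that agent could achieve}
    """
    while threshold < local_delay:
        best = None
        for agent, delay in remote_offers.items():
            if delay <= threshold:
                return ("remote", agent, delay)  # remote agent secures it
            best = delay if best is None else min(best, delay)
        if best is None or best >= local_delay:
            break                # no remote offer beats local right shifting
        threshold = best         # retry with the earliest remote offer
    return ("local", None, local_delay)
```

A good initial threshold estimate makes the first pass succeed and avoids the extra round of requests, which matches the observation in the text that better estimates mean less communication.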
If this value is less than the delay achieved through right shifting locally, the agent sequentially transmits the resource request to the appropriate remote agents. If a remote agent can provide a reservation with a delay less than or equal to the threshold value, it immediately secures the resource. Otherwise, it returns the delay of the earliest possible reservation. If no reservation is found, the local agent sets the threshold to the earliest possible value returned by some remote agent. This new threshold is then compared to the current best local delay (which might have changed due to local scheduling while the remote requests were being processed). This process continues until a reservation is made or until the threshold becomes greater than the delay achievable by right shifting locally. Obviously, the better the initial estimate for the delay threshold, the fewer communication activities will be required. The meta-information is also used to determine the order in which agents should be asked for resources, beginning with the agent(s) with the least tightly constrained resources.

Experimental Results

The performance of the mechanisms that we have developed for DIS-DSS was tested in a series of experiments using a single-agent system as a basis for comparison. We used six scenarios designed to test the performance of the system in tightly constrained situations. The number of orders in each scenario ranged from 10 to 60, and a minimal set of resources was defined for each scenario. Each scenario was distributed for a three-agent case. Orders were assigned to each agent on a round-robin basis such that each agent would perform approximately the same amount of work. Resources were distributed randomly, so that in some cases each agent would possess all necessary resources while in other cases borrowing from remote agents would be necessary.
We ran DIS-ARM on each scheduling scenario using the following configurations of the scheduler:
- The baseline case with a single agent.
- The 3-agent case with no use of meta-level information, and an opportunistic (most-tightly-constrained-variable-first) goal rating scheme.
- The 3-agent case using the heuristic goal rating scheme incorporating meta-level information but requesting resources from remote agents only when all local methods have failed.
- The 3-agent case using heuristic goal rating, meta-level information, and dynamic reordering of resource acquisition methods to account for the probability of securing a goal either locally or remotely.

For each run, we recorded the average tardiness of the schedule, the number of failed goals (if any), the number of resource-securing methods tried, the number of requests, the number of satisfied remote service goals, and the number of communication episodes that occurred during problem solving. In each case, we assumed that communication costs were negligible in relation to problem-solving and that requests and resource constraint updates would be received on the simulation cycle immediately succeeding the one in which they were sent. Because of the small number of test cases we have examined in our preliminary experiments, we present our results anecdotally. As expected, the distributed version of the scheduler always produces a schedule of somewhat lower quality than the centralized one. When the opportunistic scheduler of the centralized version is used for scheduling in a distributed environment, its lack of information about global constraints causes it to produce somewhat inferior results. The heuristic incorporating meta-level information consistently outperforms the opportunistic scheduler in terms of the number of tardy tasks. The opportunistic scheduler occasionally will produce a schedule with less total tardiness than the distributed algorithm.
We interpret this as a trade-off between satisfying global requirements (by delaying certain goal satisfactions until remote information becomes available) and satisfying local requirements by producing needed results promptly. This is an interesting trade-off that we intend to study in depth. Attempting to always solve problems locally using preemption and constraint relaxation produced schedules with much greater delays than when agents dynamically determined when to request resources remotely based on the meta-level resource abstractions.

Conclusions and Future Work

The work we have performed with DIS-DSS is preliminary, but promising. Our results indicate that the idea of using meta-level information to schedule activities in order to reduce local uncertainty about global constraints results in better coordination between agents with a subsequent increase in goal satisfaction. We have also demonstrated that meta-level information can be successfully used to guide the choice between satisfying goals locally and remotely, and in optimizing the choice of agents from which to request resources. Our experiments were performed with each agent's orders being defined statically before scheduling. This allowed the agents to develop a model of their predicted resource requirements before scheduling began. If we were to model a system in which orders changed dynamically, either due to equipment failures or timetable changes, we would expect the model of global resource requirements to become increasingly inaccurate. We would like to understand the implications of allowing jobs to arrive dynamically on the performance of a distributed system using meta-level information. As well as continuing to explore the role of meta-level resource abstractions, we plan to use the DIS-DSS testbed to explore a number of important issues in distributed scheduling.
One of our primary goals is to expand the idea of negotiation between agents that we have touched upon in this paper. Because the airport ground service scheduling domain represents a "real world" scenario, we are able to create a meaningful cost model involving not only the delay in each schedule, but the probable cost of that delay in terms of missed connections. By allowing agents to exchange this information when requesting resources, they will be able to more meaningfully weigh the importance of local tasks against the quality of the global solution.

References

Carver, N., and Lesser, V. 1994. The evolution of blackboard control architectures. Expert Systems with Applications, Special Issue on the Blackboard Paradigm and Its Applications 7(1):1-30.
Corkill, D. D.; Gallagher, K. Q.; and Murray, K. E. 1986. GBB: A generic blackboard development system. In Proceedings of the Fifth National Conference on Artificial Intelligence, 1008-1014.
Decker, K. S., and Lesser, V. R. 1992. Generalizing the partial global planning algorithm. International Journal of Intelligent and Cooperative Information Systems 1(2):319-346.
Decker, K. S., and Lesser, V. R. 1993. Quantitative modeling of complex computational task environments. In Proceedings of the Eleventh National Conference on Artificial Intelligence, 217-224.
Durfee, E., and Lesser, V. 1991. Partial global planning: A coordination framework for distributed hypothesis formation. IEEE Transactions on Systems, Man, and Cybernetics 21(5):1167-1183.
Erman, L. D.; Hayes-Roth, F.; Lesser, V. R.; and Reddy, D. R. 1980. The Hearsay-II speech-understanding system: Integrating knowledge to resolve uncertainty. Computing Surveys 12(2):213-253.
Fox, M. S. 1983. Constraint-Directed Search: A Case Study of Job-Shop Scheduling. Ph.D. Dissertation, Carnegie Mellon University, Pittsburgh, PA.
Garey, M. R., and Johnson, D. S. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness.
New York: W.H. Freeman.
Hildum, D. W., and Corkill, D. D. 1990. Solving dynamic sequencing problems. COINS Technical Report 90-63, University of Massachusetts.
Hildum, D. W. 1994. Flexibility in a Knowledge-Based System for Solving Dynamic Resource-Constrained Scheduling Problems. Ph.D. Dissertation, Computer Science Dept., University of Massachusetts, Amherst, MA 01003.
Moehlman, T. A.; Lesser, V. R.; and Buteau, B. L. 1992. Decentralized negotiation: An approach to the distributed planning problem. Group Decision and Negotiation (2):161-191.
Sadeh, N. 1991. Look-Ahead Techniques for Micro-Opportunistic Job Shop Scheduling. Ph.D. Dissertation, Carnegie Mellon University, Pittsburgh, PA.
Sandholm, T. 1993. An implementation of the contract net protocol based on marginal cost calculations. In Proceedings of the Eleventh National Conference on Artificial Intelligence, 256-262.
Sen, S., and Durfee, E. 1993. A formal analysis of communication and commitment in distributed meeting scheduling. In Proceedings of the Twelfth Workshop on Distributed AI.
Smith, S. F.; Fox, M. S.; and Ow, P. S. 1986. Constructing and maintaining detailed production plans: Investigations into the development of knowledge-based factory scheduling systems. AI Magazine 7(4):45-61.
Sycara, K.; Roth, S.; Sadeh, N.; and Fox, M. 1991. Distributed constrained heuristic search. IEEE Transactions on Systems, Man, and Cybernetics 21(6):1446-1461.
Yokoo, M.; Ishida, T.; and Kuwabara, K. 1990. Distributed constraint satisfaction for DAI problems. In Proceedings of the 10th International Workshop on Distributed Artificial Intelligence.
A Computational Market Model for Distributed Configuration Design

Michael P. Wellman
University of Michigan, AI Laboratory
1101 Beal Avenue
Ann Arbor, MI 48109-2110
wellman@engin.umich.edu

Abstract

This paper1 presents a precise market model for a well-defined class of distributed configuration design problems. Given a design problem, the model defines a computational economy to allocate basic resources to agents participating in the design. The result of running these "design economies" constitutes the market solution to the original problem. After defining the configuration design framework, I describe the mapping to computational economies and our results to date. For some simple examples, the system can produce good designs relatively quickly. However, analysis shows that the design economies are not guaranteed to find optimal designs, and we identify and discuss some of the major pitfalls. Despite known shortcomings and limited explorations thus far, the market model offers a useful conceptual viewpoint for analyzing distributed design problems.

Introduction

With advances in network technology and infrastructure, opportunities for the decentralization of design activities are growing rapidly. Moreover, the specialization of design expertise suggests a future where teams form ad hoc collaborations dynamically and flexibly, according to the most opportunistic connections. Centralized coordination or control is anathema in this environment. Instead we seek general decentralized mechanisms which respect the local autonomy of the agents involved, yet at the same time facilitate results (in our case, designs) with globally desirable qualities.

Design of complex artifacts can be viewed as fundamentally a problem of resource allocation. We generally have numerous performance and functional objectives to address, with many options for trading among these objectives and for furthering these objectives in exchange for increased cost.
When the design problem is distributed, these tradeoffs are not only across objectives, but also across agents (human or computational) participating in the design. Typically each agent is concerned with a subset of the components or functions of the artifact being designed, and may individually reflect a complex combination of the fundamental objectives.

1An extended version is available via anonymous ftp as ftp://ftp.eecs.umich.edu/people/wellman/aaai94ext.ps.

Consider a hyper-simplified scenario in aircraft design. Suppose we have separate agents responsible for the airfoil, engines, navigation equipment, etc. Suppose that we have a target for the aircraft's total weight. Since total weight is the sum of the weights of its parts, we might consider allocating a priori each agent a slice of the "weight budget". This approach has several serious problems. First, if we do not choose the slices correctly, then it could be that one of the agents makes extreme compromises (e.g., an underpowered engine or expensive exotic metals for the fuselage) while the others could reduce weight relatively easily. Second, it will typically be impossible to determine good slices in advance, because the appropriate allocation will depend on other design choices. For example, depending on the position of the wings, reducing body weight while maintaining structural soundness may be more or less expensive. Or if we have a more powerful engine, then it may be that extra total weight can be accommodated, and the fixed budget itself was not realistic.

Rather than allocate fixed proportions of resources, we desire an approach that dynamically adjusts the allocation as the design progresses. One way to achieve this sort of behavior is via a negotiation and trading process.
For example, if the engine agent would benefit substantially from a slight increment in weight, it might offer to trade some of its drag allowance to the airfoil agent for a share of the latter's weight allocation. If there is enough of this kind of trading going on, then it makes sense to establish markets in the basic resources: weight, drag, etc. Then we can view the entire system as a sort of economy devoted to allocating resources for the design. We call this system the design economy.

In this paper, we present a precise market model for a well-defined class of distributed configuration design problems. Given a design problem, our model defines a computational economy to solve the design problem, expressed in terms of concepts from microeconomic theory. We can then implement these economies in our WALRAS system for market-oriented programming (Wellman 1993), thus producing a runnable design economy.

The following sections describe the configuration design framework, the mapping to computational economies, and preliminary results. For some simple examples, the system can produce good designs relatively quickly. However, analysis shows that the design economies are not guaranteed to find optimal designs, and we identify and discuss some of the major pitfalls. We argue that most of these problems are inherent in decentralization generally, not in the design economy per se, and moreover that the market model offers useful concepts for engineering the configuration of a variety of multiagent tasks.

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Distributed Configuration Design

We adopt a standard framework for configuration design (Mittal and Frayman 1989). Our specific formulation follows (Darr and Birmingham 1994), and covers many representative systems (Balkany et al. 1993). Design in this framework corresponds to selection of parts to perform functions.
More precisely, a design problem consists of:

- A set of attributes, A1, ..., An, and their associated domains, X1, ..., Xn.
- A set of available parts, P, where each part p ∈ P is defined by a tuple of attribute-value pairs.
- A distinguished subset of attributes, F, the functions. The domain of a function attribute is the set of parts that can perform that function.
- A set of constraints, R, dictating the allowable part combinations (either directly or indirectly via constraints on the attributes).
- A utility function, u: X1 × ... × Xn → ℜ, ranking the possible combinations of attribute values by preference.

A design D is an assignment of parts to functions. Thus, the role of functions in this framework is to define when a design is complete. D is feasible if it satisfies R. The valuation of a design, val(D), is the assignment to attribute values induced by the assignment of parts to functions. Typically, this is just the sum over the respective attribute values of a design's component parts. Finally, a feasible design D is optimal if for all feasible designs D', u(val(D)) ≥ u(val(D')).

In addition, it is often useful to organize the parts into catalogs. Let catalog i, 1 ≤ i ≤ m, consist of parts Ci, where C1, ..., Cm is a partition of P. A catalog design includes at most one part per catalog.2 In distributed design, catalogs are the units of distribution: each catalog agent selects a part to contribute or chooses not to participate.

In this general form, the design task is clearly intractable (it subsumes general constraint satisfaction and optimization). Tractability can be obtained only by imposing restrictions on R and u. Instantiations of this framework adopt specialized constraint languages, and often relax the requirement of optimality. We shall accept comparable compromises in our mapping to the market model.

2In the degenerate case where each catalog lists only one part, catalog design reverts to the situation without catalogs. Typically catalogs will correspond to functions, although this is not a requirement. Indeed, this definition places no restriction on the number of functions any part may implement.

Computational Markets

The market model we adopt is a computational realization of the most generic, well-studied theoretical framework, that of general equilibrium theory (Hildenbrand and Kirman 1976; Shoven and Whalley 1992). General equilibrium is concerned with the behavior of a collection of interconnected markets, one market for each good. A general-equilibrium system consists of:

- a collection of goods, g1, ..., gn, and
- a collection of agents, divided into two types:
  - consumers, who simply exchange goods, and
  - producers, who can transform some goods into others.

A consumer is defined by (1) a utility function, which specifies its relative preference for consuming a bundle of goods (x1, ..., xn), and (2) an endowment (e1, ..., en) of initial quantities of the goods. Consumers may trade all or part of their initial endowments in exchange for quantities of the other goods. All trades must be executed at the established market prices, (p1, ..., pn). A consumption bundle is feasible for a consumer if and only if it satisfies the budget constraint,

    Σ_{i=1}^n x_i p_i ≤ Σ_{i=1}^n e_i p_i,

which says that a consumer may spend only up to the value of its endowment. The decision problem faced by a consumer is to maximize its utility subject to the budget constraint.

The other class of agents, producers, do not consume goods, but rather transform various combinations of some goods (the inputs) into others (outputs). The feasible combinations of inputs and outputs are defined by the producer's technology. For example, if the producer can transform one unit of g1 into two units of g2, with constant returns to scale, then the technology would include the tuples {(-x, 2x, 0, ..., 0) | x ≥ 0}. This production would be profitable as long as p1 < 2p2.
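To make the two agent-side conditions concrete, the budget-feasibility test and the profitability of the one-good-into-two production above can be written out directly (a hypothetical sketch; none of this code is part of the WALRAS system):

```python
# Illustrative only: feasibility and profit tests for the agents above.

def budget_feasible(consumption, endowment, prices):
    # A consumer may spend only up to the value of its endowment:
    # sum_i x_i p_i <= sum_i e_i p_i.
    spend = sum(x * p for x, p in zip(consumption, prices))
    wealth = sum(e * p for e, p in zip(endowment, prices))
    return spend <= wealth

def profit(activity, prices):
    # Value of outputs (positive entries) minus cost of inputs (negative entries).
    return sum(q * p for q, p in zip(activity, prices))

# The producer transforming 1 unit of g1 into 2 units of g2 runs the
# activity (-x, 2x); it is profitable exactly when p1 < 2 * p2.
assert profit((-1.0, 2.0), (3.0, 2.0)) > 0   # p1 = 3 < 2*p2 = 4
assert profit((-1.0, 2.0), (5.0, 2.0)) < 0   # p1 = 5 > 2*p2 = 4
```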
The producer's decision problem is to choose a production activity so as to maximize profits, the difference between revenues (total price of output) and costs (total price of input). Note that a producer does not have a utility function.

In the computational market price system we use, WALRAS, we can implement consumer and producer agents and direct them to bid so as to maximize utility or profits, subject to their own feasibility constraints. The system derives a set of market prices that balance the supply and demand for each good. Since the markets for the goods are interconnected (due to interactions in both production and consumption), the price for one good will generally affect the demand and supply for others. WALRAS adjusts the prices via an iterative bidding protocol, until the system reaches a competitive equilibrium (see (Wellman 1993) for details), i.e., an allocation and set of prices such that (1) consumers bid so as to maximize utility, subject to their budget constraints, (2) producers bid so as to maximize profits, subject to their technological possibilities, and (3) net demand is zero for all goods.

An economy in competitive equilibrium has the desirable property of Pareto optimality: it is not possible to increase the utility of one agent without decreasing that of other(s). Moreover, for any Pareto optimum (i.e., any admissible allocation), there is some corresponding set of initial endowments that would lead to this result. Given certain technical restrictions on the producer technologies and consumer preferences, equilibrium is guaranteed to exist and the system converges to it. Specifically, if technologies and preferences are smooth and strictly convex, then a unique equilibrium exists (see (Varian 1984) for formal treatment).
If in addition, the derived demands are such that increasing the price for one good does not decrease the demand for others, then the iterative bidding process is guaranteed to converge to this equilibrium (Arrow and Hurwicz 1977; Milgrom and Roberts 1991).

The Design Economy

We next proceed to instantiate the general-equilibrium framework for our distributed configuration design problem. As noted above, our purpose in applying the economic mechanism is to provide a principled basis for resolving tradeoffs across distributed design agents. By grouping the agents together in a single economy, we establish a common currency linking the agents' local demands for available resources. Using this common currency, the market can (at least under the circumstances sketched above) efficiently allocate resources toward their most productive use with minimal communication or coordination overhead. The fact that all interaction among agents occurs via exchange of goods at standard prices greatly simplifies the design of individual agents, as they can focus on their own attributes and the parameters of their local economic problems. In addition, the price interface provides a straightforward way to influence the system's behavior from the outside, by setting the relative prices of exogenously derived resources and performance attributes. And similarly, the prices within the system can be interpreted from the outside in terms of economic variables that are meaningful to those participating in the design process.

Market Configuration

In general, to cast a distributed resource-allocation problem in terms of a computational market, one needs to specify

- the goods (commodities) traded,
- the consumer agents trading and ultimately deriving value from the goods,
- the producer agents, with their associated technologies for transforming some goods into other goods, and
- the agents' bidding behavior.
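The iterative price-adjustment idea described above can be sketched in miniature (a hypothetical toy, not the WALRAS bidding protocol: the two-good linear excess-demand economy below is invented for illustration, and convergence of this simple rule relies on conditions like the gross-substitutes property just mentioned):

```python
# Toy price adjustment: raise the price of over-demanded goods, lower the
# price of over-supplied ones, until net (excess) demand is roughly zero.

def net_demand(prices):
    # Invented linear excess demands; each falls in its own price and rises
    # in the other's (gross substitutes), so the adjustment converges.
    p1, p2 = prices
    return [10.0 - 2.0 * p1 + p2, 4.0 + p1 - 2.0 * p2]

def adjust_prices(prices, step=0.1, tol=1e-6, max_iters=100_000):
    prices = list(prices)
    for _ in range(max_iters):
        z = net_demand(prices)
        if all(abs(zi) < tol for zi in z):
            return prices          # condition (3): net demand is zero
        for i, zi in enumerate(z):
            prices[i] = max(prices[i] + step * zi, 0.0)
    raise RuntimeError("no convergence")

equilibrium = adjust_prices([1.0, 1.0])   # converges to p = (8, 6)
```

Solving the two linear equations confirms the fixed point: 10 - 2p1 + p2 = 0 and 4 + p1 - 2p2 = 0 give p1 = 8, p2 = 6.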
There are two classes of goods we consider in the problem of distributed design. First, we have basic resource attributes, such as weight and drag in the fanciful aircraft example above. These are resources required by the components in order to realize the desired performance, but are limited or costly or both. Generally, we desire to minimize our overall use of resources. The second class of goods are performance attributes, such as engine thrust or leg room in a passenger aircraft. We also include the function attributes in this category. Performance attributes measure the capabilities of the designed artifact, and we typically desire to maximize these.

In terms of our framework for distributed configuration design, both resource and performance attributes are kinds of attributes. Functions can be viewed as a subclass of performance attributes. So, the first step in mapping a design problem to our market model is to identify the goods with the attributes. Although it is not strictly necessary to distinguish the resource from performance attributes, we do so for expository reasons, as it facilitates intuitive understanding of the flow of goods in the design economy.

The remaining steps are to identify the agents and define their behavior. As mentioned above, it is typical in multiagent design to allocate individual agents responsibility for distinct components or functions. Within our distributed design scheme, these agents correspond to catalogs. In a distributed constraint-satisfaction formulation of the problem (Darr and Birmingham 1994), catalog agents select a part to contribute to the overall design. The chosen part entails a particular pattern of resource usage and performance, as specified in its associated attribute vector. Correspondingly, in the design economy, each part in the catalog is associated with a vector of resource and performance goods. The resources can be interpreted as input goods, and the performance goods as output.
In this view, the catalog producer is an agent that transforms resources to performance. For example, the engine agent in our aircraft design transforms weight and drag (and noise, etc.) into thrust. The particular combinations of weight/drag/thrust/... represented by the various engine models constitute this producer's technology. Thus, to specify a catalog producer's technology (and that's all there is to specify for a producer), we simply form a set of the attribute-value tuples characterizing each part. We then go through and negate the values for the resource goods, leaving the values for performance goods intact.3

An economy with only producers would have no purpose. To ground the system, we define a consumer agent, conceptually the end user or customer for the overall design.4 The consumer is endowed with the basic resource goods. So for example, we will initialize the system by endowing the consumer with a total weight, typically an amount greater than the heaviest conceivable aircraft defined by combination of the heaviest available parts. The idea is that the consumer then (effectively) sells this weight to the various catalog agents, in exchange for performance goods like thrust.

3This assumes that all attributes are measured in increasing resource usage or increasing performance. If not, we would simply rescale the values in advance.

4If there are more than one class of users or customers with distinct preferences, then we could introduce several consumer agents. The underlying computational market accommodates any number of agents, but our design economies thus far have employed only one.

The consumer has preferences over the overall design attributes, as specified by its utility function. This function is essentially equivalent to the function u defined in the general framework for configuration design. There is one syntactic modification, however. In the original specification, u was increasing in the performance attributes and decreasing in the resource attributes.
The consumer's utility function, in contrast, must be increasing in all attributes. To ensure this, we define the "consumption" of a resource good as the amount left over after the consumer sells part of its endowment. Thus, if the consumer's endowment of weight is w and the total weight of the aircraft (sum of the weight of the parts) is w', then the effective weight consumed is w - w'. If the consumer prefers lighter airplanes, then it prefers to "consume" as much weight as possible.

Thus far, we have accounted for all elements of the configuration design framework except for the constraints (recall that the functions are a subset of performance attributes). We are able to map some kinds of constraints into this framework, but not others. The simplest type of constraint is an upper bound on the total usage of a particular resource (e.g., the total noise cannot exceed FAA regulations). To capture this kind of constraint, we simply endow the consumer with exactly the upper bound. Since the consumer is the only source of resource goods in the economy, this effectively restricts the combined choices of the producers. Some other kinds of constraints can also be handled by defining new intermediate goods. This is best illustrated by example in the next section.

Finally, we must note an implicit assumption underlying the mapping described above. In order for the resource and performance goods to be reasonably traded among the consumer and producers, it must be the case that the total resource usage (resp. performance achieved) does indeed correspond to the sum of the parts. In other words, the val function described above must be additive (but note that the utility function u need not be). There may be some encoding tricks to get around this restriction in some cases, but the basic design economy assumes this property.

An Example

To illustrate the mapping from configuration design problems to design economies, we carry out the exercise for an example presented by (Darr and Birmingham 1994). The problem is a very simplified computer configuration design. In the example, we have two functions to satisfy: processing, and serial port. Our design criteria are dollar cost and reliability, as measured in failures-per-million-hours (fpmh). These criteria are represented in the design economy as resource goods. There are no performance goods, except for the two functions mentioned. There is also another good, memory access, which is conceived here not as an overall performance attribute but as an intermediate good enabling the processing function.5

We also have three constraints. The total dollar cost must not exceed 11, the total failure rate must not exceed 10 fpmh, and the RAM must be at least as fast as the CPU's memory access. The first two of these constraints are captured simply by endowing the consumer with 11 dollars and 10 fpmh. The consumer's utility function specifies preferences over these two goods, as well as the functions processing and port. Representing the functions as utility attributes rather than constraints deviates from the original framework somewhat by allowing the user the flexibility to trade off functionality for performance or resource savings.

There are 14 available parts, organized into three catalogs. The CPU catalog contains four CPU models, each of which supplies the processing function. Five serial ports are available to support the port function, and five RAM models can supply memory access. The complete CPU catalog is presented in Table 1.

            process  dollars  fpmh  memory
    CPU1       1        6       1      40
    CPU2       1        4       7     140
    CPU3       1        2       2      40
    CPU4       1        1       5      30

    Table 1: CPU catalog.

The catalogs are converted into technologies simply by listing the attribute tuples as good tuples, possibly after some rescaling.
For example, the CPU agent's technology is:

    {(1, -6, -1, -1/40), (1, -4, -7, -1/140), (1, -2, -2, -1/40), (1, -1, -5, -1/30), (0, 0, 0, 0)}

This means that the agent has five options, with the net good productions listed. The first option is to produce an output of 1 unit of processing (by default, functions take values in {0,1}) from an input of 6 dollars, 1 fpmh, and 1/40 memory access units. This last input requires some explanation. In order to represent the constraint that CPU be at least as fast as RAM, we define the intermediate good memory access, which is an output of RAM and an input of CPU. The constraint is then enforced by requiring that CPU buy enough speed from RAM. Since the values listed in the catalog above are memory access times, we invert them to convert to speed units.

Finally, the tuple of zeros is an element of every technology. An agent always has the option to produce nothing, in which case its resource usage is zero (as are its profits).

5We chose this interpretation specifically to illustrate the notion of intermediate goods; in the next section we simplify the model to treat memory as a function.

Similar technologies are generated for the RAM catalog (input dollars and fpmh, output memory) and serial port (input dollars and fpmh, output port), as shown in Table 2.

    RAM (output: memory speed): 1/40, 1/10, 1/35, 1/30, 1/35

            $   fpmh  port
    SP1    -6    -1     1
    SP2    -3    -3     1
    SP3    -2    -4     1
    SP4    -1    -8     1
    SP5    -1    -6     1

    Table 2: Technologies for RAM and Port catalog producers.

The configuration of the design economy is best described in terms of flow of goods, depicted in Figure 1. The consumers supply the basic resources, which are transformed by the producers to performance goods (as well as intermediate resources needed by other producers), which are ultimately valued by the consumer.

    [Figure 1: Flow of goods in the example design economy. Goods (design attributes): dollars, fpmh, processing, memory, port; producers (catalogs): CPU, RAM, serial port; consumer: end user.]
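The catalog-to-technology conversion just illustrated, together with the producer's discrete choice of a most profitable part, can be sketched as follows (hypothetical code; the numbers follow the CPU catalog of Table 1, and the price vector is invented for illustration):

```python
# Goods order: (processing, dollars, fpmh, memory speed).
# Catalog rows follow Table 1: (process, dollars, fpmh, access time).
CPU_CATALOG = {
    "CPU1": (1, 6, 1, 40),
    "CPU2": (1, 4, 7, 140),
    "CPU3": (1, 2, 2, 40),
    "CPU4": (1, 1, 5, 30),
}

def to_technology(catalog):
    # Negate the resource attributes (inputs), keep performance (output)
    # positive, and invert memory access *time* into a memory-*speed* input.
    tech = {"null": (0.0, 0.0, 0.0, 0.0)}  # producing nothing is always an option
    for name, (process, dollars, fpmh, access_time) in catalog.items():
        tech[name] = (float(process), -float(dollars), -float(fpmh),
                      -1.0 / access_time)
    return tech

def best_part(tech, prices):
    # The producer bids the part with maximum profit sum_i p_i x_i.
    def pi(vec):
        return sum(q * p for q, p in zip(vec, prices))
    return max(tech, key=lambda name: pi(tech[name]))

tech = to_technology(CPU_CATALOG)
assert tech["CPU1"] == (1.0, -6.0, -1.0, -1.0 / 40)  # first tuple above
# At these (invented) prices CPU3 is most profitable; at a high enough
# dollar price every part loses money and the null option is optimal.
assert best_part(tech, (10.0, 1.0, 0.5, 40.0)) == "CPU3"
assert best_part(tech, (10.0, 100.0, 0.5, 40.0)) == "null"
```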
Agent Behavior

All that remains to specify our mapping is to define the behavior of the various agents. The consumer's problem is to set demands maximizing utility, subject to the budget constraint. This defines a constrained optimization problem, parametrized by current going prices. In our example, we adopt a standard special functional form for utility, one exhibiting constant elasticity of substitution (CES) (Varian 1984). The CES utility function is

    u(x_1, ..., x_n) = ( Σ_{i=1}^n a_i x_i^ρ )^{1/ρ},

where the a_i are good coefficients and ρ is a generic substitution parameter. One virtue of the CES utility function is that there is a closed-form solution to the optimal demand function,

    x_i(p_1, ..., p_n) = (a_i^{1/(1-ρ)} / p_i^{1/(1-ρ)}) · ( Σ_{j=1}^n p_j e_j ) / ( Σ_{j=1}^n a_j^{1/(1-ρ)} p_j^{ρ/(ρ-1)} ).

The consumer can simply evaluate this function at the going prices whenever prompted by the bidding protocol. In WALRAS, bids are actually demand curves, where the price p_i of the good i being bid varies while the other prices are held fixed. CES consumers transmit a closed-form description of this curve, with p_i the dependent variable.

The producers' bidding task is also quite simple. Catalog producers face a discrete choice among the possible component instances, each providing a series of values for goods corresponding to resource and performance attributes. Let (x_1^j, ..., x_n^j) be the vector of quantities representing part j in the producer's catalog, and π_j denote the profitability of part j, as a function of prices:

    π_j(p_1, ..., p_n) = Σ_{i=1}^n p_i x_i^j.

All catalogs include the null part 0, where x_i^0 = 0 for all i. Given a set of prices, the producer selects the most profitable part:

    j*(p_1, ..., p_n) = argmax_j π_j(p_1, ..., p_n).
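The closed-form CES demand can be evaluated directly, as in this hypothetical sketch (coefficients, endowment, and prices are invented; the formula is the one displayed above, rewritten with σ = 1/(1-ρ), so that p^{ρ/(ρ-1)} = p^{1-σ}):

```python
# CES demand: x_i = (a_i / p_i)^sigma * wealth / sum_j a_j^sigma * p_j^(1-sigma),
# where sigma = 1/(1 - rho) and wealth = sum_j p_j e_j.

def ces_demand(a, e, prices, rho):
    sigma = 1.0 / (1.0 - rho)
    wealth = sum(p * ej for p, ej in zip(prices, e))
    denom = sum(aj ** sigma * p ** (1.0 - sigma) for aj, p in zip(a, prices))
    return [ai ** sigma * p ** (-sigma) * wealth / denom
            for ai, p in zip(a, prices)]

x = ces_demand(a=[1.0, 2.0], e=[3.0, 1.0], prices=[1.0, 2.0], rho=0.5)
# Walras's law: the demanded bundle exactly exhausts the budget (wealth = 5).
assert abs(sum(p * xi for p, xi in zip([1.0, 2.0], x)) - 5.0) < 1e-9
```

Note that the demand for good i depends on all prices; to produce the demand-curve bid that the protocol expects, one would evaluate this expression with p_i left variable and the other prices fixed.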
Just as for consumers, when a producer bids on good i, it specifies a range of demands for all values of p_i (the price of good i), holding all other prices fixed, which is determined by the optimal part for that price. When all prices but one are fixed, we can simplify the profit function as follows:

    π_j(p_1, ..., p_n) = p_i x_i^j + β_j,  where  β_j = Σ_{k≠i} p_k x_k^j

depends on part j and the fixed prices p_k, k ≠ i. For example, part j is more profitable than j' if and only if p_i x_i^j + β_j > p_i x_i^{j'} + β_{j'}, or equivalently (for x_i^j > x_i^{j'}),

    p_i > (β_{j'} - β_j) / (x_i^j - x_i^{j'}).

It is clear from the above that parts with higher values of x_i^j become maximally profitable, if ever, at higher prices p_i. Therefore, one way to calculate j*(p_1, ..., p_i, ..., p_n) is to sort the parts in increasing order of x_i^j and then use the inequality above to derive the price threshold between each adjacent pair.6 These thresholds dictate the optimal part at any p_i (assuming the other prices fixed), and the associated demand x_i^{j*}. Thus, the form of the producer's demand is a step function, defined by a set of threshold prices where different components become optimal.

6This includes the null part, with x_i^0 = β_0 = 0. If the threshold decreases, then the intervening part can never be optimal (it is not on the convex hull) for any price of good i, and can be skipped.

Results

We have run the design economy on a few simple examples, including the one presented above modified to treat memory as a function rather than an intermediate good. On this example, it produces the optimal design in three bidding cycles, although it never (as long as we have run it) reaches a price equilibrium. After the third cycle, the prices continue to fluctuate, although never enough to cause one of the catalog producers to change its optimal part. Unfortunately, we have no way of detecting with certainty that the part choices are stable.

Although we have yet to run systematic experiments, we have observed similar behavior on a variety of small examples. We have also encountered examples where the design economy does not appear to converge on a single design, or it converges on a design far from the global optimum. This is not surprising, as the general class of design economies producible by the mapping specified above does not satisfy the known conditions for existence of competitive equilibrium. Indeed, the conditions are never strictly satisfied by design economies generated as described, because the discreteness of catalogs violates convexity.

Extensions

As mentioned above, designs produced by the market model are not guaranteed to be optimal or even feasible, due to the discreteness (and other non-convexities) of the problem. They are more likely to be local optima, where no single change of policy by one of the producer agents will constitute an improved design. In current work, we are attempting to characterize the performance of the scheme for special cases. For those cases where perfectly competitive equilibria do not exist, we are investigating the possibility of relaxing the competitiveness assumption.

In addition, we are looking into hybrid schemes that use the market to bound the optimal value by computing the global optimum for a smooth and convex relaxation of the original problem. This is analogous to branch-and-bound schemes that make use of regular linear-programming algorithms for integer-programming problems. In this and other approaches, we expect the economic price information exploited by the market-oriented approach to lead to more rational tradeoffs in the distributed design process.

We are also exploring combinations of market-based and constraint-based methods, where we use general constraints (including those inexpressible in the market model) to prune the catalogs before running the design economy. A hybrid approach interleaving market-directed search for good designs with constraint reasoning uses each method to compensate for inadequacies in the other.
Economics and Distributed AI

Why Economics?

To coordinate a set of largely autonomous agents, we usually seek mechanisms that (A) produce globally desirable results, (B) avoid central coordination, and (C) impose minimal communication requirements. In addition, as engineers of such systems, we also prefer mechanisms that (D) are amenable to theoretical and empirical analysis. In human societies, advocates of market economies argue that B is essential (for various reasons) and that market price systems perform well on criterion A because they provide each agent with the right incentives to further the social good. Because prices are a very compact way to convey this incentive information to each agent, we can argue that price systems also satisfy C (Koopmans 1970). For these reasons, some have found the market economy an inspiring social metaphor for approaches to coordinating distributed computational agents. Thus, we sometimes see mechanisms and protocols appealing to notions of negotiation, bidding, or other economic behavior.

Criterion D is also a compelling motivation for exploring economic coordination mechanisms in a computational setting. The problem of coordinating multiple agents in a societal structure has been deeply investigated by economists, resulting in a large body of concepts and insights, as well as a powerful analytical framework. The phenomena surrounding social decision making have been studied in other social science disciplines as well, but economics is distinguished by its focus on three particular issues:

Resource allocation. The central aspect of the outcome of the agents' behavior is the allocation of resources and distribution of products throughout the economy.

Rationality abstraction. Most of microeconomic theory adopts the assumption that individual agents are rational in the sense that they act so as to maximize utility.
This approach is highly congruent with much work in Artificial Intelligence, where we attempt to characterize an agent's behavior in terms of its knowledge and goals (or more generally, beliefs, desires, and intentions). Indeed, this knowledge level analysis requires some kind of rationality abstraction (Newell 1982), and is perhaps even implicit in our usage of the term agent.

Decentralization. The central concern of economics is to relate decentralized, individual decisions to aggregate behavior of the overall society. This is also the concern of Distributed AI as a computational science.

Related Work

In addition to my own prior work in market-oriented programming (Wellman 1993), there have been several other efforts to exploit markets for distributed computation. Most famous in AI is the contract net (Davis and Smith 1983), but it is only recently that true economic mechanisms have been incorporated in that framework (Sandholm 1993). There have been interesting recent proposals for incorporating a range of economic ideas in distributed allocation of computational resources (Drexler and Miller 1988; Miller and Drexler 1988), as well as some actual experiments along these lines (Cheriton and Harty 1993; Kurose and Simha 1989; Waldspurger et al. 1992). However, there are no other computational market models for distributed design of which we are aware.

This approach also shares some conceptual features with Shoham's approach to "agent-oriented programming" (Shoham 1993). Where Shoham defines a set of interactions based on speech acts, we focus exclusively on the economic actions such as exchange and production. In both approaches (as well as some others (Huberman 1988)), the underlying idea is to get an improved understanding of a complex computational system via social constructs.

Finally, there is a large literature on decomposition methods for mathematical programming problems, which could perhaps be applied to distributed design.
Many of these and other distributed optimization techniques (Bertsekas and Tsitsiklis 1989) can themselves be interpreted in economic terms, using the close relationship between prices and Lagrange multipliers. The main distinction of the approach advocated here is conceptual. Rather than taking a global optimization problem and decentralizing it, our aim is to provide a framework that accommodates an exogenously given distributed structure.

Conclusions

The main contribution of this work is a precise market model for distributed design, covering a significant class of configuration design problems. The model has been implemented within a general environment for defining computational market systems. Although we have yet to perform extensive, systematic experiments, we have found that the design economy produces good designs on some simple problems, but breaks down seriously on others. Analysis is underway to characterize the cases where it works. It cannot be expected to work universally, as these are known intractable optimization problems, and distributing the problem only makes things worse. In the long run, by embedding economic concepts within our design algorithms, we facilitate the support of actual economic transactions that we expect will eventually be an integral part of networks for collaborative design and other inter-organizational interactions. Many other technical problems will need to be solved (involving security for proprietary information and bidding protocols, for example) before this is a reality, but integrating economic ideas at the conceptual level is an important first step.

Acknowledgments

Special thanks to Bill Birmingham and Tim Darr for assistance with the distributed design problem and examples. Daphne Koller and the anonymous referees provided useful comments on the paper and the work.

References

Arrow, K. J. and L. Hurwicz, Eds. (1977). Studies in Resource Allocation Processes. Cambridge, Cambridge University Press.

Balkany, A., W. P.
Birmingham, and I. D. Tommelein (1993). An analysis of several configuration design systems. AI EDAM 7: 1-17.

Bertsekas, D. P., and J. N. Tsitsiklis (1989). Parallel and Distributed Computation. Englewood Cliffs, NJ, Prentice-Hall.

Cheriton, D. R., and K. Harty (1993). A market approach to operating system memory allocation. Stanford University Department of Computer Science.

Darr, T. P. and W. P. Birmingham (1994). An Attribute-Space Representation and Algorithm for Concurrent Engineering. University of Michigan.

Davis, R. and R. G. Smith (1983). Negotiation as a Metaphor for Distributed Problem Solving. Artificial Intelligence 20: 63-109.

Drexler, K. E. and M. S. Miller (1988). Incentive engineering for computational resource management. In (Huberman 1988) 231-266.

Hildenbrand, W. and A. P. Kirman (1976). Introduction to Equilibrium Analysis: Variations on Themes by Edgeworth and Walras. Amsterdam, North-Holland.

Huberman, B. A., Ed. (1988). The Ecology of Computation. North-Holland.

Koopmans, T. C. (1970). Uses of prices. Scientific Papers of Tjalling C. Koopmans. Springer-Verlag. 243-257.

Kurose, J. F., and R. Simha (1989). A microeconomic approach to optimal resource allocation in distributed computer systems. IEEE Trans. on Computers 38: 705-717.

Milgrom, P. and J. Roberts (1991). Adaptive and sophisticated learning in normal form games. Games and Economic Behavior 3: 82-100.

Miller, M. S. and K. E. Drexler (1988). Markets and Computation: Agoric Open Systems. In (Huberman 1988) 133-176.

Mittal, S. and F. Frayman (1989). Towards a generic model of configuration tasks. Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, MI, Morgan Kaufmann.

Newell, A. (1982). The Knowledge Level. Artificial Intelligence 18: 87-127.

Sandholm, T. (1993). An implementation of the contract net protocol based on marginal cost calculations. Proceedings of the National Conference on Artificial Intelligence, Washington, DC, AAAI Press.
Shoham, Y. (1993). Agent-oriented programming. Artificial Intelligence 60: 51-92.

Shoven, J. B. and J. Whalley (1992). Applying General Equilibrium. Cambridge University Press.

Varian, H. R. (1984). Microeconomic Analysis. New York, W. W. Norton & Company.

Waldspurger, C. A., T. Hogg, B. A. Huberman, et al. (1992). Spawn: A distributed computational economy. IEEE Transactions on Software Engineering 18: 103-117.

Wellman, M. P. (1993). A market-oriented programming environment and its application to distributed multicommodity flow problems. Journal of Artificial Intelligence Research 1(1): 1-23.
Emergent Coordination through the Use of Cooperative State-Changing Rules

Claudia V. Goldman and Jeffrey S. Rosenschein*
Computer Science Department
Hebrew University
Givat Ram, Jerusalem, Israel
clag@cs.huji.ac.il, jeff@cs.huji.ac.il

Abstract

Researchers in Distributed Artificial Intelligence have suggested that it would be worthwhile to isolate "aspects of cooperative behavior," general rules that cause agents to act in ways conducive to cooperation. One kind of cooperative behavior is when agents independently alter the environment to make it easier for everyone to function effectively. Cooperative behavior of this kind might be to put away a hammer that one finds lying on the floor, knowing that another agent will be able to find it more easily later on. We examine the effect a specific "cooperation rule" has on agents in the multi-agent Tileworld domain. Agents are encouraged to increase tiles' degrees of freedom, even when the tile is not involved in an agent's own primary plan. The amount of extra work an agent is willing to do is captured in the agent's cooperation level. Results from simulations are presented. We present a way of characterizing domains as multi-agent deterministic finite automata, and characterizing cooperative rules as transformations of these automata. We also discuss general characteristics of cooperative state-changing rules. It is shown that a relatively simple, easily calculated rule can sometimes improve global system performance in the Tileworld. Coordination emerges from agents who use this rule of cooperation, without any explicit coordination or negotiation.

Introduction

Distributed Artificial Intelligence (DAI) is concerned with effective agent interactions, and the mechanisms by which these interactions can be achieved.
Researchers in DAI have taken many approaches to this overall question, considering in particular explicit coordination and negotiation techniques (Smith 1978; Malone, Fikes, & Howard 1988; Kuwabara & Lesser 1989; Conry, Meyer, & Lesser 1988; Kreifelts & Martial 1990; Durfee 1988; Sycara 1988; 1989; Kraus & Wilkenfeld 1991; Zlotkin & Rosenschein 1993b; Ephrati & Rosenschein 1993), as well as implicit modeling of other agents' beliefs and desires (Genesereth, Ginsberg, & Rosenschein 1986; Gmytrasiewicz & Durfee 1992; Kraus 1993; Grosz & Kraus 1993).

*This research has been partially supported by the Israeli Ministry of Science and Technology (Grant 032-8284).

There have also been repeated attempts by researchers to establish norms of cooperative behavior, general rules that would cause agents to act in ways conducive to cooperation. The search space for multi-agent action is large, and cooperative behavior on the part of agents would ideally act to limit this search space. These investigations into cooperative behavior have generally taken the approach of shaping agents' plans in particular directions, such that other agents could interact appropriately. An agent that acts predictably, shares its information, and defers globally constraining choices as long as possible, will be an easier one with which to coordinate. Work in this area includes early research by Davis and his colleagues at MIT (Davis 1981), and some of the RAND work on cooperative behavior in the air traffic control domain (McArthur, Steeb, & Cammarata 1982). Multi-agent reactive systems have also been analyzed, where solutions are arrived at dynamically by reactive agents (eco-agents) in multi-agent environments (Ferber & Drogoul 1992). More recently, Tennenholtz, Shoham, and Moses have considered how social laws for artificial agent societies could be developed and evaluated (Tennenholtz & Moses 1989; Shoham & Tennenholtz 1992b; 1992a).
While these streams of research have considered how agents' plans could be adapted to maximal cooperative effect, we take a different approach to the question. Instead of asking how an agent might temper its own goal-satisfying behavior to be cooperative, we ask how an agent might transform the world in a cooperative manner, at the same time that it is pursuing its own goal. We explore cooperative state-changing rules, that are imposed on the agents' behaviors as meta-rules. Those meta-rules don't have any influence on the primary actions that the agents are going to execute. Rather, they induce the agents to perform extra work that will transform the world into one in which the work of all agents might be done more easily.

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Tileworld Interactions

The Domain

Consider agent interactions in a multi-agent version of the Tileworld (Pollack & Ringuette 1990). Agents can move only through navigable discrete squares; moving from one square to another costs one unit. Agents are programmed to roam the grid and push tiles into holes. In our simulations, we considered two ways in which agents decide which tile to go after. In one variation, they choose the closest (Euclidean distance) tile, compute and traverse their path to it, then push it to the hole that is closest to the tile (again, Euclidean distance). The computation of distance is a (lower-bound) heuristic, since it doesn't take into account barriers, other agents, and tiles (but it is quick to compute). In the second variation, agents are assigned tiles and holes arbitrarily by the programmer, and stop when they finish their assignments. This use of the Tileworld is non-standard, in that we are not focusing here on the architecture (reactive or otherwise) of the agents.
Instead, we are using the Tileworld as an interesting, constrained domain that helps us understand implicit cooperative behavior.

Strongly-Coupled Interactions

The goals of separate agents in the multi-agent Tileworld are highly interconnected, primarily because the constraints on movement are so severe. Agents need to laboriously go around barriers to get into position and push a tile into a hole. Consider, for example, the simple interaction shown in Figure 1 (a variation of an example from (Zlotkin & Rosenschein 1993a)).

Figure 1: Strongly-Coupled Interactions

Assume that agent A1 wants to fill holes 1 and 2, and that agent A2 wants to fill holes 2 and 3 (perhaps the agents were assigned these goals a priori). For either agent to accomplish its goal, it would need to carry out a large number of movements to position the tiles and itself appropriately. For example, for A1 to fill its holes alone, it needs to move 17 steps (assuming A2 is not in its way). Similarly, A2, alone in the world, would need to move 26 steps to fill its holes. However, if they work together, they can satisfy A1 completely (and A2 partially) by going 12 steps (or satisfy A2 completely and A1 partially). The highly constrained nature of the multi-agent Tileworld provides ample opportunity for cooperative behavior, as the above example shows (e.g., one agent pushes a tile away from a barrier, while the other then pushes it perpendicularly). However, finding these multi-agent optimal plans tends to be a very difficult task. Instead, we examine the possibility that the cooperative behavior exhibited in the above example can be approximated by giving the agents an inclination towards sociable behavior. We will induce this kind of behavior through a meta-rule that causes the agents to help one another implicitly, without having to search for an optimal multi-agent plan.
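The first tile-selection variation described in the Domain section (pick the closest tile by Euclidean distance, then push it toward the hole closest to that tile) can be sketched as follows. This is our own illustration; the coordinates and function names are not from the paper, and, as the text notes, the distance heuristic deliberately ignores barriers and other agents.

```python
import math

def euclidean(a, b):
    """Straight-line distance between two grid squares."""
    return math.dist(a, b)

def choose_tile_and_hole(agent, tiles, holes):
    # Closest tile to the agent, then closest hole to that tile --
    # a quick lower-bound heuristic, not a true path cost.
    tile = min(tiles, key=lambda t: euclidean(agent, t))
    hole = min(holes, key=lambda h: euclidean(tile, h))
    return tile, hole

tile, hole = choose_tile_and_hole(
    agent=(0, 0), tiles=[(2, 1), (5, 5)], holes=[(2, 4), (9, 9)])
print(tile, hole)  # (2, 1) (2, 4)
```

The heuristic is cheap to compute precisely because it skips path planning; the agent only plans an actual barrier-avoiding path once the tile and hole are chosen.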
Ideally, when the agents are acting sociably, their combined activity will approach the optimal solution that they would have found had they carried out full multi-agent planning, but at a fraction of the computational cost.

A Rule for Sociable Behavior in the Tileworld

In general, a rule for sociable behavior can take several forms. In the simplest case, an agent may have two courses of action that he perceives as equivalent; if other agents would prefer him to carry out specifically one of those courses of action, he might do so in order to be cooperative. Sometimes, however, we may design agents to actually carry out extra work, to improve the environment for other agents (the amount of extra work subject to the designer's discretion). This latter kind of rule is the one that interests us here. Obviously, if one designer is building all agents, as in a cooperative problem solving scenario, then such a cooperative meta-rule will have clear utility if it improves overall system performance. If separate designers, with separate goals, are building the agents, there may still be justification for their putting in such a meta-rule, under certain circumstances; however, we do not consider these issues (such as stability of the cooperative rule) in this paper. In the Tileworld domain, it is better for every agent to have tiles freely movable, i.e., less constrained by barriers, so that tiles can be pushed into holes more easily. Each tile has a degree of freedom associated with it, the number of directions in which it can be pushed, either zero, two, or four. When we want agents to act cooperatively, we induce them to free tiles, by increasing the tiles' degree of freedom from two to four. The key point in any given domain is to identify the state characteristics that allow goals to be achieved more easily. Cooperative agents are those that tend towards moving the world into these more conducive, less constrained configurations.
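A tile's degree of freedom can be sketched as a small helper of our own (not from the paper). We assume that pushing a tile in some direction requires both the square behind it (where the pushing agent stands) and the square in front of it to be unblocked, so opposite directions come in pairs and the count is 0, 2, or 4, matching the values given in the text:

```python
def degree_of_freedom(tile, blocked):
    """Number of directions a tile can be pushed (0, 2, or 4).

    `blocked` is the set of grid squares occupied by barriers.
    A push along an axis needs both the square behind the tile
    and the square ahead of it to be clear, so directions come
    in opposite pairs.
    """
    x, y = tile
    free = 0
    for dx, dy in [(1, 0), (0, 1)]:          # east-west and north-south axes
        behind = (x - dx, y - dy)
        ahead = (x + dx, y + dy)
        if behind not in blocked and ahead not in blocked:
            free += 2                         # pushable both ways on this axis
    return free

# A tile with a barrier directly to its east: only the N-S axis is open.
print(degree_of_freedom((2, 2), blocked={(3, 2)}))  # 2
```

Under this reading, "freeing" a tile means pushing it away from a barrier so that a blocked axis opens up, raising the count from two back to four.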
In general, a world with more freed tiles is better than one that has constrained tiles. Although it may not be cheap for an agent to free a given tile, it may sometimes be possible to exert a small amount of extra effort, free a tile, and save another agent a large amount of work. In particular, if agent Ai is on its way to push a tile into a hole, and there is a constrained tile close by that can be freed, then the agent might free it. This kind of cooperation does not require that the agents communicate, nor that they have a model of the other agents' specific plans, beliefs, or goals (though we do assume that agents know the current status of the grid). Cooperation will emerge out of the sociable behavior of the individual agents.

How much extra work should an agent be willing to do to act sociably? In our domain, how close does a constrained tile need to be for an agent to go out of its way and free it? In general, the extra work that the agent may do is the movement outside one of its minimal paths and back, so as to free a tile. We call this amount of extra work the cooperation level of the agent.

Simulations

We have run simulations using the MICE distributed agent testbed (Montgomery et al. 1992) to statistically analyze the efficacy of different cooperation levels (Hanks, Pollack, & Cohen 1993).

Figure 2: Simulations (Scenario One and Scenario Two)

In all the experiments, the agents have positive, static and predefined cooperation levels. At each tick of time, the agent tries to free a constrained tile that is different from the one it is pushing into a hole, such that the sum of the costs of the paths from the agent location to the constrained tile, freeing the tile, and going back to its original position (to continue with its original task) is not larger than its cooperation level. We first discuss the impact of the cooperative rule, presented above, for two specific scenarios (see Figure 2). These are illustrative of general ways in which the rule can generate cooperative activity. Then we present additional results gathered from using the rule in randomly generated Tileworlds.

Scenario One: The aim of each agent in this example is to fill as many holes as it can. Both agents are by design trying to get to their closest tile. In the first scenario, the tiles are at a diagonal to their final holes; since agents can't push tiles diagonally, it is more efficient for one agent to position the tile while the other pushes it without any repositioning necessary. Here, A2 positions both tiles (getting the "assist"), while A1 actually does the work of pushing the tiles into the holes. Total work: 11 for A1 and 13 for A2 (instead of 17 for each when there is no cooperative rule in force). It's important to emphasize that A2 does not push tiles because it understands that A1 will use them; it is simply using the cooperative rule (A2 had a cooperation level of 2 for this scenario). The optimal solution, created perhaps by a central planner, would actually have saved the agents some work; that solution would consist of only 12 steps (5 for A2 and 7 for A1), but require a great deal more effort to find. In the optimal solution, A2 does not run after tiles that A1 eventually pushes into holes.

Scenario Two: In the second scenario, agents are blocked from their closest tiles by a barrier. As they move around the barrier, another agent prepares the target tile by pushing it to the end of the barrier. With A1's cooperation level set at 8, A2's level set at 4, and A3's set at 0, the work is accomplished in 37 steps (as opposed to 48 for the non-cooperative solution). The extra work undertaken by A1 benefits A2; the extra work undertaken by A2 benefits A3. The optimal solution, more difficult to compute, takes 28 steps.

Experimental Results

We have run agent experiments on 74 randomly generated Tileworlds. All the worlds consisted of 4 agents, 6 holes, 6 tiles and 8 barriers. Each agent's primary goal is to push the closest tile into the closest hole. The worlds differed from one another by the length of the barriers (1-4) and by the locations of the agents, the holes, the tiles, and the barriers, all of which were determined randomly. We computed the number of steps that each agent carried out in pushing as many tiles as it could into the holes that were spread in an 11 x 11 grid. The simulation stopped when there were no more tiles or holes left, or whenever the number of time ticks was 400. For each world, the agents' performance was tested with the agents being given cooperation level 0, cooperation level 1, and so on, up to and including cooperation level 8. In each of the 666 simulations (74 worlds by 9 cooperation levels), all agents were given the same cooperation level. In 13 worlds, we found that the minimum number of steps done by the group of agents with some strictly positive cooperation level was less than the total work done by non-cooperative agents. In only 4 worlds was being cooperative actually harmful, i.e., agents cumulatively carried out more total steps to fill up the holes with any strictly positive cooperation level. In the other 57 worlds, the agents went the same number of steps when they behaved cooperatively and when they were given zero cooperation level. Therefore, in 17.56% of the worlds we tested, positive cooperation level was beneficial.

How confident can we be that this percentage reflects the real state of affairs for the overall space of worlds we were testing (i.e., 4 agents, 6 holes, etc.)? Using elementary statistical theory, we find that we can have 95% confidence that the error in probability will be less than plus or minus 4.42% for a sample size of 74 worlds (meaning we have 95% confidence that the real percentage of targeted worlds where a positive cooperation level is beneficial lies between 13.14% and 21.98%). Had we wanted to decrease the bound on the error of estimation to plus or minus 2%, we would have needed to exhaustively check 362 randomly generated worlds; to decrease the bound to plus or minus 1%, we would have needed to check 1448 worlds.¹

The simulations were run with agents programmed to push the closest tile into the nearest hole; the chances that they would pass sufficiently close to another (constrained) tile to activate the cooperation rule were fairly small. Were the cooperation level sufficiently high, of course, an agent would wander far off his path to free tiles, but then it is likely that the overall performance of the group would decrease (since so much extra work is being squandered on cooperation). One can imagine other scenarios where the likelihood of finding tiles to free would be increased; for example, if the agents were sent to push arbitrary preassigned tiles (instead of the closest tile), and might pass other, closer, tiles on the way. In these cases, beneficial cooperation is likely to be more prevalent. What is striking about the above, simple, experiment, is just how often a primitive, easy-to-calculate cooperative rule benefited the group as a whole. The improved performance was achieved without complicated computation or communication among the agents, and the rule itself was easily identifiable for the domain. However, how would we, in general, discover suitable cooperative rules for different domains? The following section explores this question.

Cooperative Rule Taxonomy

Which kinds of rules can be designed for a given domain? Which domain characteristics are relevant for designing cooperative state-changing rules?
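The sampling arithmetic behind these figures can be reconstructed in a few lines; a sketch of our own follows, observing that the quoted error bound of about 4.42 percentage points equals sqrt(p(1-p)/n) for the sample proportion p = 13/74, and that the same formula yields the quoted sample sizes for tighter bounds:

```python
import math

n, k = 74, 13                      # worlds tested, worlds where cooperation helped
p = k / n                          # sample proportion, about 0.1757

# Error bound on the estimated proportion: sqrt(p * (1 - p) / n).
bound = math.sqrt(p * (1 - p) / n)

def worlds_needed(error):
    """Sample size that shrinks the same bound to a target error E:
    n = p * (1 - p) / E**2."""
    return round(p * (1 - p) / error ** 2)

print(round(p * 100, 2))      # 17.57 (percentage of worlds where it helped)
print(round(bound * 100, 2))  # 4.42  (error bound in percentage points)
print(worlds_needed(0.02))    # 362   (worlds needed for a +/- 2% bound)
print(worlds_needed(0.01))    # 1448  (worlds needed for a +/- 1% bound)
```

The calculation reproduces all three of the paper's numbers (4.42%, 362 worlds, 1448 worlds) from the single observed proportion 13/74.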
We are interested in a general way of framing the problem of cooperative rules, that will make the analysis of a wide range of domains possible. We define a multi-agent deterministic finite automaton, based on the standard definition of a deterministic finite automaton (Lewis & Papadimitriou 1981).

Definition 1. A multi-agent deterministic finite automaton (MADFA) is a quintuple M = (K, Σ, δ, s, F):

- K is a finite set of states, the set of the multi-agent world states,
- Σ is an alphabet. The letters are the actions that the agents can take,
- δ : K × Σ̄ → K is the transition function. Σ̄ denotes (multi-agent) vectors of Σ (i.e., multi-agent actions),
- s is the initial state of the multi-agent world,
- F ⊆ K is the set of final states where all the agents' goals are achieved.

¹This still leaves open the question of which cooperation level should be used in a given world, since all we've shown is that some cooperation level is beneficial in some percentage of worlds. The optimal cooperation level may possibly be discovered through a learning algorithm, but the question remains for future research.

The language accepted by the multi-agent deterministic finite automaton is defined as the set of all the vector strings it accepts. For example, a word in the Tileworld domain, with three agents, could be {(north, south, east), (north, west, nil)}, where north ∈ Σ and (north, south, east) ∈ Σ̄. We consider two related multi-agent automata for each domain. One describes the domain in general, i.e., all the possible states and transitions that can exist in a given domain (it will be denoted by GMADFA).
The second is a sub-automaton of the first, that includes only those states and transitions permitted by the agents' actual programmed behavior (that is, the sub-automaton includes only those potential transitions that might actually occur, given the way agents are programmed to act; agents may still have a choice at run-time, but the sub-automaton includes all choices they might make. It will be denoted by SMADFA.). The specific initial and final states might change for different examples, but the same architecture can be studied to find cooperative rules regardless of the details of the examples. Assuming that the rule designer has sufficient information about the domain, he can formulate these two automata that describe the domain in general and the agents' potential behavior within the domain, and use the automata to deduce appropriate cooperative rules. The corresponding automata for two distinct domains follow:

The Tileworld SMADFA: K is the set of grid configurations of the Tileworld. Σ = {nil, south, east, north, west}. F is the set of states in which ((# tiles with degree of freedom > 0) = 0) ∨ (#holes = 0) holds.

The FileWorld SMADFA: The FileWorld domain consists of agents whose goals are to write to and read from shared files. Whenever an agent performs an action that accesses the file, the file is locked for the other agents (i.e., they can't access it). Σ = {pass-lock, read, write}. One agent, having the lock, can perform the write or read action and move the world into another state in which it can continue writing or reading indefinitely. If the agent performs pass-lock, then the lock is passed to another agent.

The Cooperative State-Changing Rules

The purpose of a cooperative state-changing rule is to enable agents to improve the world by moving it into a state in which the agents' work will be easier.
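The MADFA of Definition 1 is straightforward to sketch in code. The following is our own minimal rendering (the two-agent toy domain at the bottom is an invented illustration, not one of the paper's examples): a word is a sequence of joint actions, one action per agent, and acceptance means the run ends in a goal state.

```python
from dataclasses import dataclass

@dataclass
class MADFA:
    """Multi-agent DFA: states K, single-agent alphabet Sigma,
    transition function delta on joint-action vectors, initial
    state s, and final (goal) states F."""
    states: set
    alphabet: set            # Sigma: actions any one agent can take
    delta: dict              # (state, joint_action_tuple) -> state
    start: object            # s: initial world state
    finals: set              # F: states where all agents' goals hold

    def accepts(self, word):
        """word is a sequence of joint actions (one per agent)."""
        state = self.start
        for joint in word:
            assert all(a in self.alphabet for a in joint)
            state = self.delta[(state, joint)]
        return state in self.finals

# Toy two-agent world: one agent works first, then both work together.
m = MADFA(
    states={"s0", "s1", "s2"},
    alphabet={"work", "nil"},
    delta={
        ("s0", ("work", "nil")): "s1",
        ("s0", ("nil", "work")): "s1",
        ("s1", ("work", "work")): "s2",
    },
    start="s0",
    finals={"s2"},
)
print(m.accepts([("work", "nil"), ("work", "work")]))  # True
```

A GMADFA and its SMADFA would share this structure; the SMADFA simply carries a `delta` restricted to the transitions the agents' programmed behavior can actually produce.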
One way to make the agents' work easier is to shorten the possible paths in SMADFA leading from an initial state to a final state. A problem domain considered as an automaton can help a rule designer deduce useful cooperative rules. The rules then can be applied to different specific problems. For example, the cooperative rule found for the Tileworld above can be imposed on the agents in different specific scenarios. The given initial state of a particular Tileworld example doesn't matter, nor does it matter what the specific goals of each agent are; the analysis of how to improve agents' performance looks at the general actions that they can execute in the domain. We can shorten the words in three different ways (i.e., three categories of cooperative state-changing rules) by changing the SMADFA:

1. Find a shortcut by using the existing actions in the alphabet; i.e., look at GMADFA, at possible states and transitions that were not included in SMADFA, and add them to it,
2. Find a shortcut by adding to the alphabet new actions that the agents are capable of doing,
3. Cut loops by minimizing the times the agents can be in a loop. We might choose to parameterize the actions; thus, cutting loops could be expressed by a change to Σ (i.e., to the parameter that indicates the number of times the specific action can be taken).

The sociability rule presented for the Tileworld is of the first kind above; it finds a shortcut using existing actions, since the agents' original actions include the push action. Adding "extra work" to the Tileworld SMADFA means to explore other states and transitions to them such that paths from the initial state to a final state can be shorter. Passing the lock so that other agents will also have access to a file can be a cooperative state-changing rule for the FileWorld. This rule is of the third kind; it cuts a loop created by an agent who goes on reading or writing to a file. The FileWorld SMADFA can be modified by setting a limit to the number of characters that an agent can read from or write to a file before handing over the lock.

To develop appropriate rules for different domains, and to be able to evaluate these rules, we present below some general characteristics that may prove useful in creating cooperative state-changing rules:

state dependent - a rule is state dependent if the extra work that needs to be done can only be accomplished in specific states. For example, in the Tileworld a tile can be freed only if there is a constrained tile and there is an agent with appropriate cooperation level that could free it. Therefore, the rule we proposed above for the Tileworld is state dependent.

guaranteed - a rule is guaranteed if there is certain to be no harm (no increased global work) by executing it. In the Tileworld, the rule we presented is not guaranteed, because the direction to which the tile is freed is heuristically computed. In the FileWorld, given that an agent has returned its lock, it is guaranteed that any other agent could use it and hence benefit from it.

reversible - a rule is reversible if its effects can be undone. The Tileworld rule is reversible, since any agent can push a freed tile to be next to a barrier again. In contrast, adding information to what is known by a group of agents might be irreversible.

redundant - a rule is redundant if performing the extra work encompassed in the rule might cause the agents to stay in the same state. Consider, for example, the StudyWorld, in which the agents are students. One of the possible actions to be performed by an agent is to borrow a book from the library. In this world, a cooperative rule might consist of a student leaving a summary of the book he has borrowed from the library. In this case, the same summary might be left again by another student, making the rule redundant.

resource dependent - a rule is resource dependent if following it implies the use of consumable resources (e.g., filling the printer tray with paper, although you don't have to print).

Conclusions

We have presented a "rule of cooperative behavior" that can sometimes improve overall system performance in the multi-agent Tileworld. Agents are encouraged to move tiles away from barriers (even when these tiles do not contribute to their primary goal), as long as the amount of extra work required is not too great. The addition of this implicitly cooperative behavior does not require a great deal of extra computation on the part of agents, nor any communication whatsoever. Cooperation emerges cheaply from agents acting sociably, without the overhead of negotiation or coordination. Simulations were run that illustrated the benefits of this emergent cooperation. Although unlikely to produce optimal behavior (except by chance), the cooperative rule can improve performance in a non-trivial number of instances.

We have also shown how a world can be characterized by mapping it onto an automaton. We identified three kinds of cooperative state-changing rules that can be modeled as changes to the automaton. The principle of cooperative behavior extends to arbitrary domains, where system designers can identify aspects of global states that are generally desirable. In the Tileworld, it is generally desirable that tiles be unconstrained by barriers. In the blocks world, it is generally desirable that blocks be clear. The designers of agents can benefit by manufacturing rules of sociable behavior that encourage agents to carry out state transformations that tend to be socially desirable.

Future research will examine benefits to the system when the cooperation level of agents changes dynamically over time (for example, as a penalty mechanism aimed at uncooperative agents), how the subdivision of labor might also be affected by cooperative meta-rules, other criteria for qualifying cooperative rules, and stable sociability rules for multi-agent systems. We are also interested in looking for analytical ways to evaluate the cooperation level for a given domain. One way is to look at the cost of a task when the agents cooperate as a function of the cost of the original task, and to find the cooperation level that minimizes the new cost. Another way is to regard the cooperative behavior as a perturbation of the distribution of the amount of work performed by zero-cooperative agents.

References

Conry, S. E.; Meyer, R. A.; and Lesser, V. R. 1988. Multistage negotiation in distributed planning. In Bond, A. H., and Gasser, L., eds., Readings in Distributed Artificial Intelligence. San Mateo, California: Morgan Kaufmann Publishers, Inc. 367-384.

Davis, R. 1981. A model for planning in a multi-agent environment: steps toward principles for teamwork. Working Paper 217, Massachusetts Institute of Technology AI Laboratory.

Durfee, E. H. 1988. Coordination of Distributed Problem Solvers. Boston: Kluwer Academic Publishers.

Ephrati, E., and Rosenschein, J. S. 1993. Multi-agent planning as a dynamic search for social consensus. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, 423-429.

Ferber, J., and Drogoul, A. 1992. Using reactive multi-agent systems in simulation and problem solving. In Avouris, N. M., and Gasser, L., eds., Distributed Artificial Intelligence: Theory and Praxis. Kluwer Academic Press. 53-80.

Genesereth, M. R.; Ginsberg, M. L.; and Rosenschein, J. S. 1986. Cooperation without communication. In Proceedings of the National Conference on Artificial Intelligence, 51-57.
Gmytrasiewicz, P., and Durfee, E. H. 1992. A logic of knowledge and belief for recursive modeling: Preliminary report. In Proceedings of the Tenth National Conference on Artificial Intelligence, 628-634.

Grosz, B., and Kraus, S. 1993. Collaborative plans for group activities. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, 367-373.

Hanks, S.; Pollack, M. E.; and Cohen, P. R. 1993. Benchmarks, test beds, controlled experimentation, and the design of agent architectures. AI Magazine 17-42.

Kraus, S., and Wilkenfeld, J. 1991. Negotiations over time in a multi agent environment: Preliminary report. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, 56-61.

Kraus, S. 1993. Agents contracting tasks in non-collaborative environments. In Proceedings of the Eleventh National Conference on Artificial Intelligence, 243-248.

Kreifelts, T., and Martial, F. 1990. A negotiation framework for autonomous agents. In Proceedings of the Second European Workshop on Modeling Autonomous Agents and Multi-Agent Worlds, 169-182.

Kuwabara, K., and Lesser, V. R. 1989. Extended protocol for multistage negotiation. In Proceedings of the Ninth Workshop on Distributed Artificial Intelligence, 129-161.

Lewis, H. R., and Papadimitriou, C. H. 1981. Elements of the Theory of Computation. Prentice-Hall, Inc.

Malone, T.; Fikes, R.; and Howard, M. 1988. Enterprise: A market-like task scheduler for distributed computing environments. In Huberman, B. A., ed., The Ecology of Computation. Amsterdam: North-Holland Publishing Company. 177-205.

McArthur, D.; Steeb, R.; and Cammarata, S. 1982. A framework for distributed problem solving. In Proceedings of the National Conference on Artificial Intelligence, 181-184.

Montgomery, T. A.; Lee, J.; Musliner, D. J.; Durfee, E. H.; Damouth, D.; and So, Y. 1992. MICE Users Guide.
Artificial Intelligence Laboratory, Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, Michigan.

Pollack, M. E., and Ringuette, M. 1990. Introducing the Tileworld: Experimentally evaluating agent architectures. In Proceedings of the National Conference on Artificial Intelligence, 183-189.

Shoham, Y., and Tennenholtz, M. 1992a. Emergent conventions in multi-agent systems: initial experimental results and observations (preliminary report). In Principles of Knowledge Representation and Reasoning: Proceedings of the Third International Conference (KR-92).

Shoham, Y., and Tennenholtz, M. 1992b. On the synthesis of useful social laws for artificial agent societies (preliminary report). In Proceedings of the Tenth National Conference on Artificial Intelligence.

Smith, R. G. 1978. A Framework for Problem Solving in a Distributed Processing Environment. Ph.D. Dissertation, Stanford University.

Sycara, K. 1988. Resolving goal conflicts via negotiation. In Proceedings of the Seventh National Conference on Artificial Intelligence, 245-250.

Sycara, K. 1989. Argumentation: Planning other agents' plans. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 517-523.

Tennenholtz, M., and Moses, Y. 1989. On cooperation in a multi-entity model. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 918-923.

Zlotkin, G., and Rosenschein, J. S. 1993a. Compromise in negotiation: Exploiting worth functions over states. Technical Report 93-3, Leibniz Center for Computer Science, Hebrew University.

Zlotkin, G., and Rosenschein, J. S. 1993b. A domain theory for task oriented negotiation. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, 416-422.

Coordination 413
Forming Coalitions in the Face of Uncertain Rewards

Steven Ketchpel
Stanford University Computer Science Department
Stanford CA, 94305
ketchpel@cs.stanford.edu

Abstract

When agents are in an environment where they can interact with each other, groups of agents may agree to work together for the benefit of all the members of the group. Finding these coalitions of agents and determining how the joint reward should be divided among them is a difficult problem. This problem is aggravated when the agents have different estimates of the value that the coalition will obtain. A "two agent auction" mechanism is suggested to complement an existing coalition formation algorithm for solving this problem.

1. The Problem

Given a set of agents with different abilities and different information, there may be many opportunities for cooperation among the agents that will benefit all. Even more likely is the chance that a coalition can form, a subset of the agents working together, benefiting each agent in the group perhaps at the expense of the community as a whole. An agent following the economic principle of rationality will attempt to form a coalition which will maximize its own utility. However, the other agents in these coalitions will have their own preferences, and a complicated cycle of dependencies emerges. Agents only want to commit to a coalition once all of the other agents have committed. The final division of the agents into coalitions should be stable in the sense that no subset of the agents could leave their current coalitions to form a new one yielding all of the agents in that new coalition a higher utility than they obtain from their previous coalitions. For example, imagine there are a number of people interested in starting new hi-tech companies.

The research was partially supported by a National Defense Science and Engineering Graduate Fellowship.
The ideas contained within do not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred.

There are many possible combinations of people that could work together, but getting them to commit to form a new company is difficult. A scientist with a hot new product idea doesn't want to commit unless a company has the necessary start-up capital. But financial backers typically require a thorough evaluation of the product and prefer a company president who has clout in the industry. The president may have reservations about working with certain financial officers, and so on. Even after the involved parties do agree to work together, bargaining over how to share the profits can reveal diverging perceptions about the relative importance of the different contributors. The coalition formation process, in this context, would describe which people should work together to start new companies and would also suggest a way to divide the profits among the partners. Stability in this scenario would mean that it would be unprofitable for one company to hire away workers from another, and there is no incentive for workers to get together (possibly with people from other companies) to start a new company.

To evaluate a system formally, the agents a1,...,aN are divided into a partition P containing coalitions C1,...,CM such that every agent is a member of exactly one coalition. The payoff to an agent is a function u(P, a) of both the partition and the agent. For P to be stable, there must not be any other partition P' forming coalitions C'1,...,C'M' such that there exists a C'i in P' with u(P', aj) > u(P, aj) for all aj in C'i. If there were such a C'i, the agents of that coalition would desert their current coalitions and form C'i.

Determining how to divide the utility among the agents in the coalition is a problem that has received some attention in both game theory and distributed AI. A summary of the related research appears in Section 2.
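The stability condition just given can be checked mechanically. The sketch below (in Python; the function names and the toy payoff table are ours, not the paper's) tests whether any coalition in an alternative partition would make every one of its members strictly better off:

```python
def blocks(p_alt, p_cur, u):
    """True if some coalition in p_alt would make all of its members
    strictly better off than they are under p_cur."""
    return any(all(u(p_alt, a) > u(p_cur, a) for a in coalition)
               for coalition in p_alt)

def is_stable(p_cur, candidate_partitions, u):
    """A partition is stable if no alternative partition contains a
    coalition whose members would all strictly gain by deserting."""
    return not any(blocks(p_alt, p_cur, u)
                   for p_alt in candidate_partitions if p_alt != p_cur)

# Toy instance: three agents, three candidate partitions, payoffs by table.
P_PAIR    = (frozenset("ab"), frozenset("c"))
P_SINGLES = (frozenset("a"), frozenset("b"), frozenset("c"))
P_GRAND   = (frozenset("abc"),)
PAYOFF = {P_PAIR:    {"a": 5, "b": 5, "c": 1},
          P_SINGLES: {"a": 2, "b": 2, "c": 1},
          P_GRAND:   {"a": 3, "b": 3, "c": 3}}
u = lambda p, a: PAYOFF[p][a]
PARTS = [P_PAIR, P_SINGLES, P_GRAND]
```

In this toy instance the all-singletons partition is unstable, because agents a and b would both strictly gain by pairing off, while the paired partition is stable.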
Many of these sources make the assumption that the value of any coalition is common knowledge. In game theoretic terms, there is a valuation function V : 2^A → R, which takes any possible subset of the agent pool A, and returns a real value representing the utility which is split among the members of the coalition. For the sake of simplicity, we assume that this utility is paid by an entity outside the system of agents, and that none of the agents have any inherent interest in achieving the goals, beyond merely fulfilling the contract to receive payment.

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

The main contribution of this paper is to examine the case where the agents do not have access to this function, but instead have different expectations about the value. Section 3 analyzes one of the most widely used division mechanisms, the Shapley value, and its inherent problems. Section 4 proposes an alternative approach which does not make the common knowledge assumption. The "Two Agent Auction" mechanism and the properties proved about it constitute the original research contribution. Section 5 concludes with directions for further research.

2. Related Research

At the 1993 European Workshop on "Modeling Autonomous Agents in a Multi-Agent World", three papers on coalition formation were presented [(Ketchpel 1993), (Shechory & Kraus 1993), (Zlotkin & Rosenschein 1993)]. The last two assumed super-additive domains in which adding an additional agent to a coalition can never reduce the utility of that coalition. Zlotkin and Rosenschein (Zlotkin & Rosenschein 1993) made the further stipulation that utility was not directly transferable between agents. All three papers assumed that the agents had common knowledge of the value function of the game and advocated the use of the Shapley value to divide the utility among the members of the coalition.
There is another body of literature in economics which addresses the division of goods or costs among the members of a society. Raiffa includes a chapter in his book (Raiffa 1982) on fair division and includes an analysis where the involved parties place different values on the goods to be divided. Ephrati and Rosenschein (Ephrati & Rosenschein 1991) use another device from economics known as the Clarke tax to allocate costs among multiple agents deciding among alternatives, charging each agent only in proportion to the amount it changed the group decision. The WALRAS system (Wellman 1993) uses a market scheme to reach an equilibrium among buyers and sellers of a commodity in the context of distributed action. However, none of these works analyzes the possibility of collusion by a coalition. This paper attempts to unite these two strands of research.

3. The Shapley Value and its Problems

The function u(P, a) determines the amount of utility that agent a receives from its membership in its coalition in P. It is assumed that the distribution is efficient and no utility is lost in the division, so the sum of u(P, a) over all agents a in a coalition C equals v(C). There have been a number of suggestions for such a distribution function u(P, a). One of the earliest and most widely used is due to Shapley (Shapley 1953). The Shapley value is calculated by looking at each of the different dynamics that could lead to the coalition under consideration. Agents either "found" a coalition if they are the initial member, or else join a coalition founded by another member. The permutations of the members in the coalition form the set of formation dynamics. Each permutation describes an order in which the coalition could have been formed. Each agent adds value to a given formation process based on the marginal utility contributed by that agent. For example, if agent A is joining agents B, C, and D, and v(ABCD) = 100 and v(BCD) = 60, then A's marginal contribution under this formation ordering is v(ABCD) - v(BCD) = 40.
If agent A joins a coalition started by B, and they are subsequently joined by agents C and D, A's marginal contribution is v(AB) - v(B). There are 22 other permutations that also might lead to the final coalition ABCD. By averaging A's marginal contribution across all the different formation possibilities, A's Shapley value is obtained. The underlying assumption is that all of the different formation processes are equally likely and, therefore, the marginal contributions for each formation are weighted equally. This calculation ensures that the sum of the Shapley values for all of the members of the coalition will be exactly the coalition's combined utility.

The Shapley value has several disadvantages. First, the most efficient known calculation is exponential, though efficient means to calculate the expectation of the Shapley value over a large number of interactions are known (Zlotkin & Rosenschein 1994). Second, it assumes common knowledge of the value that the coalition will obtain if it works as a unit. In more realistic assessments, each agent might have a different expectation for the value of the collaboration. To address these uncertainties more realistically, the value function should be dependent on which agent is performing the determination. That is, for two agents A and B, vA(AB), A's estimate of the value of coalition AB, is not necessarily equal to B's estimate vB(AB), and both of these values may differ from the utility that will actually result from the coalition, which is denoted v(AB) (and is the same v(AB) used above). The potential disparity between these values (the actual utility and the various agents' estimates of it) opens up a further problem. One agent may overestimate the value, and promise its potential coalition partner a "share" of utility larger than the total obtained by the whole coalition. When the obtained utility fails to meet the rosy predictions of the optimistic agent, who is penalized?
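The permutation-averaging definition of the Shapley value translates directly into a brute-force computation, exponential in the number of agents as noted above. A sketch, with function names of our choosing:

```python
from itertools import permutations

def shapley(agents, v):
    """Shapley value: average, over every join order (permutation) of the
    coalition, of each agent's marginal contribution v(S + {a}) - v(S)."""
    total = {a: 0.0 for a in agents}
    orders = list(permutations(agents))
    for order in orders:
        joined = frozenset()
        for a in order:
            total[a] += v(joined | {a}) - v(joined)
            joined |= {a}
    return {a: total[a] / len(orders) for a in agents}
```

With the symmetric valuation v(S) = |S|^2 used here purely for illustration, each of three agents receives a share of 3, and the shares sum to the grand coalition's value of 9, illustrating the efficiency property mentioned above.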
4. Coalition Formation Using a Two Agent Auction

The problem that we are attempting to solve is two-fold: first, to determine coalitions of agents that will work together; second, to decide how to reward the agents, that is, what payment each agent will receive. These problems are complicated because the search space is very large (an exponential number of coalitions) and there are many dependencies among the decisions. For example, an agent's offer to join a coalition may depend on the agents already in the coalition, the amount of the offer, offers from other coalitions, and the future prospects of this coalition's merging with other coalitions. Finally, the agents may have different perceptions about the value of collaboration and their respective contributions to the group's outcome.

The solution that we outline simplifies the problem along several dimensions, which we hope to address in future work. The basic model that we assume is an economic one of rational agents entering into contracts that specify guaranteed payments. The agents may have different bargaining power due to their relative contributions to coalitions, but we assume that they all play symmetric roles in the bargaining process. The prescribed process consists of the following steps:

1. Agents exchange initial offers to other available agents. These offers will lead to a possible agreement and contract among the agents.
2. Agents evaluate the offers they received, and rank them in order of preference, based on their expected profit.
3. Using these preference orderings, the agents attempt to pair off into coalitions of size 2 with the most attractive potential partners.
4. The newly formed pairs enter a "two agent auction" that makes one agent the manager, bearing the risk and given the opportunity to bargain on behalf of the pair in future negotiations. The non-managing agent receives a fixed payment for its role in the coalition.
The final agreement price is a function of the initial offers and the agents' valuations of the collaborative effort.

5. The process repeats, with the pairs formed in one round playing the role of individual agents in the next.

4.1. A Coalition Formation Algorithm

In previous work (Ketchpel 1993), we noted that the coalition formation problem is related to the stable marriage problem (Gusfield & Irving 1989). In the stable marriage problem, an equal number of men and women seek mates. Each participant has a preference ordering among the candidates, and a stable matching is generated when each man is paired with a woman and there is no blocking pair of a man and woman that prefer to be paired with each other to being paired with their current partners. A stable matching may be found for any instance of the problem in time O(n^2), where n is the number of people involved. The coalition formation process for coalitions of size 2 is equivalent to a variant of the stable marriage problem known as the stable roommate problem with unacceptable partners. In the stable roommate problem, the two classes of men and women are conflated to a single class, agents. When unacceptable partners are allowed, an agent prefers being unpaired to being paired with certain other agents. A pairing which matches any agent with an unacceptable partner is inherently unstable. Centralized versions of the stable roommate problem with unacceptable partners find stable matchings (when they exist) in time O(n^2). However, in a setting of autonomous, distrustful agents, a centralized algorithm is not a viable solution. In (Ketchpel 1993), a decentralized alternative is proposed. The modified algorithm is a greedy process where each agent proceeds down its preference list extending an offer to the top agent it hasn't previously asked, accepting offers that improve its utility, and rejecting all others.
At the end of a round, all of the pairs form proto-coalitions, which may join other proto-coalitions in the future. They select one of the members to act as the head of the coalition. In the subsequent rounds, the process repeats, with each coalition head extending offers to the heads of other coalitions and to agents that have not yet been paired. The process repeats until no new associations are formed. The algorithm takes time O(n^3) for n agents. Although stability is not guaranteed, an agent will never settle for a less desirable coalition partner unless all of the better alternatives (taking the previous rounds of formation as given) have turned it down once already. Even if the other possible partners have turned it down in the past, they may later be willing to accept such a coalition. The agent will never approach these possible partners again, so unstable pairings may form. For a more complete description and complexity analysis, see (Ketchpel 1993).

4.2. The Two Agent Auction

One mechanism to solve the division of utility in the face of uncertainty is to assign one of the agents responsibility for managing the group actions. The manager is required to meet the offers that it extended to the various coalition members, even if the coalition's actual utility were less than expected. In exchange for undertaking this risk, the managing agent would receive all of the utility accruing to the coalition, and would earn a profit if this amount were greater than the salaries it paid. Also, as the manager, it has the authority to negotiate on behalf of the group to form larger coalitions. The algorithm described in Section 4.1 has the property that each of the proto-coalitions has exactly two entities (which may be agents or coalitions). Therefore, each of the auctions occurs between two agents, the managers of the coalitions that are merging.
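The round-based greedy pairing of Section 4.1 might be sketched as follows. This is a much-simplified, synchronous, and centralized rendering of the decentralized process: it handles only a pairing pass over individual agents, not the later rounds between coalition heads, and all names are ours:

```python
def greedy_pairing(prefs):
    """Greedy pairing sketch.  prefs[a] is a's preference list (best
    first); agents absent from the list are unacceptable partners.
    Each round, every free agent proposes to the best free partner it
    has not yet asked; recipients accept their favorite acceptable
    proposer.  Rounds repeat until no proposals remain."""
    free = set(prefs)
    asked = {a: set() for a in prefs}
    pairs = {}
    while True:
        # Each free agent proposes to its top not-yet-asked free partner.
        proposals = {}
        for a in sorted(free):
            for b in prefs[a]:
                if b in free and b not in asked[a]:
                    asked[a].add(b)
                    proposals.setdefault(b, []).append(a)
                    break
        if not proposals:
            break
        # A recipient accepts its most-preferred acceptable proposer.
        for b, offers in sorted(proposals.items()):
            if b not in free:
                continue
            acceptable = [a for a in offers if a in prefs[b] and a in free]
            if acceptable:
                best = min(acceptable, key=prefs[b].index)
                pairs[b], pairs[best] = best, b
                free -= {b, best}
    return pairs
```

As in the paper's algorithm, an agent never re-approaches a partner it has already asked, so the result is not guaranteed to be a stable matching.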
The two managing agents A and B begin the bargaining process using the initial offers that they extended to each other when the preference lists for the previous step were made. These offers will not necessarily add up to either agent's estimate of v(AB), nor need they total the actual v(AB) value. The offers are adjusted according to the method described below and summarized in Figure 1. The two agents are guaranteed to converge on an agreeable value. The non-managing agent gets this agreed value, regardless of the actual utility of the coalition. The managing agent receives the balance of the utility obtained by the group. We use O(A, B) to represent the amount of the initial offer which agent A extended to agent B; similarly, O(B, A) is B's initial offer to A. In selecting the agent to be the manager, there are four cases that may occur:

1. Both agents A and B want to be the manager, based on the offers and their beliefs about the actual value of the collaboration. So, vA(AB) - O(A, B) > O(B, A) and vB(AB) - O(B, A) > O(A, B). The agents reach agreement through an ascending auction.

2. Agent A wants to be the manager, and agent B is happy to agree. So, vA(AB) - O(A, B) > O(B, A) and vB(AB) - O(B, A) ≤ O(A, B). In this case, A is selected to be the manager.

3. Symmetric to 2, with B wanting to be the manager.

4. Neither agent wants to be the manager, because both expect better payoffs if the other agent is the manager. So, vA(AB) - O(A, B) < O(B, A) and vB(AB) - O(B, A) ≤ O(A, B). The agents reach agreement by entering a descending auction.

In the first case, there needs to be further negotiation over who will manage the contract. To settle the difference, both agents incrementally increase their offers to the other coalition agent until one or the other is willing to forgo the opportunity to be the manager. In essence, the two agents are "bidding" for the right to manage the contract.

BEGIN.
k := 0.
/* k is the number of rounds of negotiation conducted */
δ := 1. /* δ is the "precision" of the negotiation */
IF vA(AB) - O(A, B) > O(B, A) AND vB(AB) - O(B, A) > O(A, B)
    I := +1. /* Reduce Case 1 to 2 or 3 */
    WHILE (vA(AB) - (O(A, B) + I*k*δ) > O(B, A) + I*k*δ) AND
          (vB(AB) - (O(B, A) + I*k*δ) > O(A, B) + I*k*δ):
        k := k + 1.
    END-WHILE.
ELSE IF vA(AB) - O(A, B) ≤ O(B, A) AND vB(AB) - O(B, A) ≤ O(A, B)
    I := -1. /* Reduce Case 4 to 2 or 3 */
    WHILE (vA(AB) - (O(A, B) + I*k*δ) < O(B, A) + I*k*δ) AND
          (vB(AB) - (O(B, A) + I*k*δ) ≤ O(A, B) + I*k*δ):
        k := k + 1.
    END-WHILE.
END-IF.
IF vA(AB) - (O(A, B) + I*k*δ) ≥ O(B, A) + I*k*δ
    A is manager, B gets O(A, B) + I*k*δ. /* Case 2 */
ELSE
    B is manager, A gets O(B, A) + I*k*δ. /* Case 3 */
END-IF.

Figure 1: Algorithm for selecting manager and determining utility division

In the ascending auction called for in the first case, at each iteration of the WHILE loop in Figure 1, both agents increase their offers by δ. The bidding stops when either agent finds that the "opposing" agent (although they are coalition partners, they are competing with each other to maximize individual shares of the joint gain) has extended an offer that is greater than it would expect if it managed the contract. Note that there is some asymmetry in the roles of the agents. In one case the test is a strict inequality, while in the other case, the test is less than or equal to. We arbitrarily select the agent that initiates the proposal to be agent A. In the fourth case in which neither agent wants to be the manager, the agents enter an auction situation similar to case 1, but instead of incrementing their offers, they decrement them. At some point one of the agents will decide that with this new lower offer, it is better to accept the managing role than the small amount just promised by the other agent. This agent is made the manager, and its last offer is considered the agreement value.
As an example, assume that agents A and B have agreed to form a coalition, and are trying to determine the distribution of the utility from the joint effort. Agent A expects that the value of the outcome will be 100, so vA(AB) = 100. Agent A realizes that agent B is doing a larger share of the work, so is willing to offer agent B a larger share of the utility, in this case, O(A, B) = 60. Agent B is more pessimistic about the expected outcome of their joint effort, expecting only 80 units of utility, vB(AB) = 80. Agent B thinks that agent A's contribution is minimal and is only willing to give agent A 15 units, O(B, A) = 15. The case analysis outlined above shows that this example falls in the first case, and both agents A and B want to manage the contract. Agent A's expected profit if it is the manager is 40 (vA(AB) - O(A, B)); if A accepts B's offer, A will only obtain 15. Agent B carries out a similar analysis and sees that its expected return of 65 if it manages the contract (vB(AB) - O(B, A)) exceeds A's offer of 60. At this point, the negotiation enters the stage of incrementally increasing offers. The progress of these iterative offers is shown in Figure 2. At round 3, B determines that it expects to get more if it allows A to manage the contract, so A is obligated to pay B 63 units of utility when B accomplishes its share of the work, and agent A will get the actual amount v(AB). If this amount is less than 63, A still must pay B the promised 63 units. If v(AB) is less than 81, then A would have been better off accepting B's offer of 18, rather than receiving v(AB) while paying agent B 63.

Round | A's value if manager | A's value accepting B's offer | B's value if manager | B's value accepting A's offer
1 | 39 | 16 | 64 | 61
2 | 38 | 17 | 63 | 62
3 | 37 | 18 | 62 | 63

Figure 2: Sequence of offers between agents

4.3 Analysis of the Two Agent Auction

Although the negotiation is described above in an incremental process, the result is deterministic. The agent with the higher estimate of v(AB) always becomes the manager, as is shown in Figure 3.
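The procedure of Figure 1 can be transcribed directly into executable form. In this sketch (function and variable names are ours) the worked example above plays out as described: A becomes the manager and owes B 63.

```python
def two_agent_auction(vA, vB, oAB, oBA, delta=1):
    """Figure 1's auction.  vA, vB: each agent's estimate of v(AB);
    oAB: A's initial offer to B; oBA: B's initial offer to A.
    Returns (manager, payment owed to the non-managing agent)."""
    k, I = 0, 0
    if vA - oAB > oBA and vB - oBA > oAB:       # Case 1: both want to manage
        I = +1                                   # ascending auction
        while (vA - (oAB + I * k * delta) > oBA + I * k * delta
               and vB - (oBA + I * k * delta) > oAB + I * k * delta):
            k += 1
    elif vA - oAB <= oBA and vB - oBA <= oAB:   # Case 4: neither wants to
        I = -1                                   # descending auction
        while (vA - (oAB + I * k * delta) < oBA + I * k * delta
               and vB - (oBA + I * k * delta) <= oAB + I * k * delta):
            k += 1
    if vA - (oAB + I * k * delta) >= oBA + I * k * delta:
        return "A", oAB + I * k * delta          # Case 2: A manages
    return "B", oBA + I * k * delta              # Case 3: B manages
```

For the example's inputs, the agreement price 63 lies within δ = 1 of (60 + 80 - 15) / 2 = 62.5, consistent with the bound proved in Theorem 2, and in every case the agent with the higher estimate wins the managing role, as Theorem 1 states.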
Moreover, Theorem 2 in Figure 4 shows that the agreement price is also determined by the initial offers and valuations. If the agents are willing to share their estimates of v(AB) with their initial offers, they can directly calculate the differences between the evaluations of the agents' contributions and determine which agent should be the manager and what the final offer to the non-managing agent should be. If the v(AB) estimates are not shared, the iterative method described above will yield the same result, though the manager's estimate of v(AB) will never become public knowledge. The choice of incremental versus direct calculation is dependent on the domain, and the tradeoff between the benefit of privacy of information against the cost of more communication.

Theorem 1: Between two agents A and B, the one with the higher valuation of v(AB) will always win the managing role.

The auction stops after k rounds, when either:
(1) O(B, A) + k*δ > vA(AB) - (O(A, B) + k*δ); B manages, or
(2) O(A, B) + k*δ ≥ vB(AB) - (O(B, A) + k*δ); A manages.

If (1) is the reason for stopping,
(1a) O(B, A) + k*δ > vA(AB) - (O(A, B) + k*δ) and
(1b) O(A, B) + k*δ < vB(AB) - (O(B, A) + k*δ)

Adding k*δ to both sides of (1a) and (1b),
(1a') O(B, A) + 2*k*δ > vA(AB) - O(A, B)
(1b') O(A, B) + 2*k*δ < vB(AB) - O(B, A)

Adding O(A, B) to both sides of (1a') and O(B, A) to both sides of (1b'),
(1a'') O(A, B) + O(B, A) + 2*k*δ > vA(AB)
(1b'') O(A, B) + O(B, A) + 2*k*δ < vB(AB)

By transitivity of (1a'') and (1b''), vA(AB) < vB(AB), and in (1), B is the manager.

If (2) is the reason for stopping,
(2a) O(B, A) + k*δ ≤ vA(AB) - (O(A, B) + k*δ) and
(2b) O(A, B) + k*δ ≥ vB(AB) - (O(B, A) + k*δ)

The proof proceeds as above, replacing the strict inequality with non-strict inequality, yielding vB(AB) ≤ vA(AB), and in case (2), A is the manager.

So, in both cases, the agent with the higher estimate of v(AB) is the manager.
Figure 3: Agent with higher estimate of v(AB) is manager

Theorem 2: The agreement price (AP) will be within δ of (O(M, N) + vN(MN) - O(N, M)) / 2, where M is the manager and N is the other (non-managing) agent.

The auction will stop in round k when
O(M, N) + k*δ ≥ vN(MN) - (O(N, M) + k*δ)
O(M, N) + 2*k*δ ≥ vN(MN) - O(N, M)
2*k*δ ≥ vN(MN) - O(N, M) - O(M, N)
k = ⌈(vN(MN) - O(N, M) - O(M, N)) / (2*δ)⌉

The offer after k rounds of negotiation is O(M, N) + k*δ, so
AP = O(M, N) + ⌈(vN(MN) - O(N, M) - O(M, N)) / (2*δ)⌉ * δ
AP ≥ O(M, N) + (vN(MN) - O(N, M) - O(M, N)) / 2 = (O(M, N) + vN(MN) - O(N, M)) / 2
AP < O(M, N) + ((vN(MN) - O(N, M) - O(M, N)) / (2*δ) + 1) * δ = (O(M, N) + vN(MN) - O(N, M)) / 2 + δ

So, AP is within δ of (O(M, N) + vN(MN) - O(N, M)) / 2.

Figure 4: Agreement price is function of offers and v(AB) estimates

The agreement price that is reached is a function of the initial offers and the estimates of v(AB), as Figure 4 shows. From the final value, it appears that both agents will extend initial offers of 0. The agreement price increases with O(M, N), the initial offer of the manager to the non-manager; therefore, an initial offer of 0 would minimize the agreement price with respect to this variable. Likewise, the agreement price decreases as the offer of the non-manager to the manager increases, so an initial offer of 0 would maximize the agreement price. However, this analysis is too simplified. The initial offers play a second role in the coalition formation process, namely determining the preference lists. Therefore, the agents need to extend sufficiently high offers to each other to ensure that the other agent will agree to form a coalition. The auction mechanism (and the desire to minimize the initial offers) is only needed after two agents have agreed to form a
The Impact of Locality and Authority on Emergent Conventions: Initial Observations

James E. Kittock*
Robotics Laboratory
Computer Science Department
Stanford University
Stanford, CA 94305
jek@cs.stanford.edu

Abstract

In the design of systems of multiple agents, we must deal with the potential for conflict that is inherent in the interactions among agents; to ensure efficient operation, these interactions must be coordinated. We extend, in two related ways, an existing framework that allows behavioral conventions to emerge in agent societies. We first consider localizing agents, thus limiting their interactions. We then consider giving some agents authority over others by implementing asymmetric interactions. Our primary interest is to explore how locality and authority affect the emergence of conventions. Through computer simulations of agent societies of various configurations, we begin to develop an intuition about what features of a society promote or inhibit the spontaneous generation of coordinating conventions.

Introduction

Imagine a society of multiple agents going about their business: perhaps it is a team of construction robots assembling a house. Perhaps it is a group of delivery robots responsible for carrying books, copies, or medical supplies throughout a building. Or perhaps it is a society of software agents, working to collect data from diverse sources in "information space." Whatever the nature and environment of these agents, they will find it necessary to interact with one another. There is an inherent potential for conflict in such interactions; for example, two robots might attempt to move through a doorway at the same time, or two software agents might try to modify the same file. As designers, we have achieved coordination when agents' actions are chosen specifically to prevent such conflicts. Conventions are a straightforward means of implementing coordination in a multi-agent system.
When several conflicting strategies are available to agents for approaching a particular task, a convention specifies a common choice of action for all agents.

*This research was supported in part by the Air Force Office of Scientific Research under grant number F49620-92-J-0547-PO0001 and by the National Science Foundation under grant number IRI-9220645.

In general, designing all necessary conventions into a system or developing a centralized control mechanism to legislate new conventions is a difficult and perhaps intractable task (Shoham & Tennenholtz 1992b). It has been shown that it is possible for an agent society to reach a convention without any centralized control if agents interact and learn from their experiences (Shoham & Tennenholtz 1992a). Conventions thus achieved have been called "emergent conventions", and the process for reaching them has been dubbed "co-learning" (Shoham & Tennenholtz 1993).

In previous work on the emergence of conventions through co-learning, it was assumed that each agent in a society is equally likely to interact with any other agent (Shoham & Tennenholtz 1992a). This seems an unreasonable assumption in the general case, and we consider ways to extend the framework by allowing for non-uniform interaction probabilities. Conceptually, we can divide limitations on interactions into two categories: those due to inherent separation (geographic distance, limited communication, etc.) and those due to organizational separation (division of labor, segregation of classes, etc.). These notions are two sides of one coin; they are different forms of locality within a multi-agent society.

Previous work also assumed that agents have equal influence on one another's behavior. However, in practice this is not generally true; multi-agent systems often have variations in authority.
We model differences in authority by implementing asymmetrical interactions in which the less influential agent always receives feedback as a result of its actions, while the more influential agent receives feedback with some probability. As the probability that an agent receives "upward feedback" from its subordinates decreases, the agent's authority increases. This is intended to model a situation in which the agent can act with impunity, choosing strategies based only upon their perceived effects on other agents (this can also model the case in which an agent is simply stubborn and deliberately chooses to ignore feedback). We do not claim that this is an exhaustive treatment of the notion of authority, but this asymmetry of interaction is one aspect of authority that is strongly related to the topological organization of the agent society.

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Our primary aim in this paper is to explore how various forms of locality and authority affect the emergence of conventions in a multi-agent society. We note that our goal is not to model human society; rather, we seek to gain a preliminary understanding of what global properties we might expect in a society of artificial agents that adapt to one another's behavior.

The Basic Framework
Our explorations of multi-agent societies take place within a formal model that allows us to capture the essential features of agent interactions without becoming lost in unnecessary detail. In particular, we distill the environment to focus on the interactions between agents: each agent's environment consists solely of the other agents in the system. This allows us to be precise about the effects of agents on one another. In this model, agents must choose from a finite repertoire of strategies for carrying out an action.
When agents interact, they receive feedback based on their current strategies; this simulates the utility various situations would have to an agent (for example, a robot might get negative feedback for colliding with an obstacle and positive feedback for completing a task). An agent may update its strategy at any time, but must do so based only upon the history of its feedback (its "memory"). As designers in pursuit of emergent coordination in an agent society, our ultimate goal is for all agents to adopt the same strategy. Thus, we must find an appropriate strategy update rule that causes a convention to arise from mixed strategies.

We have limited our preliminary investigation to pairwise interactions, and we can write the possible outcomes of agent interactions as a matrix in which each entry is the feedback that the agents involved will receive as a result of their choice of strategies.1 We model coordination by a feedback matrix in which two interacting agents receive positive feedback when their strategies match and negative feedback when they differ. In this case, agents have two strategies from which to choose. It is not important which particular strategy a given agent uses, but it is best if two interacting agents use the same strategy.2 A simplified example of such a situation from a mobile robotics domain is deciding who enters a room first: a robot going in or a robot going out. If some robots use one strategy and some robots use the other, there will be much unnecessary maneuvering about (or knocking of heads) when two robots attempt to move through a door simultaneously. If all robots use the same strategy, the system will run much more smoothly.

1. Although the matrix we use to model feedback is analogous to the payoff matrix formalism in game theory, it is important to note that we assume neither that agents are "rational" nor that they can access the contents of the matrix directly.
2. We limit our discussion here to the two-strategy case, but the results are qualitatively similar when more strategies are available to agents.
This is reflected in the matrix entries: there is positive feedback for two agents using the same strategy and negative feedback for two agents using different strategies. Since there is no a priori preference between the two available strategies, the feedback matrix is symmetric with respect to them.

Agents update their strategies based upon the contents of a finite memory. Each memory element records the time of an interaction, the strategy used by the agent in that interaction, and the feedback the agent received as a result of that interaction. When an agent receives new feedback, it discards its oldest memory to maintain the memory at a fixed size. Currently, we make the rather weak assumption that interactions are anonymous; although this reduces the amount of information available to each agent, we believe that exploring the behavior of societies of simple agents will yield insight into the behavior we can expect from more complex agents.

For our preliminary investigations, we have chosen to use a learning rule similar to the Highest Cumulative Reward rule used by Shoham and Tennenholtz (Shoham & Tennenholtz 1993). To decide which strategy it will use, an agent first computes the cumulative reward for each strategy by summing the feedback from all interactions in which it used that strategy, and then chooses the strategy with the highest cumulative reward (HCR). There are, of course, many other possible learning rules agents could use, including more sophisticated reinforcement learning techniques such as Q-learning (Watkins 1989); however, using the simpler HCR rule fits with our program of starting with a simpler system.

In preliminary experimentation, we found that small memory sizes allow the agent society to achieve a convention rapidly; we consistently used an agent memory of size 2 in the experiments described in this paper. Thus, each agent chooses a new strategy based only on the outcome of the previous two interactions.
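The HCR update with a size-2 memory can be sketched in a few lines of Python. The class and function names are ours, and the feedback values of +1/-1 are illustrative, since the paper describes the matrix entries only qualitatively:

```python
from collections import deque

class Agent:
    def __init__(self, strategy, memory_size=2):
        self.strategy = strategy                  # 0 or 1
        self.memory = deque(maxlen=memory_size)   # (strategy, feedback) pairs

    def record(self, feedback):
        """Store feedback for the strategy just used; deque drops the
        oldest entry automatically, keeping the memory at a fixed size."""
        self.memory.append((self.strategy, feedback))
        self.update()

    def update(self):
        """HCR rule: switch if the other strategy has a strictly higher
        cumulative reward over the remembered interactions."""
        totals = {0: 0, 1: 0}
        for strat, fb in self.memory:
            totals[strat] += fb
        if totals[1 - self.strategy] > totals[self.strategy]:
            self.strategy = 1 - self.strategy

def interact(a, b):
    """Coordination feedback (illustrative values): +1 on a strategy
    match, -1 on a mismatch; both agents record the result."""
    fb = 1 if a.strategy == b.strategy else -1
    a.record(fb)
    b.record(fb)
```

With a memory of size 2, as in the experiments above, an agent's choice depends only on its last two interactions.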
Locality and Authority
In practice, locality can arise in two general ways, either as an inherent part of the domain or as a design decision; we will examine localization models that reflect both of these sources. Whatever its origin, we implement localization with non-uniform interaction probabilities: an agent is more likely to interact with those agents to which it is "closer" in the society.

Two-Dimensional Grids
Consider two systems of mobile robots. In both societies, there is an inherent locality: each robot has some restricted neighborhood in which it moves (perhaps centered on its battery charger). In one society, each robot is confined to a small domain and only interacts with other robots that are nearby. In the other system, the robots' domains are less rigid; although they generally move in a small area, they occasionally wander over a greater distance. Now assume that in both systems, each robot randomly chooses between two available strategies. The robots then interact pairwise, receive feedback as specified by the matrix describing coordination, and update their strategies according to the HCR rule.

Figure 1: Time evolution example for agents on a grid (top: the more constrained society; bottom: the less constrained society).

A typical example of the time evolution of two societies fitting this description can be seen in Figure 1. In our initial investigations, the agents (robots) occupy a square grid, with one agent at each grid site; in this case there are 1024 agents on a 32 by 32 grid. The agents are colored white or black according to their choice of strategy. In the society at the top of the figure, the agents are tightly constrained, while those in the society at the bottom have more freedom to wander. Both systems start from the same initial configuration, but their evolution is quite different. In the system of agents with limited interaction, we see the spontaneous development of coordinated sub-societies.
These macroscopic structures are self-supporting: agents on the interior of such a structure will have no cause to change strategy. The only changes will come at the edges of the sub-societies, which will wax and wane until eventually all of the agents are using one of the strategies. In the system of agents with more freedom of interaction, sub-societies do not appear to arise. The strategy which (perhaps fortuitously) gains dominance early is able to spread its influence throughout the society, quickly eliminating the other strategy.

In our model, we describe a robot's motion as a statistical profile that gives the likelihood of the robot wandering a particular distance from its "home" point. The probability of two robots interacting is thus a function of their individual movement profiles; we have modeled this by a simple function, p(r) ∝ [1 + (αr)^β]^(-1), where r is the distance between the two robots, measured as a fraction of the size of the grid. This function was chosen because the parameters allow us to independently control the overall size of an agent's domain of interaction (α) and the "rigidness" of the domain boundary (β). Figure 2 shows the function for a variety of parameter settings. With this function, we can model robots confined to a large domain, robots confined to a small domain, robots that usually move in a small domain but occasionally wander over a larger area, etc. Note that the parameter α controls where the probability is halved, regardless of the value of β; in the limit β → ∞, if r > 1/α then p(r) = 0. We can think of 1/α as the "effective radius" of the probability distribution.

Figure 2: p(r) ∝ [1 + (αr)^β]^(-1) for a variety of parameter settings.

Trees and Hierarchies
In many human communities, the societal organization is built up of layers.
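The interaction profile above is a one-liner; a minimal rendering (the function name is ours):

```python
def interaction_prob(r, alpha, beta):
    """Unnormalized interaction weight p(r) = 1 / (1 + (alpha*r)**beta),
    with r the inter-agent distance as a fraction of the grid size.
    p(1/alpha) = 1/2 for any beta, so 1/alpha is the "effective radius";
    large beta makes the boundary at r = 1/alpha increasingly rigid."""
    return 1.0 / (1.0 + (alpha * r) ** beta)
```

In a simulation these weights would be computed for every pair of agents and normalized per agent to obtain each agent's sampling distribution over partners.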
Hierarchies in large companies and "telephone trees" for distributing messages are examples of such structures. For our purposes, trees are defined in the standard fashion: each node has one parent and some number of children; one node, the root node, has no parent.3 Agents organized as a tree can interact only with their parents and children. If we allow agents to interact with their peers (other agents on the same level), we call the resulting structure a hierarchy. We believe that these localizing structures may be useful in societies of artificial agents for some of the same reasons they have served humans well: delegation of responsibilities, rapid distribution of information, etc. Furthermore, trees and hierarchies provide a natural setting for investigating the effects of authority.

Implementing Authority
In the present experiments, an agent in a tree or hierarchy is equally likely to interact with any other agent to which it is connected, be it parent, child, or peer. However, by giving agents the ability to selectively ignore feedback they receive from interactions with agents at a lower level, we can implement a simple form of authority. We refer to the feedback an agent receives when interacting with its child in the organization as "upward feedback," and varying the probability that this upward feedback is incorporated into an agent's memory can be thought of as modeling a range of management styles, from egalitarian bosses who listen to and learn from their subordinates to autocratic bosses who expect their subordinates to unquestioningly follow the rule "Do as I do." In this preliminary model, agents always heed feedback from their parents and peers.

3. We limit our discussion here to trees with a branching factor of two; additional experiments have shown that increasing the branching factor does not change the relative qualitative behavior of the tree and hierarchical organization schemes.
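One way to realize these topologies and the upward-feedback form of authority is sketched below, using a binary tree in heap layout (node i's children are 2i+1 and 2i+2). The layout choice and all names are ours, not from the paper:

```python
import random

def build_tree(n):
    """Binary-tree neighbor lists: node i's parent is (i-1)//2."""
    neighbors = {i: [] for i in range(n)}
    for i in range(1, n):
        parent = (i - 1) // 2
        neighbors[i].append(parent)
        neighbors[parent].append(i)
    return neighbors

def add_peer_links(neighbors):
    """Turn a tree into a hierarchy by also linking nodes on the same level."""
    level = lambda i: (i + 1).bit_length() - 1   # depth of node i in heap layout
    nodes = sorted(neighbors)
    for i in nodes:
        for j in nodes:
            if j > i and level(i) == level(j):
                neighbors[i].append(j)
                neighbors[j].append(i)
    return neighbors

def heeds_feedback(listener, speaker, p_up, rng=random):
    """Authority: feedback from a child ("upward feedback") is heeded only
    with probability p_up; p_up = 0 gives full top-down authority.
    Feedback from parents and peers is always heeded."""
    is_child = speaker == 2 * listener + 1 or speaker == 2 * listener + 2
    return (not is_child) or rng.random() < p_up
```

A tree of 63 agents, as in the figures above, is simply build_tree(63); add_peer_links turns it into the corresponding hierarchy.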
Experimental Results
For each experiment, the number of agents, social structure, and other parameters are fixed. Each agent's probability distribution for interacting with the other agents is computed, and the system is run multiple times. At the beginning of each run, the agents' initial strategies are chosen randomly and uniformly from the two available strategies. In each iteration of the run, a first agent is chosen randomly and uniformly from the society; a second agent is then chosen randomly according to the first agent's probability distribution. The agents interact and possibly update their strategies as described above. The system is run until all of the agents are using the same strategy, i.e., until we have 100% convergence. Each experiment was run 1000 times (with different random seeds), and the convergence time was computed by averaging the number of iterations required in each run.

Two-Dimensional Grids
We begin our survey of experimental results by considering agents on a square grid, with interaction probabilities defined by the function p(r). In Figure 3, we see how the time for all of the agents to agree upon a particular strategy scales with the size of the system, for various parameter values.

Figure 3: Convergence time vs. number of agents for the two-dimensional grid.

In general, the convergence time appears to be polynomial in the number of agents in the system. Fitting curves to the data yields a range of empirical estimates, from O(n^1.28) for the least restricted societies (α = 2, β = 2) to O(n^1.45) for the most restricted (α = 4, β = 8). To examine the interaction of the parameters in more detail, we fix the number of agents in the system and observe how the convergence time is affected by various parameter settings.

Figure 4: Convergence time for 100 agents on a two-dimensional grid for various parameter settings.

In Figure 4, we see this for a society of 100 agents; the effective radius in numbers of grid units is noted next to each value of α. We find that the steepness of the drop-off in the interaction probability, controlled by β, becomes more and more significant as α is increased. To think of it in terms of mobile robots, as the effective radius of a robot's domain is decreased, the rigidness of the boundary of its domain becomes increasingly relevant.

Trees, Hierarchies, and Authority
We now look at the results of experiments with the tree and hierarchy organizational structures. Initially, we will assume full top-down authority, i.e., parent nodes never pay attention to feedback from their children.

Figure 5: Convergence time vs. number of agents for tree and hierarchy organizations with full top-down authority.

In Figure 5, we see the effects of system size on the time to achieve total coordination. It appears that the convergence time for trees is polynomial in the number of agents, while for hierarchies the convergence time seems to be exponential in the number of agents. Fitting to curves yields empirical estimates of O(e^0.26n) for hierarchies and a polynomial for trees.
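The experimental protocol described above reduces to a short loop. A hedged sketch (the agent interface is assumed to expose a strategy attribute, and the iteration budget is our safeguard, not part of the paper's protocol):

```python
import random

def run_until_convergence(agents, pick_partner, interact, max_iters=10**6):
    """One run: pick a first agent uniformly, a second according to the
    first agent's interaction distribution, let them interact, and stop
    once all agents share one strategy. Returns the iteration count."""
    for t in range(1, max_iters + 1):
        a = random.choice(agents)
        b = pick_partner(a)
        interact(a, b)
        if len({ag.strategy for ag in agents}) == 1:
            return t
    return None  # did not converge within the iteration budget
```

Averaging the returned counts over 1000 seeded runs reproduces the convergence-time statistic plotted in the figures.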
Figure 6: Effect of decreasing authority (increased upward feedback) on convergence time for the tree organization.

Now we increase the probability of upward feedback, reducing the authority of agents over their descendants in the tree. In Figure 6, the convergence time is plotted against the probability of upward feedback for three tree-structured systems of different sizes. We see that the time for total coordination to be achieved increases exponentially (the y-axis is logarithmic) with increasing upward feedback, until a probability of about 75% is reached, at which point the convergence time increases even more dramatically.

Figure 7: Effect of decreasing authority (increased upward feedback) on convergence time for the hierarchy organization.

In Figure 7, the convergence time is plotted against the probability of upward feedback for the same three system sizes, now organized as hierarchies. In this case, the convergence time increases slightly with decreasing authority until a probability of about 50% is reached, at which point the society begins achieving coordination ever more rapidly. It appears that while authority is useful in trees, agents in hierarchies should listen to their subordinates.

Discussion
From the results of experiments with agents on a grid, we might speculate that increased interaction between agents promotes the emergence of conventions for coordinating actions. This is further borne out by the data in Figure 7: as the probability of upward feedback is increased, the amount of interaction effectively increases, and the system converges more rapidly.
However, the behavior of trees seems to defy this conjecture: Figure 6 shows that increased interaction on a tree decreases the efficiency of convention emergence. Furthermore, although societies with top-down authority have less overall interaction when organized as trees rather than as hierarchies, they converge much more readily when tree-structured, as seen in Figure 5. It appears that neither locality nor authority is a sufficient predictor of system performance by itself.

To develop further intuition about the results with trees, hierarchies, and authority, we can observe the time evolution of representative systems. In Figure 8, we see four societies of 63 agents. The systems depicted represent the possible combinations of tree vs. hierarchy and top-down authority vs. no authority. All four systems start from the same initial condition, but they evolve quite differently.

Figure 8: Time evolution example for trees and hierarchies. In this diagram, parent nodes are drawn twice as wide as their children. Thus, coordinated subtrees appear as vertical rectangles.

In the authoritarian tree, we see that there is a strong directional pressure "pushing" a convention through the tree. Each node receives feedback only from its parent, and will quickly adopt its parent's strategy. We can think of this pressure as defining a "flow of information" through the society. This contrasts with the authoritarian hierarchy, in which each level of the organization is completely connected internally, but only weakly connected to the next level. The deeper a node is in the graph, the less likely it is to interact with its parent, and the flow of information is diluted. It becomes possible for adjacent levels to independently converge upon different conventions; hence we see a horizontal division in strategies at t = 1000.
When we eliminate authority by having upward feedback occur with 100% probability, we increase the potential for inter-level interaction. In trees, this causes the flow of information to become incoherent, and we are left with a sprawling, weakly connected society. Subsocieties emerge, but now they develop on subtrees, rather than across levels. For hierarchies, we saw that reducing top-down authority causes the convergence time to decrease, and in the bottom row of Figure 8 we see one of the reasons this happens: a level which might have otherwise converged to an independent convention is rapidly brought into line with the rest of the system.

Conclusion
We have seen that locality and authority in a multi-agent society can profoundly affect the evolution of conventions for behavior coordination. We draw three (tentative) conclusions. First, in weakly connected systems, there is a tendency for sub-societies to form, hampering convergence.4 Conversely, systems with greater overall interaction tend to converge more rapidly. Finally, if our agents are weakly connected, whether by design or necessity, it appears best to have a directional flow of feedback (strong authority) to ensure the rapid spread of conventions throughout the society.

There are numerous ways this work can be extended. In addition to exploring other forms of locality and authority, we can investigate noise, more complex learning algorithms, and explicit communication. We have already begun experimentation with emergent cooperation, and preliminary results seem to indicate that organizing structures which promote the emergence of conventions for coordination do not necessarily serve the objective of emergent cooperation. Ultimately, we would like to develop an analytic theory of convention emergence; experiments such as these serve both to guide exploration and to test theoretical results.
Research into various forms of emergent behavior in multi-agent systems relates to our investigations, particularly work on coordination and cooperation among agents. While our model incorporates adaptive agents that learn purely from experience, many researchers have taken the view that agents should be treated as "rational" in the game-theoretic sense, choosing their actions based on some expected outcome ((Stary 1993) is an overview of these approaches). Glance and Huberman (Glance & Huberman 1993) come closest to our work, exploring the effects of a dynamic social structure on cooperation between agents. In their model, an agent decides whether or not to cooperate by examining the behavior of other agents; if the agent perceives enough cooperation in its environment, then it decides to cooperate as well. This differs significantly from our model, in which the actual results of an agent's actions cause it to adapt its behavior. Glance and Huberman implement locality by weighting agents' perceptions of one another according to their proximity within the organization. Our notion of locality differs because it is not based on an expectation of interaction: it is based on the actual occurrence or non-occurrence of interactions.

This research bears a close resemblance to work in economics and game theory (Kandori, Mailath, & Rob 1991). One of our current goals is to gain a better understanding of this relationship. More generally, research on adaptive multi-agent systems appears to have ties to work in artificial life (Lindgren 1992), population genetics (Mettler, Gregg, & Schaffer 1988), and quantitative sociology (Weidlich & Haag 1983). However, while systems from these diverse fields share characteristics such as distributed components and complex dynamics, their particulars remain unreconciled.

4. If agents from different components interact only rarely, it may not be very important for the subsocieties to have the same convention.

References
Glance, N. S., and Huberman, B. 1993. Organizational fluidity and sustainable cooperation. In Proceedings of the Modeling Autonomous Agents in a Multi-Agent World conference. In press.
Kandori, M.; Mailath, G.; and Rob, R. 1991. Learning, Mutation and Long Run Equilibria in Games. Mimeo, University of Pennsylvania.
Lindgren, K. 1992. Evolutionary phenomena in simple dynamics. In Artificial Life II. Santa Fe Institute.
Mettler, L. E.; Gregg, T. G.; and Schaffer, H. E. 1988. Population Genetics and Evolution. Prentice Hall, second edition.
Shoham, Y., and Tennenholtz, M. 1992a. Emergent conventions in multi-agent systems: initial experimental results and observations. In KR-92.
Shoham, Y., and Tennenholtz, M. 1992b. On the synthesis of useful social laws for artificial agent societies. In Proceedings of the Tenth National Conference on Artificial Intelligence. AAAI.
Shoham, Y., and Tennenholtz, M. 1993. Co-learning and the evolution of social activity. Submitted for publication.
Stary, C. 1993. Dynamic modelling of collaboration among rational agents: redefining the research agenda. In IFIP Transactions A (Computer Science and Technology), volume A-24: Human, Organizational and Social Dimensions of Information Systems Development. IFIP WG8.2 Working Group.
Watkins, C. 1989. Learning from Delayed Rewards. Ph.D. Dissertation, King's College.
Weidlich, W., and Haag, G. 1983. Concepts and Models of a Quantitative Sociology: The Dynamics of Interacting Populations. Springer-Verlag.
Model-Based Automated Generation of User Interfaces

Angel R. Puerta, Henrik Eriksson, John H. Gennari, and Mark A. Musen
Medical Computer Science Group, Knowledge Systems Laboratory
Departments of Medicine and Computer Science
Stanford University, Stanford, CA 94305-5479
{puerta,eriksson,gennari,musen}@camis.stanford.edu

ABSTRACT1
User interface design and development for knowledge-based systems and most other types of applications is a resource-consuming activity. Thus, many attempts have been made to automate, to certain degrees, the construction of user interfaces. Current tools for automated design of user interfaces are able to generate the static layout of an interface from the application's data model using an intelligent program that applies design rules. These tools, however, are not capable of generating the dynamic behavior of the interface, which must be specified programmatically, and which constitutes most of the effort of interface construction. Mecano is a model-based user-interface development environment that uses a domain model to generate both the static layout and the dynamic behavior of an interface. A knowledge-based system applies sets of dialog design and layout rules to produce interfaces from the domain model. Mecano has been used successfully to completely generate the layout and the dynamic behavior of relatively large and complex, domain-specific, form- and graph-based interfaces for applications in medicine and several other domains.

INTRODUCTION
In recent years there has been significant progress in providing automated assistance to user-interface developers. Commercially available interface builders, user-interface management systems, and interface toolkits provide considerable savings to developers in the time and effort needed to produce a new interface (deBaar, Foley, & Mullet 1992). Even with these commercial tools present, the amount of effort and low-level detail involved in constructing interfaces is substantial.
Therefore, researchers are investigating techniques to automate more portions of the interface design process. One promising area is that of model-based user-interface development (Puerta 1993; Szekely, Luo, & Neches 1993). In this approach, developers work with high-level specifications (models) of the interface to define dialog and layout characteristics. Model-based systems facilitate the automation of interface design tasks. A successful approach has been to use the application's data model to generate the static layout of an interface (deBaar, Foley, & Mullet 1992; Janssen, Weisbecker, & Ziegler 1993).

1. This work has been supported in part by grants LM05157 and LM05305 from the National Library of Medicine, and by gifts from Digital Equipment Corporation. Dr. Musen is recipient of NSF Young Investigator Award IRI-9257578.

Figure 1: Generic framework for automated interface-generation environments that employ data models. The interface design is produced by tools that examine a data model and a dialog specification. The design may be represented implicitly or explicitly (as an interface model). The run-time system implements the design.

Figure 1 shows a generic framework for automated interface generation environments that employ data models. An intelligent design tool examines the data model and applies a set of design rules to produce a static design of an interface. Because the data model is shared between the interface design and the target application design, both designs can be coupled, and changes to the application design can be propagated easily to the interface design. The dynamic behavior of the interface, however, must be specified separately. This process can take many forms, from using a graphical editor to construct dialog Petri nets (Janssen, Weisbecker, & Ziegler 1993) to assigning sets of pre- and postconditions to each interface object (Gieskens & Foley 1992).
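The "design rules applied to a data model" step can be made concrete with a toy sketch. The type-to-widget table below is our invention for illustration; the cited systems use far richer rule sets:

```python
# Hypothetical type-to-widget design rules (illustrative only; not the
# rule set of UIDE, GENIUS, or Mecano).
DESIGN_RULES = {
    "boolean": "checkbox",
    "enum":    "radio-group",
    "string":  "text-field",
    "number":  "spin-box",
}

def generate_static_layout(data_model):
    """Produce one labeled widget per attribute of the data model,
    defaulting to a text field for unknown attribute types."""
    return [{"label": name, "widget": DESIGN_RULES.get(kind, "text-field")}
            for name, kind in data_model]
```

For example, a data model with an "age" number and a "smoker" boolean yields a spin box and a checkbox; this is the static layout only, which is precisely why the dynamic behavior must still be specified by other means.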
Although working with high-level dialog specifications is helpful to interface developers, it does not automate the design of dynamic behavior. For large interfaces, editing the dialog specifications is still a time-consuming task involving the definition of hundreds of actions and conditions, some of which may conflict with each other.

The Mecano Approach
Current data-model approaches do not exploit the relationships among objects in the model to generate the dynamic behavior of an interface. In addition, a data model is application-specific. In the Mecano approach, we aim to use domain models from which dynamic interface behavior can be generated, and that are also sharable across a range of applications. In this paper, we present Mecano, a model-based interface development environment that uses domain models instead of data models to generate interfaces. A domain model is a high-level knowledge representation that captures all the definitions and relationships of a given application domain; it thus extends the data model for the application. By substituting a domain model for the data model in Figure 1, Mecano does not require any dialog specification editing and can generate complete dynamic-behavior specifications even for large interfaces with hundreds of components.

The rest of this paper is organized as follows. We first review related work and present an overview of Mecano, including a definition and illustration of domain models. Then, we show how various cases of dynamic behavior can be generated from domain models by using an example from the medical domain. Next, we explain how end users are able to participate in the layout design of interfaces generated in Mecano and how design revisions can be conducted. We conclude by analyzing this approach and summarizing the results.
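To see why relationships in a domain model can drive dynamic behavior where a flat data model cannot, consider a toy sketch in which a "part-of" relation between concepts is mapped to an open-subform dialog action. The relation name and the generated actions are illustrative inventions, not Mecano's actual rules:

```python
def generate_dialog_actions(relations):
    """Derive dialog behavior from domain-model relationships: each
    (child, "part-of", parent) triple yields an action on the parent's
    form that opens the child's subform (hypothetical mapping)."""
    actions = {}
    for src, rel, dst in relations:
        if rel == "part-of":
            actions.setdefault(dst, []).append(
                {"trigger": f"open-{src}", "effect": f"show subform for {src}"})
    return actions
```

Because the relationships live in the model rather than in hand-edited dialog specifications, hundreds of such actions can be derived mechanically and kept consistent as the model changes.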
RELATED WORK
There are three types of systems documented in the literature that relate closely to the Mecano approach: (1) systems that use textual specifications to generate dialogs, (2) systems that combine the use of data models and high-level dialog specifications, and (3) systems that directly manipulate an interface model to produce an interface.

One of the earliest efforts to generate dialogs via textual descriptions is COUSIN (Hayes and Szekely 1992). It generates menus and fill-in forms from a specification of commands and their parameters. Mickey (Olsen 1989) uses an extended version of Pascal to describe the contents, parameters, and behavior of direct-manipulation interfaces. ITS (Wiecha et al. 1989) separates dialog and style into two different layers and allows the specification of the dialog layer through a command language and the definition of styles through a rule set. Given the textual description for a dialog, ITS reasons about the style rule set to generate the interface. The UofA* (Singh and Green 1991) system generates the presentation and dialog through a command language. These systems, in general, help the developer by providing tools to design dialogs at a high level of abstraction, but they do not automate the design process beyond that point.

Among the first examples of the use of data models to derive static layouts for interfaces is HIGGENS (Hudson and King 1986). It allows a developer to view the interface abstractly by examining the data models, but it lacks an automatic generator for the actual interface. The UIDE environment includes a tool for static layout generation from an extended data model (deBaar, Foley, & Mullet 1992). The specification of dynamic behavior, however, must be achieved by defining sets of pre- and postconditions (Gieskens and Foley 1992) for each of the interface objects.
The GENIUS environment (Janssen, Weisbecker, & Ziegler 1993) uses an entity-relationship data model along with a graphical editor for dialog specifications to generate interfaces. The data model, which can be edited graphically, provides the basis for the definition of the interface components and their layout. The graphical editor allows the review of dialog nets, a variation of Petri nets, that define the actions of the interface objects and the conditions that precede or follow those actions. Systems that employ data models have the advantage of sharing the data model with the target application, thus coupling the design of both. They cannot, however, automate dynamic dialog design from the data model, and they have problems scaling up because of their approach to specifying dialogs. For example, the use of pre- and postconditions in large interfaces can cause conflicts among the conditions and may necessitate the development of conflict-resolution strategies.

Systems that generate interfaces by manipulating interface models include HUMANOID (Szekely, Luo, & Neches 1993) and DON (Kim and Foley 1993). HUMANOID defines an elaborate interface model that includes components for the application, the presentation, and the dialog. Developers construct application models and HUMANOID picks among a number of interface templates to display the interface. The developer can then refine the behavior of the interface by editing the dialog model. HUMANOID assists, but does not automate, the generation of dynamic behavior specifications, and requires considerable additional developer effort to generate interfaces that do not conform to its templates, as is the case with most complex interfaces. DON uses a presentation model that allows developers to explore designs and that provides expert assistance in the generation of designs. DON does not have a dynamic behavior component for automatic generation of dialogs.

Figure 2. The main components of Mecano.
The intelligent designer operates on a domain model, as opposed to a data model, to produce interface designs.

OVERVIEW OF MECANO

The main components of the Mecano environment are shown in Figure 2. Mecano follows the general architecture of Figure 1, replacing the data model with a domain model. The design tools include a model-editing tool, an intelligent designer tool, and an interface builder, which in our case is provided by the supporting platform, the NeXT environment. The framework for user-interface development with Mecano calls for a developer to start by employing the model editor (Gennari 1993) to visualize and review a domain model (described later in this paper). The domain model is shared with the target application. Therefore, an interface developer need not build one for a given domain from scratch. Instead, the normal process is to revise an existing one. Once a domain model is deemed satisfactory, it is input to the intelligent designer (Eriksson, Puerta, & Musen 1994), a tool that produces a dynamic dialog specification and a preliminary layout for the interface. The layout can then be refined using NeXT's Interface Builder. Both the dialog and layout output by the intelligent designer are stored declaratively in an interface model. This model contains all facets of an interface design, including interface objects, presentation, dialog, and behavior. The design defined in the interface model is implemented by a run-time system. Mecano's run-time tools can implement form- and graph-based interfaces with many types of objects, from simple ones, such as menus and push-buttons, to complex ones, such as list browsers and domain-specific graphical editors. The run-time tools implement the dynamic behavior of the interface according to the specifications in the interface model. The overall design process in Mecano is iterative.
The resulting interfaces may have deficiencies that require editing the domain model and regenerating the interface. In such cases, the intelligent designer keeps track of layout customizations that may have been made in the previous generation and reapplies these customizations as appropriate.

Figure 3. Partial view of a medical domain model for therapy (protocol) administration (IS-A view). The hierarchy of classes is used to generate the interface-navigation schema for windows and other objects.

Figure 4. Partial view of the slots and facets (properties) for the chemotherapy class. Facets can define allowed-classes relationships among classes. These relationships are used to generate specifications for interface-object groupings in windows. Other facets, like type, are important to determine static layout (e.g., the appropriate widget for an object of type string).

Domain Models

A domain model is a representation of the objects in a domain and their interrelationships. Domain models in Mecano are constructed using a frame-based representation language that defines class hierarchies (Gennari 1993). Each class in the hierarchy can have a number of slots, and each slot defines a number of properties (called facets). Figures 3 and 4 show partial views of a model for the medical domain of therapy administration (called protocol administration). There are two important relationships in domain models. The is-a relationship (see Figure 3) determines the class hierarchy and is used by the intelligent designer in Mecano to specify the interface-navigation schema among windows and other objects. The part-of relationship (see Figure 4) is used to determine object groupings by windows.

Figure 5. The model editor, showing the class hierarchy (top window) and the slots and facets (properties) of each class.
Other important facets include, for example, the type, cardinality, min, and max of a slot, which are used in the specification of the static layout (e.g., what widget should be used for the slot, or the size of a numeric input field). In fact, the application's data model is completely included in the domain model. Therefore, all the design rules of an intelligent design tool that may be applied to a data model can be applied to a domain model. In the next section, we illustrate the use of domain models to generate a therapy administration application.

GENERATION OF DIALOG SPECIFICATIONS FROM DOMAIN MODELS

Before dialogs can be generated, a domain model must be prepared with the model editor shown in Figure 5. The domain model is shared with the target application. Thus, a coupling of application design and interface design is established. Developers can build domain models incrementally, and can prototype interfaces early in the development process because Mecano supports iterative design. More importantly, it is not necessary to build domain models from scratch for every application. A domain model for medical therapy planning can be reused, with minor variations, in other applications. This is a significant advantage of Mecano over systems that design from data models, because data models are difficult to reuse across applications. Once edited, the domain model is used to generate dialog specifications. These specifications have two levels in Mecano:

- High-level dialog defines all interface windows, assigns interface objects to windows, and specifies the navigation schema among windows in the interface.
- Low-level dialog assigns specific dialog elements (widgets) to each interface object created at the high level and specifies how the standard behavior of the dialog element is modified for the given domain.
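As an illustration, a frame-based domain model of the kind described above (classes with is-a links, slots, and facets, as in Figures 3 and 4) can be encoded as plain data structures. The encoding below is a hypothetical simplification of Mecano's representation language, not its actual syntax; the class and facet names are taken from the figures.

```python
# A minimal, hypothetical encoding of a frame-based domain model:
# each class names its superclass (is-a) and its slots; each slot
# carries facets such as type and allowed-classes.
domain_model = {
    "protocol": {
        "is_a": None,
        "slots": {"name": {"type": "string"},
                  "algorithm": {"type": "procedure",
                                "allowed-classes": ["xrt", "chemotherapy", "drug"]}},
    },
    "chemotherapy": {
        "is_a": "protocol",
        "slots": {"drug-parts": {"type": "instance",
                                 "allowed-classes": ["drug"]}},
    },
    "drug": {"is_a": None, "slots": {"name": {"type": "string"}}},
}

def superclasses(model, cls):
    """Walk the is-a links from a class up to the root of the hierarchy."""
    chain = []
    cur = model[cls]["is_a"]
    while cur is not None:
        chain.append(cur)
        cur = model[cur]["is_a"]
    return chain
```

Walking the is-a chain in this way is what gives the intelligent designer the window-navigation hierarchy; the allowed-classes facets supply the cross-links.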
High-Level Dialog Generation

The elements of the high-level dialog specification are generated by examining the class hierarchy of the domain model (see Figure 3) and the slots of each class (see Figure 4). Figure 6 shows an interface generated from the partial domain model shown in Figures 3 and 4. The complete medical domain model for therapy administration generates an interface with over 60 windows and hundreds of widgets. Note that the dialog for window navigation is established during high-level dialog design, but it can be refined, or augmented, at low-level dialog design time. The procedure to generate a high-level dialog design is as follows:

- Each class in the hierarchy is assigned a window.
- Window navigation is established by searching the class hierarchy for links indicated by the allowed-classes facet in the domain model. For example, the Drug window shown in Figure 6 is accessed from the Chemo window because the Drug class is an allowed class for the slot Drug-Part.
- Each window is assigned one interface object per slot in the class.

After generation, the developer has the option of customizing the interface by splitting windows with multiple objects into two or more windows.

Figure 6. Interface generated from the partial domain model in Figures 3 and 4. Legends indicate generated dialog at high- and low-level design times. An interface generated from the full domain model for medical therapy contains over 60 windows and hundreds of dialog elements (widgets). The dynamic behavior of such an interface can be generated automatically from a domain model.
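The high-level generation procedure above can be sketched as follows. The model encoding and function names are hypothetical, not Mecano's actual API; the sketch only shows the mapping of classes to windows, slots to interface objects, and allowed-classes facets to navigation links.

```python
# Sketch of high-level dialog generation: one window per class, one
# interface object per slot, and navigation links derived from
# allowed-classes facets. The model encoding is an assumed simplification.
def generate_high_level(model):
    windows = {}      # class name -> interface objects (one per slot)
    navigation = []   # (from-window, via-slot, to-window) links
    for cls, desc in model.items():
        slots = desc.get("slots", {})
        windows[cls] = sorted(slots)
        for slot, facets in slots.items():
            for target in facets.get("allowed-classes", []):
                if target in model:   # only link to classes that get windows
                    navigation.append((cls, slot, target))
    return windows, navigation

model = {
    "chemotherapy": {"slots": {"name": {"type": "string"},
                               "drug-parts": {"allowed-classes": ["drug"]}}},
    "drug": {"slots": {"name": {"type": "string"}}},
}
windows, nav = generate_high_level(model)
# As in Figure 6, the Drug window is reachable from the Chemo window
# because drug is an allowed class for the drug-parts slot.
```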
Interface objects are assigned actual widgets during low-level dialog design.

Low-Level Dialog Generation

Elements of the low-level dialog specification are generated by examining the facets (properties) defined for each slot in the domain model (see Figure 4). The process has these steps:

- Each interface object defined at high-level design time is assigned a dialog element (widget) by examining the facets of the corresponding slot in the domain model. For example, an object of type string is assigned a text field, an object of type Boolean is assigned a check-box widget, and an object of type string and cardinality multiple (i.e., a multiple-valued object) is assigned a list browser.
- Each dialog element may be assigned actions beyond the standard behavior of the dialog element by examining the facets of the corresponding slot in the domain model. Examples of dialog-element actions include disabling editing in other dialog elements, and updating values in other dialog elements after a user input action (see Figure 6). Note that the specification of dialog-element actions is one of the important operations that cannot be automated in systems that rely on data models for interface generation.

GENERATION OF DOMAIN-SPECIFIC GRAPHICAL EDITORS

One of the important capabilities in Mecano is the generation from domain models of domain-specific, nodes-and-links graphical editors useful to describe procedures such as flowcharts. Consider the following slot information for the class Protocol:

(slot algorithm
  (type :procedure)
  (allowed-classes :xrt :chemotherapy :drug))

When the intelligent dialog designer examines this slot during low-level dialog design, it assigns a graphical editor as the dialog element for that slot because of the type procedure defined for that slot. It also defines three graphical objects to be used during editing: one for x-ray therapies (xrt), one for chemotherapies (chemo), and one for drugs. Figure 7 shows a graphical editor generated from the above slot definition.

PARTICIPATORY LAYOUT DESIGN AND DESIGN REVISION

A crucial concern with any system that automatically generates interfaces is how it allows the developer to review and change the generated design. In Mecano, there are two types of revisions: layout and dialog.
The intelligent designer tool uses a layout algorithm to produce a preliminary layout of the interface objects. The philosophy in Mecano is to involve the end user in the process of custom-tailoring a layout. For example, for the medical treatment application shown in this paper, the interface developer works together with a physician to review and custom-tailor the preliminary layout with an interface builder (see Figure 2). Our experience is that this revision, in the case of the interface derived from the full model, may take between two and a half and four hours for the 65 windows included in that application (including layout and dialog revisions, and needed interface regenerations). Custom-tailoring information is kept in a database so that if the interface needs to be regenerated because of incremental changes to the domain model (as is often the case), the customizations can be reapplied to the newly generated interface. Substantial revisions of the domain model, however, invalidate the information in the customization database. The working sessions with the end user (in this paper's example, a physician) are also used to discover difficulties with the dialog design and incompleteness in the information displayed in the interface. Dialog design customizations can be made by editing the interface model directly (see Figure 2) and do not require a regeneration of the interface. On the other hand, for the interface to be able to display additional dialog elements, changes must be made to the domain model to define the needed slots or classes. Such changes do require that the interface be regenerated.

Figure 7. A graphical editor to draw medical treatments, generated from a domain model. Both the drawing objects and their connectivity behavior are determined by the intelligent designer tool in Mecano.
Overall, the Mecano policy is to treat the interface design process as iterative and to support the introduction of custom changes without creating duplicate work.

ANALYSIS AND CONCLUSIONS

We have described a user-interface development environment that automatically generates presentation and dialog specifications for domain-specific, form- and graph-based interfaces. The strong points of this system are:

- Generation of both the static layout and the dynamic behavior of domain-specific, form- and graph-based interfaces, including relatively large and complex ones, for multiple domains (e.g., medical treatment, elevator configuration).
- Use of the application's domain model, which includes the application's data model, for interface generation, considerably augmenting automation capabilities over systems utilizing only a data model.
- Support of participatory layout design involving end users of the applications, and support for iterative design without duplication of work.

Mecano has the same central weakness that other model-based systems have: the system is only as good as the expressiveness of its underlying models. We continue researching extensions to our frame-based representation language for domain models and interface models in order to be able to automate more types of dialog actions. In particular, we are concerned with how to generate complex sequences of actions (commands) at low-level dialog design time. We are also working on the run-time system of Mecano to implement new types of widgets. Furthermore, the interface generation approach from domain models is most useful for domain-specific interfaces with a relatively fixed user dialogue (such as the medical forms shown in the figures in this paper). For other types of interfaces, it will be necessary to examine other types of models (such as a model of the user's task) to be able to generate interface specifications automatically.
We are currently working on developing such task models as components of our generic interface model. Overall, Mecano provides a framework for assisting the development of interfaces and for the study of interface models and the relationships between domain characteristics and user-interface presentation and dialog.

ACKNOWLEDGMENTS

We wish to thank Tom Gruber for his helpful comments.

REFERENCES

de Baar, D.J.M.J., Foley, J.D. and Mullet, K.E. 1992. Coupling Application Design and User Interface Design. In Proceedings of Human Factors in Computing Systems, CHI'92. Monterey, California, May 1992, pp. 259-266.

Eriksson, H., Puerta, A.R. and Musen, M.A. 1994. Generation of Knowledge-Acquisition Tools from Domain Ontologies. In Proceedings of the Eighth Banff Knowledge Acquisition for Knowledge-Based Systems Workshop. Banff, Alberta, Canada, pp. 7.1-7.20.

Gennari, J.H. 1993. A Brief Guide to Maitre and MODEL: An Ontology Editor and a Frame-Based Knowledge Representation Language. Stanford University, Knowledge Systems Laboratory, Report KSL-93-46, Stanford, California, June 1993.

Gieskens, D.F. and Foley, J.D. 1992. Controlling User Interface Objects through Pre- and Postconditions. In Proceedings of Human Factors in Computing Systems, CHI'92. Monterey, California, May 1992, pp. 189-194.

Hayes, P. and Szekely, P. 1992. Graceful Interaction through the COUSIN Command Interface. International Journal of Man-Machine Studies, 19(3), pp. 285-305.

Hudson, S.E. and King, R. 1986. A Generator of Direct Manipulation Office Systems. ACM Transactions on Information Systems, 4(2), pp. 132-163.

Janssen, C., Weisbecker, A. and Ziegler, J. 1993. Generating User Interfaces from Data Models and Dialog Net Specifications. In Proceedings of Human Factors in Computing Systems, INTERCHI'93. Amsterdam, The Netherlands, April 1993, pp. 418-423.

Kim, W.C. and Foley, J.D. 1993. Providing High-Level Control and Expert Assistance in the User Interface Presentation Design.
In Proceedings of Human Factors in Computing Systems, INTERCHI'93. Amsterdam, The Netherlands, April 1993, pp. 430-437.

Olsen, D.R. 1989. A Programming Language Basis for User Interface Management. In Proceedings of Human Factors in Computing Systems, CHI'89. Austin, Texas, May 1989, pp. 171-176.

Puerta, A.R. 1993. The Study of Models of Intelligent Interfaces. In Proceedings of the 1993 International Workshop on Intelligent User Interfaces. Orlando, Florida, January 1993, pp. 71-80.

Singh, G. and Green, M. 1991. Automating the Lexical and Syntactic Design of Graphical User Interfaces: The UofA* UIMS. ACM Transactions on Graphics, 10(3), pp. 213-254.

Szekely, P., Luo, P. and Neches, R. 1993. Beyond Interface Builders: Model-Based Interface Tools. In Proceedings of Human Factors in Computing Systems, INTERCHI'93. Amsterdam, The Netherlands, April 1993, pp. 383-390.

Wiecha, C., Bennett, W., Boies, S., Gould, J. and Greene, S. 1989. ITS: A Tool for Rapidly Developing Interactive Applications. ACM Transactions on Information Systems, 8(3), pp. 204-236.
Learning to coordinate without sharing information

Sandip Sen, Mahendra Sekaran, and John Hale
Department of Mathematical & Computer Sciences
University of Tulsa
600 South College Avenue
Tulsa, OK 74104-3189
sandip@kolkata.mcs.utulsa.edu

Abstract

Researchers in the field of Distributed Artificial Intelligence (DAI) have been developing efficient mechanisms to coordinate the activities of multiple autonomous agents. The need for coordination arises because agents have to share resources and expertise required to achieve their goals. Previous work in the area includes using sophisticated information exchange protocols, investigating heuristics for negotiation, and developing formal models of possibilities of conflict and cooperation among agent interests. In order to handle the changing requirements of continuous and dynamic environments, we propose learning as a means to provide additional possibilities for effective coordination. We use reinforcement learning techniques on a block pushing problem to show that agents can learn complementary policies to follow a desired path without any knowledge about each other. We theoretically analyze and experimentally verify the effects of learning rate on system convergence, and demonstrate benefits of using learned coordination knowledge on similar problems. Reinforcement learning based coordination can be achieved in both cooperative and non-cooperative domains, and in domains with noisy communication channels and other stochastic characteristics that present a formidable challenge to using other coordination schemes.

Introduction

In this paper, we will be applying recent research developments from the reinforcement learning literature to the coordination problem in multiagent systems.
In a reinforcement learning scenario, an agent chooses actions based on its perceptions, receives scalar feedbacks based on past actions, and is expected to develop a mapping from perceptions to actions that will maximize feedbacks. Multiagent systems are a particular type of distributed AI system (Bond & Gasser 1988), in which autonomous intelligent agents inhabit a world with no global control or globally consistent knowledge. These agents may still need to coordinate their activities with others to achieve their own local goals. They could benefit from receiving information about what others are doing or plan to do, and from sending them information to influence what they do.

Coordination of problem solvers, both selfish and cooperative, is a key issue in the design of an effective distributed AI system. The search for domain-independent coordination mechanisms has yielded some very different, yet effective, classes of coordination schemes. Almost all of the coordination schemes developed to date assume explicit or implicit sharing of information. In the explicit form of information sharing, agents communicate partial results (Durfee & Lesser 1991), speech acts (Cohen & Perrault 1979), resource availabilities (Smith 1980), etc. to other agents to facilitate the process of coordination. In the implicit form of information sharing, agents use knowledge about the capabilities of other agents (Fox 1981; Genesereth, Ginsberg, & Rosenschein 1986) to aid local decision-making. Though each of these approaches has its own benefits and weaknesses, we believe that the less an agent depends on shared information, and the more flexible it is to the on-line arrival of problem-solving and coordination knowledge, the better it can adapt to changing environments.
In this paper, we discuss how reinforcement learning techniques, which develop policies to optimize environmental feedback through a mapping between perceptions and actions, can be used by multiple agents to learn coordination strategies without having to rely on shared information. These agents, though working in a common environment, are unaware of the capabilities of other agents and may or may not be cognizant of goals to achieve. We show that through repeated problem-solving experience, these agents can develop policies to maximize environmental feedback that can be interpreted as goal achievement from the viewpoint of an external observer. This research opens up a new dimension of coordination strategies for multiagent systems.

Acquiring coordination knowledge

Researchers in the field of machine learning have investigated a number of schemes for using past experience to improve problem solving behavior (Shavlik & Dietterich 1990). A number of these schemes can be effectively used to aid the problem of coordinating multiple agents inhabiting a common environment. In cooperative domains, where agents have approximate models of the behavior of other agents and are willing to reveal information to enable the group to perform better as a whole, pre-existing domain knowledge can be used inductively to improve performance over time. On the other hand, learning techniques that can be used incrementally to develop problem-solving skills relying on little or no pre-existing domain knowledge can be used by both cooperative and non-cooperative agents. Though the latter form of learning may be more time-consuming, it is generally more robust in the presence of noisy, uncertain, and incomplete information. Previous proposals for using learning techniques to coordinate multiple agents have mostly relied on using prior knowledge (Brazdil et al.
1991), or on cooperative domains with unrestricted information sharing (Sian 1991). Even previous work on using reinforcement learning for coordinating multiple agents (Tan 1993; Weiß 1993) has relied on explicit information sharing. We, however, concentrate on systems where agents share no problem-solving knowledge. We show that although each agent is independently optimizing its own environmental reward, global coordination between multiple agents can emerge without explicit or implicit information sharing. These agents can therefore act independently and autonomously, without being affected by communication delays (due to other agents being busy) or failure of a key agent (who controls information exchange or who has more information), and do not have to worry about the reliability of the information received (Do I believe the information received? Is the communicating agent an accomplice or an adversary?). The resultant systems are, therefore, robust and general-purpose.

Reinforcement learning

In reinforcement learning problems (Barto, Sutton, & Watkins 1989; Holland 1986; Sutton 1990), reactive and adaptive agents are given a description of the current state and have to choose the next action from a set of possible actions so as to maximize a scalar reinforcement or feedback received after each action. The learner's environment can be modeled by a discrete time, finite state, Markov decision process that can be represented by a 4-tuple (S, A, P, r), where P : S × S × A → [0, 1] gives the probability of moving from state s1 to s2 on performing action a, and r : S × A → ℝ is a scalar reward function. Each agent maintains a policy, π, that maps the current state into the desirable action(s) to be performed in that state.
The expected value of a discounted sum of future rewards of a policy π at a state s is given by V_π(s) = E{ Σ_{t=0}^∞ γ^t r_t }, where r_t is the random variable corresponding to the reward received by the learning agent t time steps after it starts using the policy π in state s, and γ is a discount rate (0 < γ < 1).

Various reinforcement learning strategies have been proposed with which agents can develop a policy to maximize rewards accumulated over time. For our experiments, we use the Q-learning (Watkins 1989) algorithm, which is designed to find a policy π* that maximizes V_π(s) for all states s ∈ S. The decision policy is represented by a function, Q : S × A → ℝ, which estimates long-term discounted rewards for each state-action pair. The Q values are defined as Q_π(s, a) = V_{a;π}(s), where a;π denotes the event sequence of choosing action a at the current state, followed by choosing actions based on policy π. The action a to perform in a state s is chosen such that it is expected to maximize the reward, V_{π*}(s) = max_{a ∈ A} Q_{π*}(s, a) for all s ∈ S. If an action a in state s produces a reinforcement of R and a transition to state s', then the corresponding Q value is modified as follows:

Q(s, a) ← (1 − β) Q(s, a) + β (R + γ max_{a'} Q(s', a')).   (1)

Block pushing problem

To explore the application of reinforcement learning in multi-agent environments, we designed a problem in which two agents, a1 and a2, are independently assigned to move a block, b, from a starting position, S, to some goal position, G, following a path, P, in Euclidean space. The agents are not aware of the capabilities of each other and yet must choose their actions individually such that the joint task is completed. The agents have no knowledge of the system physics, but can perceive their current distance from the desired path to take to the goal state. Their actions are restricted as follows: agent i exerts a force F_i, where 0 ≤ |F_i| ≤ F_max, on the object at an angle θ_i, where 0 ≤ θ_i ≤ π.
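Before turning to the dynamics, the tabular update of Equation 1 can be sketched directly; the dictionary representation of Q and the function name below are illustrative choices, not from the paper.

```python
# Tabular Q-learning update of Equation 1:
# Q(s,a) <- (1 - beta) * Q(s,a) + beta * (R + gamma * max_a' Q(s',a')).
def q_update(Q, s, a, R, s_next, actions, beta, gamma):
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] = (1 - beta) * Q[(s, a)] + beta * (R + gamma * best_next)
    return Q[(s, a)]

actions = [0, 1]
# All Q-values start from a common initial value (the paper initializes
# them to a large positive constant; see the Experimental setup section).
Q = {(s, a): 100.0 for s in [0, 1] for a in actions}
q_update(Q, s=0, a=1, R=50.0, s_next=0, actions=actions, beta=0.2, gamma=0.9)
```

With these numbers the updated entry is (1 − 0.2)·100 + 0.2·(50 + 0.9·100) = 108, while untried entries keep their initial value.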
An agent pushing with force F at angle θ will offset the block in the x direction by |F| cos(θ) units and in the y direction by |F| sin(θ) units. The net resultant force on the block is found by vector addition of the individual forces: F = F_1 + F_2. We calculate the new position of the block by assuming unit displacement per unit force along the direction of the resultant force. The new block location is used to provide feedback to the agents. If (x, y) is the new block location, P_x(y) is the x-coordinate of the path P for the same y coordinate, and Δx = |x − P_x(y)| is the distance along the x dimension between the block and the desired path, then K · a^(−Δx) is the feedback given to each agent for their last action (we have used K = 50 and a = 1.15). The field of play is restricted to a rectangle with endpoints [0, 0] and [100, 100]. A trial consists of the agents starting from the initial position S and applying forces until either the goal position G is reached or the block leaves the field of play (see Figure 1). We abort a trial if a pre-set number of agent actions fail to take the block to the goal. This prevents agents from learning policies where they apply no force when the block is resting on the optimal path to the goal but not on the goal itself. The agents are required to learn, through repeated trials, to push the block along the path P to the goal. Although we have used only two agents in our experiments, the solution methodology can be applied without modification to problems with an arbitrary number of agents.

Figure 1: The block pushing problem. At each time step, each agent pushes with some force at a certain angle, and the block moves under the combined effect of the two forces within the field of play.

Experimental setup

To implement the policy π we chose to use an internal discrete representation for the external continuous space. The force, angle, and the space dimensions were all uniformly discretized.
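One step of the block dynamics and the feedback just described can be sketched as follows. The function names are ours, and the reconstruction of the feedback as K·a^(−Δx) follows the reading above; the path is the vertical line x = 40 used later in the experiments.

```python
import math

K, A = 50.0, 1.15  # feedback constants from the text (K = 50, a = 1.15)

def step(block, forces):
    """Move the block under the vector sum of the agents' forces.
    Each force is a (magnitude, angle) pair; unit displacement per unit force."""
    fx = sum(f * math.cos(th) for f, th in forces)
    fy = sum(f * math.sin(th) for f, th in forces)
    return (block[0] + fx, block[1] + fy)

def feedback(block, path_x=40.0):
    """Feedback K * a^(-dx), where dx is the x-distance from the desired path."""
    dx = abs(block[0] - path_x)
    return K * A ** (-dx)

# Both agents push straight up (angle pi/2), so the block stays on the
# path x = 40 and each agent receives the maximum feedback K.
block = step((40.0, 0.0), [(3.0, math.pi / 2), (2.0, math.pi / 2)])
```

Note that the feedback equals K exactly on the path and decays geometrically with Δx, which is what later lets the steady-state Q-value for optimal actions be written in terms of K alone.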
When a particular discrete force or action is selected by the agent, the middle value of the associated continuous range is used as the actual force or angle that is applied on the block.

An experimental run consists of a number of trials during which the system parameters (β, γ, and K) as well as the learning problem (granularity, agent choices) are held constant. The stopping criterion for a run is either that the agents succeed in pushing the block to the goal in N consecutive trials (we have used N = 10) or that a maximum number of trials (we have used 1500) have been executed. The latter cases are reported as non-converged runs.

The standard procedure in the Q-learning literature of initializing Q values to zero is suitable for most tasks where non-zero feedback is infrequent and hence there is enough opportunity to explore all the actions. Because a non-zero feedback is received after every action in our problem, we found that agents would follow, for an entire run, the path they take in the first trial. This is because they start each trial at the same state, and the only non-zero Q-value for that state is for the action that was chosen at the start of the trial. Similar reasoning holds for all the other actions chosen in the trial. A possible fix is to choose a fraction of the actions by random choice, or to use a probability distribution over the Q-values to choose actions stochastically. These options, however, lead to very slow convergence. Instead, we chose to initialize the Q-values to a large positive number. This enforced an exploration of the available action options while allowing for convergence after a reasonable number of trials. The primary metric for performance evaluation is the average number of trials taken by the system to converge.

Figure 2: The X/Motif interface for experimentation.
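The effect of the optimistic initialization described above can be illustrated with a small sketch (the numerical values here are ours, chosen for illustration): after one update, a tried action's estimate drops below the initial value S1, so greedy selection turns to the still-untried actions.

```python
# With Q-values optimistically initialized to a large positive constant S1,
# greedy selection still explores: once an action is tried, its estimate is
# pulled below S1 (guaranteed when S1 >= K / (1 - gamma)), so the untried
# actions, still valued at S1, are preferred on the next visit.
S1, beta, gamma, K = 100.0, 0.2, 0.5, 10.0   # illustrative values
Q = {a: S1 for a in range(3)}                # three actions, all untried
# Update action 0 after receiving reward K, next state valued optimistically:
Q[0] = (1 - beta) * Q[0] + beta * (K + gamma * max(Q.values()))
greedy = max(Q, key=Q.get)                   # an untried action now wins
```

Here Q[0] falls to 0.8·100 + 0.2·(10 + 50) = 92, while actions 1 and 2 remain at 100, so the greedy choice moves on to an unexplored action.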
Information about acquisition of coordination knowledge is obtained by plotting, for different trials, the average distance of the actual path followed from the desired path. Data for all plots and tables in this paper have been averaged over 100 runs.

We have developed an X/Motif interface (see Figure 2) to visualize and control the experiments. It displays the desired path, as well as the current path along which the block is being pushed. The interface allows us to step through trials, run one trial at a time, pause anywhere in the middle of a run, "play" the run at various speeds, and monitor the development of the policy matrices of the agents. By clicking anywhere on the field of play we can see the current best action choice for each agent corresponding to that position.

Choice of system parameters

If the agents learn to push the block along the desired path, the reward that they will receive for the best action choices at each step is equal to the maximum possible value of K. The steady-state values for the Q-values (Q_ss) corresponding to optimal action choices can be calculated from the equation:

Q_ss = (1 − β) Q_ss + β (K + γ Q_ss).
Hence we used an odd number, 11, of discrete intervals for the angle dimension. The number of discrete intervals for the force dimension is chosen to be 10.

On varying the number of discretization intervals for the state space between 10, 15, and 20, we found the corresponding average number of trials to convergence is 784, 793, and 115 respectively, with 82%, 83%, and 100% of the respective runs converging within the specified limit of 1200 trials. This suggests that when the state representation gets too coarse, the agents find it very difficult to learn the optimal policy. This is because the fewer the intervals (the coarser the granularity), the greater the variation in reward an agent gets after taking the same action at the same state (each discrete state maps into a larger range of continuous space, and hence the agents start from and end up in physically different locations, the latter resulting in different rewards).

Varying learning rate

We experimented by varying the learning rate, β. The resultant average distance of the actual path from the desired path over the course of a run is plotted in Figure 3 for β values 0.4, 0.6, and 0.8.

In the case of the straight path between (40,0) and (40,100), the optimal sequence of actions always puts the block on the same x-position. Since the x-dimension is the only dimension used to represent state, the agents update the same Q-value in their policy matrix in successive steps. We now calculate the number of updates required for the Q-value corresponding to this optimal action before it reaches the steady-state value. Note that for the system to converge, it is necessary only that the Q-value for the optimal action at x = 40 arrive at its steady-state value.
This is because the block is initially placed at x = 40, and so long as the agents choose their optimal action, it never reaches any other x-position. So the number of updates needed to reach steady state for the Q-value associated with the optimal action at x = 40 should be proportional to the number of trials to convergence for a given run.

In the following, let S_t be the Q-value after t updates and S_I be the initial Q-value. Using Equation 1 and the fact that for the optimal action at the starting position, the reinforcement received is K and the next state is the same as the current state, we can write

S_{t+1} = (1 − β) S_t + β (K + γ S_t)
        = (1 − β (1 − γ)) S_t + β K
        = A S_t + C                                             (2)

where A and C are constants defined to be equal to 1 − β (1 − γ) and β K respectively. Equation 2 is a difference equation which can be solved using S_0 = S_I to obtain

S_t = A^t S_I + C (1 − A^t) / (1 − A).

If we define convergence by the criterion that |S_{t+1} − S_t| < δ, where δ is an arbitrarily small positive number, then the number of updates t required for convergence can be calculated to be the following:

t > (log(δ) − log((1 − A) S_I − C)) / log(A)
  = (log(δ) − log(β) − log(S_I (1 − γ) − K)) / log(1 − β (1 − γ))    (3)

If we keep γ and S_I constant, the above expression can be shown to be a decreasing function of β. This is corroborated by our experiments with varying β while holding γ = 0.1 (see Figure 3). As β increases, the agents take fewer trials to converge to the optimal set of actions required to follow the desired path. The other plot in Figure 3 presents a comparison of the theoretical and experimental convergence trends. The first curve in the plot represents the function corresponding to the number of updates required to reach the steady-state value (with δ = e). The second curve represents the average number of trials required for a run to converge, scaled down by a constant factor of 0.06.
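Equations 2 and 3 can be checked numerically. The sketch below uses the values stated in the text (K = 50, S_I = 100, γ = 0.1, δ = e): it verifies that iterating the update of Equation 2 matches the closed form, that the limit is the steady-state value K/(1 − γ), and that the right-hand side of inequality 3 reproduces the update counts reported below for β = 0.4, 0.6, and 0.8.

```python
import math

K, S_I, gamma, delta = 50.0, 100.0, 0.1, math.e

def t_bound(beta):
    # Right-hand side of inequality (3).
    return ((math.log(delta) - math.log(beta)
             - math.log(S_I * (1 - gamma) - K))
            / math.log(1 - beta * (1 - gamma)))

def closed_form(beta, t):
    # S_t = A^t S_I + C (1 - A^t) / (1 - A), the solution of Equation (2).
    A, C = 1 - beta * (1 - gamma), beta * K
    return A ** t * S_I + C * (1 - A ** t) / (1 - A)

# The iterated update of Equation (2) matches the closed form ...
beta = 0.4
s = S_I
for t in range(1, 51):
    s = (1 - beta) * s + beta * (K + gamma * s)
    assert abs(s - closed_form(beta, t)) < 1e-9
# ... and converges to the steady-state value K / (1 - gamma).
assert abs(closed_form(beta, 500) - K / (1 - gamma)) < 1e-6

vals = [t_bound(b) for b in (0.4, 0.6, 0.8)]
print(vals)  # close to the reported values 3.97, 2.8, and 1.93
```

The bound is indeed a decreasing function of β, matching the experimental trend.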
The actual ratios between the number of trials to convergence and the values of the expression on the right-hand side of inequality 3 for β equal to 0.4, 0.6, and 0.8 are 24.1, 25.6, and 27.5 respectively (the average numbers of trials are 95.6, 71.7, and 53; the values of the above-mentioned expression are 3.97, 2.8, and 1.93). Given the fact that results are averaged over 100 runs, we can claim that our theoretical analysis provides a good estimate of the relative time required for convergence as the learning rate is changed.

Varying agent capabilities

The next set of experiments was designed to demonstrate the effects of agent capabilities on the time required to converge on the optimal set of actions. In the first of the current set of experiments, one of the agents was chosen to be a "dummy"; it did not exert

Coordination 429

Figure 3: Variation of average distance of actual path from desired path over the course of a run, and the number of updates for convergence of the optimal Q-value with changing β (γ = 0.1, S_I = 100).

any force at all. The other agent could only change the angle at which it could apply a constant force on the block. In the second experiment, the latter agent was allowed to vary both force and angle. In the third experiment, both agents were allowed to vary their force and angle. The average numbers of trials to convergence for the first, second, and third experiments are 431, 55, and 115 respectively. The most interesting result from these experiments is that two agents can learn to coordinate their actions and achieve the desired problem-solving behavior much faster than when a single agent is acting alone.
If, however, we simplify the problem of the only active agent by restricting its choice to that of selecting the angle of force, it can learn to solve the problem quickly. If we fix the angle for the only active agent, and allow it to vary only the magnitude of the force, the problem becomes either trivial (if the chosen angle is identical to the angle of the desired path from the starting point) or unsolvable.

Transfer of learning

We designed a set of experiments to demonstrate how learning in one situation can help learning to perform well in a similar situation. The problem with starting and goal locations at (40,0) and (40,100) respectively is used as a reference problem.

Figure 4: Visualization of agent policy matrices at the end of a successful run.

In addition, we used five other problems with the same starting location and with goal locations at (50,100), (60,100), (70,100), (80,100), and (90,100) respectively. The corresponding desired paths were obtained by joining the starting and goal locations by straight lines. To demonstrate transfer of learning, we first stored each of the policy matrices that the two agents converged on for the original problem. Next, we ran a set of experiments using each of the new problems, with the agents starting off with their previously stored policy matrices.

We found that there is a linear increase in the number of trials to convergence as the goal in the new problem is placed farther from the goal in the initial problem. To determine if this increase was due purely to the distance between the two desired paths, or due to the difficulty in learning to follow certain paths, we ran experiments on the latter problems with agents starting with uniform policies. These experiments reveal that the greater the angle between the desired path and the y-axis, the longer the agents take to converge.
Learning in the original problem, however, does help in solving these new problems, as evidenced by a roughly 10% savings in the number of trials to convergence when agents started with the previously learned policy. Using a one-tailed t-test we found that all the differences were significant at the 99% confidence level. This result demonstrates the transfer of learned knowledge between similar problem-solving situations.

Complementary learning

If the agents were cognizant of the actual constraints and goals of the problem, and knew elementary physics, they could independently calculate the desired action for each of the states that they may enter. The resulting policies would be identical. Our agents, however, have no planning capacity, and their knowledge is encoded in the policy matrix. Figure 4 provides a snapshot, at the end of a successfully converged run, of what each agent believes to be its best action choice for each of the possible states in the world. The action choice for each agent at a state is represented by a straight line at the appropriate angle, scaled to represent the magnitude of force. We immediately notice that the individual policies are complementary rather than identical. Given a state, the combination of the best actions will bring the block closer to the desired path. In some cases, one of the agents even pushes in the wrong direction while the other agent has to compensate with a larger force to bring the block closer to the desired path. These cases occur in states which are at the edge of the field of play, and have been visited only infrequently. Complementarity of the individual policies, however, is visible for all the states.

Conclusions

In this paper, we have demonstrated that two agents can coordinate to solve a problem better, even without a model of each other, than what they can do alone.
We have developed and experimentally verified theoretical predictions of the effects of a particular system parameter, the learning rate, on system convergence. Other experiments show the utility of using knowledge, acquired from learning in one situation, in other similar situations. Additionally, we have demonstrated that agents coordinate by learning complementary, rather than identical, problem-solving knowledge.

The most surprising result of this paper is that agents can learn coordinated actions without even being aware of each other! This is a clear demonstration of the fact that more complex system behavior can emerge out of relatively simple properties of components of the system. Since agents can learn to coordinate behavior without sharing information, this methodology can be equally applied to both cooperative and non-cooperative domains.

To converge on the optimal policy, agents must repeatedly perform the same task. This aspect of the current approach to agent coordination limits its applicability. Without an appropriate choice of system parameters, the system may take considerable time to converge, or may not converge at all.

In general, agents converged on sub-optimal policies due to incomplete exploration of the state space. We plan to use Boltzmann selection of actions in place of deterministic action choice to remedy this problem, though this will lead to slower convergence. We also plan to develop mechanisms to incorporate world models to speed up reinforcement learning as proposed by Sutton (Sutton 1990). We are currently investigating the application of reinforcement learning to resource-sharing problems involving non-benevolent agents.

References

A. G. Barto, R. S. Sutton, and C. Watkins. Sequential decision problems and neural networks. In Proceedings of the 1989 Conference on Neural Information Processing, 1989.

A. H. Bond and L. Gasser. Readings in Distributed Artificial Intelligence.
Morgan Kaufmann Publishers, San Mateo, CA, 1988.

P. Brazdil, M. Gams, S. Sian, L. Torgo, and W. van de Velde. Learning in distributed systems and multi-agent environments. In European Working Session on Learning, Lecture Notes in AI, 482, Berlin, March 1991. Springer-Verlag.

P. R. Cohen and C. R. Perrault. Elements of a plan-based theory of speech acts. Cognitive Science, 3(3):177-212, 1979.

E. H. Durfee and V. R. Lesser. Partial global planning: A coordination framework for distributed hypothesis formation. IEEE Transactions on Systems, Man, and Cybernetics, 21(5), September 1991.

M. S. Fox. An organizational view of distributed systems. IEEE Transactions on Systems, Man, and Cybernetics, 11(1):70-80, Jan. 1981.

M. Genesereth, M. Ginsberg, and J. Rosenschein. Cooperation without communications. In Proceedings of the National Conference on Artificial Intelligence, pages 51-57, Philadelphia, Pennsylvania, 1986.

J. H. Holland. Escaping brittleness: the possibilities of general-purpose learning algorithms applied to parallel rule-based systems. In R. Michalski, J. Carbonell, and T. M. Mitchell, editors, Machine Learning, an artificial intelligence approach: Volume II. Morgan Kaufmann, Los Altos, CA, 1986.

J. W. Shavlik and T. G. Dietterich. Readings in Machine Learning. Morgan Kaufmann, San Mateo, California, 1990.

S. Sian. Adaptation based on cooperative learning in multi-agent systems. In Y. Demazeau and J.-P. Müller, editors, Decentralized AI, volume 2, pages 257-272. Elsevier Science Publications, 1991.

R. G. Smith. The contract net protocol: High-level communication and control in a distributed problem solver. IEEE Transactions on Computers, C-29(12):1104-1113, Dec. 1980.

R. S. Sutton. Integrated architecture for learning, planning, and reacting based on approximate dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning, pages 216-225, 1990.

M. Tan.
Multi-agent reinforcement learning: Independent vs. cooperative agents. In Proceedings of the Tenth International Conference on Machine Learning, pages 330-337, June 1993.

C. Watkins. Learning from Delayed Rewards. PhD thesis, King's College, Cambridge University, 1989.

G. Weiß. Learning to coordinate actions in multi-agent systems. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 311-316, August 1993.
Coalition, Cryptography, and Stability: Mechanisms for Coalition Formation in Task Oriented Domains

Gilad Zlotkin
Center of Coordination Science
Sloan School of Management, MIT
1 Amherst Street, E40-179
Cambridge, MA 02139 USA
gilad@mit.edu

Jeffrey S. Rosenschein
Computer Science Department
Hebrew University
Givat Ram, Jerusalem, Israel
jeff@cs.huji.ac.il

Abstract

Negotiation among multiple agents remains an important topic of research in Distributed Artificial Intelligence (DAI). Most previous work on this subject, however, has focused on bilateral negotiation, deals that are reached between two agents. There has also been research on n-agent agreement which has considered "consensus mechanisms" (such as voting) that allow the full group to coordinate itself. These group decision-making techniques, however, assume that the entire group will (or has to) coordinate its actions. Sub-groups cannot make sub-agreements that exclude other members of the group.

In some domains, however, it may be possible for beneficial agreements to be reached among sub-groups of agents, who might be individually motivated to work together to the exclusion of others outside the group. This paper considers this more general case of n-agent coalition formation. We present a simple coalition formation mechanism that uses cryptographic techniques for subadditive Task Oriented Domains. The mechanism is efficient, symmetric, and individual rational. When the domain is also concave, the mechanism also satisfies coalition rationality.

Introduction

In multi-agent domains, agents can often benefit by coordinating their actions with one another; in some domains, this coordination is actually required. In two-agent encounters, the situation is relatively simple: either the agents reach an agreement (i.e., coordinate their actions), or they do not. With more than two agents, however, the situation becomes more complicated, since agreement may be reached by sub-groups.

The process of agent coordination, and of reaching agreement, has been the focus of much research in Distributed Artificial Intelligence (DAI). The general term used for this process is "negotiation" (usually in the 2-agent case) (Conry, Meyer, & Lesser 1988; Kraus & Wilkenfeld 1991; Kreifelts & von Martial 1990; Kuwabara & Lesser 1989; Sycara 1988; Zlotkin & Rosenschein 1993a; Rosenschein & Zlotkin 1994), and "reaching consensus" (in the n-agent case) (Ephrati & Rosenschein 1991; 1993). Both approaches, though dealing with different numbers of agents, share one underlying assumption: the agreement, if it is reached, will include all relevant members of the encounter. Thus, even in the n-agent case where a voting procedure might enable consensus to be reached, the entire group will be bound by the group decision. Sub-groups cannot make sub-agreements that exclude other members of the group. Interesting variations on these approaches, which nonetheless remain bilateral in essence, are the Contract Net (Smith 1978), which allows bilateral agreement in n-agent environments, and bilateral negotiation among two sub-groups discussed in (Kraus, Wilkenfeld, & Zlotkin 1995).

In some domains, however, it may be possible for beneficial agreements to be reached among sub-groups of agents, who might be individually motivated to work together to the exclusion of others outside the group. Voting procedures are not applicable here, because the full coalition may not be able to satisfy all its members, who are free to create more satisfying sub-coalitions. This paper considers this more general case of n-agent coalition formation (recent pieces of work on similar topics are (Ketchpel 1993; Shechory & Kraus 1993)).
Building on our previous work (Zlotkin & Rosenschein 1993a), which dealt only with bilateral negotiation mechanisms, we here analyze the kinds of n-agent coordination mechanisms that can be used in specific classes of domains.

Coalitions

An Example: The Tileworld

Consider the following simple example in a multi-agent version of the Tileworld (Pollack & Ringuette 1990) (see Figure 1). A single hole in the grid is represented by a framed letter (such as a). Each agent's position is marked by its name (such as A1). Tiles are represented by black squares inside the grid squares.

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Figure 1: Three-Agent Encounter in the Tileworld

Agents can move from one grid square to another horizontally or vertically (unless the square is occupied by a hole; multiple agents can be in the same grid square at the same time). When a tile is pushed into any grid square that is part of a hole, the square is filled and becomes navigable as if it were a regular grid square. The domain is static except for changes brought about by the agents. Agent 1's goal is to fill hole a, while agents 2 and 3 need to fill holes b and c respectively. To fill its hole, each agent needs 7 steps.

Agents can cooperate and help each other to reduce the cost of achieving their goals. There are several kinds of joint plans that the agents can execute that will reduce the cost of achieving their goals. Some of those joint plans are listed in the table on the right side of Figure 1. The coalition structure {1,3},{2} means that there are two coalitions, one consisting of agents 1 and 3, and the other consisting only of agent 2. When two agents form a coalition it means that they are coordinating their actions. The utility of an agent from a joint plan that achieves his goal is the difference between the cost of achieving his goal alone and the cost of his part of the joint plan (Zlotkin & Rosenschein 1991).
The coalition that gives the maximal total utility is the full coalition that involves all 3 agents, where they all coordinate their actions to mutual benefit (total utility is 17).¹ Although this full coalition is globally optimal, agent 1's utility in it is only 4, and he would prefer to reach agreement with either agent 2 or agent 3 (with utility of 6), but not with both.

The agents in the above scenario are able to transfer utility to each other, but in a non-continuous way. Agent 1, for example, can "transfer" to agent 2 seven points of utility by achieving his goal. He cannot, however, transfer an arbitrary amount. Without this arbitrary, continuous utility transfer capability, agent 1 will prefer to form a coalition with either one of the other two agents, rather than with both.

¹The joint plan where agent 1 achieves both 2 and 3's goals (with cost of 3), while either agent 2 or 3 achieves 1's goal (each with expected cost of 3½).

Table 1: Possible Coalitions in the Tileworld Example

Coalition Games

The definitions below are standard ones from coalition theory (Kahan & Rapoport 1984).

Definition 1 A coalition game with transferable utility in normal characteristic form is (N, v) where: N = {1, ..., n} is the set of agents, and v: 2^N → ℝ. For each coalition, which is a subset of agents S ⊆ N, v(S) is the value of the coalition S, which is the total utility that the members of S can achieve by coordinating and acting together.

The Tileworld example from Figure 1 can be described as a coalition game (N, v) such that: v({1}) = v({2}) = v({3}) = v({2,3}) = 0, v({1,2}) = v({1,3}) = 12, and v({1,2,3}) = 17.

Note that the value derived by a coalition is independent of the coalition structure. A given coalition is guaranteed to get a certain utility, regardless of what coalitions are formed by the other agents. In the Tileworld domain this assumption is not necessarily true, though it is true in the example we gave above.
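The game just described is small enough to examine exhaustively. The sketch below (our own illustrative encoding, not from the paper) represents v as a dictionary over frozensets, confirms that the game is superadditive, and searches an integer grid for payoff vectors that no sub-coalition would object to. Such vectors exist for this game, and all of them give agent 1 at least 7 of the 17 points, reflecting his pivotal position.

```python
from itertools import combinations

agents = (1, 2, 3)
# Characteristic function of the Tileworld game described in the text.
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 12, frozenset({1, 3}): 12, frozenset({2, 3}): 0,
     frozenset({1, 2, 3}): 17}

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Superadditivity: v(S) + v(V) <= v(S | V) for all disjoint S, V.
superadditive = all(v[S] + v[V] <= v[S | V]
                    for S in subsets(agents) for V in subsets(agents)
                    if not S & V)

# Integer-grid search for payoff vectors that divide v(N) = 17 so that
# every sub-coalition gets at least what it could earn on its own.
stable = [(u1, u2, 17 - u1 - u2)
          for u1 in range(18) for u2 in range(18 - u1)
          if all(sum((u1, u2, 17 - u1 - u2)[i - 1] for i in S) >= v[S]
                 for S in subsets(agents) if S)]

print(superadditive)              # -> True
print(min(u[0] for u in stable))  # -> 7
```

Adding the two constraints u1 + u2 ≥ 12 and u1 + u3 ≥ 12 to u1 + u2 + u3 = 17 forces u1 ≥ 7, which the search confirms.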
We will see below that in Task Oriented Domains (Zlotkin & Rosenschein 1993a) this definition of the coalition value is directly applicable.

Task Oriented Domains

Definition 2 A Task Oriented Domain (TOD) is a tuple <T, A, c> where: T is the set of all possible tasks; A = {A1, ..., An} is an ordered list of agents; c is a monotonic function c: [2^T] → ℝ+. [2^T] stands for all the finite subsets of T. For each finite set of tasks X ⊆ T, c(X) is the cost of executing all the tasks in X by a single agent. c is monotonic, i.e., for any two finite subsets X ⊆ Y ⊆ T, c(X) ≤ c(Y); c(∅) = 0.

An encounter within a TOD <T, A, c> is an ordered list (T1, ..., Tn) such that for all k ∈ {1, ..., n}, Tk is a finite set of tasks from T that Ak needs to achieve. Tk will also be called Ak's goal.

The Postmen Domain (Zlotkin & Rosenschein 1989) is one classic example of a TOD. In this domain, each agent is given a set of letters to deliver to various nodes on a graph; starting and ending at the Post Office, the agents are to traverse the graph and make their deliveries. Agents can reach agreements to carry one another's letters, and save on their travel.

In multi-agent Task Oriented Domains, agents can reach agreements about the re-distribution of tasks among themselves. When there are more than two agents, the agents can also form coalitions such that tasks are re-distributed only among the members of the same coalition. When mixed deals are being used by agents (those are agreements where agents settle on
Therefore, there is an upper bound on the amount of utility that each agent can get-no agent can get more utility than his stand-alone cost. As we shall see below, however, our model never attempts to violate this upper bound on utility. Subadditive Task Oriented In some domains, by combining sets of tasks we may reduce (and can never increase) the total cost, as com- pared with the sum of the costs of achieving the sets separately. The Postmen Domain, for example, has this property, which is called subadditivity. If X and Y are two sets of addresses, and we need to visit all of them (XUY), th en in the worst case we will be able to do the minimal cycle visiting the X addresses, then do the minimal cycle visiting the Y addresses. This might be our best plan if the addresses are disjoint and de- coupled (the topology of the graph is against us). In that case, the cost of visiting all the addresses is equal to visiting one set plus the cost of visiting the other set. However, in some cases we may be able to do better, and visit some addresses on the way to others. Definition 3 TOD < 7, d, c > will be called subad- ditive if for all finite sets of tusks X, Y E 7, we have c(X u Y) 5 c(X) + c(Y). Coalitions in Subadditive Task Oriented Domains In a TOD, a group of agents (a coalition) can coordi- nate by redistributing their tasks among themselves. In a subadditive TOD, the way to minimize total cost is to aggregate as many tasks as possible into one ex- ecution batch (since the cost of the union of tasks is always less than the sum of the costs). Therefore, the maximum utility that a group can derive in a subaddi- tive TOD is the difference between the sum of stand- alone costs and the cost of the overall union of tasks. This difference will be defined to be the value of the coalition. Definition 4 Given an encounter (Tl, . . . , Tn) in u subadditive TOD < 7,A, c >, we will define the coali- tion game induced by this encounter to be (N,v), such that N = (1,2,... 
CicS ‘(El - ‘(UieS z)* ,n), and QS C N, v(S) = Superadditive Coalition Games It seems intuitively reasonable that agents in a coali- tion game should not suffer by coordinating their ac- tions with a larger group. In other words, if you take two disjoint coalitions, the utility they can derive to- gether should not be less than the sum of their separate utilities (at the worst, they could “coordinate” by ig- noring each other). This property (which, however, is not always present) is called superadditivity. efinition 5 A coalition game with transferable util- ity in normal characteristic form (N, v) is superaddi- tive if for any disjoint coalitions S, V c N, S n V = 8, then v(S) + v(V) < v(S U V). TPaeorern 1 Any encounter (Tl, . . . , Tn) in a subad- ditive TOD induces a superadditive coalition game (N, v). Proof. Proofs can be found in (Zlotkin & Rosenschein 1993b). •I Mechanisms for Subadditive TO We would like to set up rules of interaction such that communities of self-interested agents will form benefi- cial coalitions. There are several attributes of the rules of interaction that might be important to the design- ers of these self-interested agents (as discussed further ents should not squander re- sources when they come to an agreement; there should not be wasted utility when an agreement is reached. Since the coalition game is superadditive it means that the sum of utilities of the agents should be equal to VW). 2. Stability: Since the coalition game is superaddi- tive, the full coalition can always satisfy the efficiency condition, and therefore we will aSsume that the full coalition will be formed. The stability condition then relates to the payoff vector (ui, . . . , un) that assigns to each agent i a utility of ui. There are three levels of sta- bility (rationality) conditions: individual, group, and coalition rationality. Individual Rationality means that that no individualagent would like to opt out of the full coalition; i.e., ui 2 v( {i}) = 0. 
Group Rationality (Pareto Optimality) means that the group as a whole would not prefer any other payoff vector over this vector; i.e., Σ_{i=1}^n ui = v(N). This condition is equivalent to the efficiency condition above. Coalition Rationality means that no group of agents should have an incentive to deviate from the full coalition and create a sub-coalition; i.e., for each subset of agents S ⊆ N, Σ_{i∈S} ui ≥ v(S).

3. Simplicity: It will be desirable for the overall interaction environment to make low computational demands on the agents, and to require little communication overhead.

4. Distribution: Preferably, the interaction rules will not require a central decision maker, for all the obvious reasons. We do not want our distributed system to have a performance bottleneck, nor collapse due to the single failure of a special node.

5. Symmetry: Two symmetric agents should be assigned the same utility by the mechanism (two agents are symmetric when they contribute exactly the same value to all possible coalitions).

Figure 2: Example of an Unstable Encounter

We will develop a mechanism for subadditive TODs such that agents agree on the all-or-nothing deal, in which each agent has some probability of executing all the tasks. The question that we will try to answer now is "What should be the division of utilities among all agents in the full coalition?"

Coalition rationality is the strongest stability condition, and implies individual rationality and group rationality.² However, this condition is very strong, and cannot always be satisfied. Consider the encounter from a three-agent Postmen Domain that can be seen in Figure 2. The Post Office is in the center. The length of each arc is 1. The encounter is (T1 = {a, d}, T2 = {b, e}, T3 = {c, f}).³ Each agent can deliver his letters with a cost of 4. The cost of delivering the union of the letters of any two agents is 5. Therefore, the value of any two-agent coalition is (2 × 4) − 5 = 3.
The cost of delivering all the letters is 8. Therefore, the value of the full coalition is (3 × 4) − 8 = 4. We would like to find a payoff vector (u1, u2, u3) that satisfies the following conditions:

(1) ∀i ∈ {1,2,3}: ui ≥ v({i}) = 0;
(2) ∀i ≠ j ∈ {1,2,3}: ui + uj ≥ v({i,j}) = 3;
(3) u1 + u2 + u3 ≥ v({1,2,3}) = 4.

Since the full coalition is also the maximal valued configuration, condition (3) is satisfied with equality (i.e., u1 + u2 + u3 = 4). If we add up all the inequalities in (2), we get u1 + u2 + u3 ≥ 4½, which cannot be satisfied. This means that in any division of the value of the full coalition among the agents there will be at least two agents that will prefer to opt out of the coalition and form a sub-coalition! For example, assume that the full coalition is formed with payoff vector (1, 1, 2). Agents 1 and 2 can get more by forming a coalition (i.e., by excluding agent 3 from the coalition). The new payoff vector can then be (1½, 1½, 0). This coalition and payoff vector is also not stable, since now agent 3 can tempt agent 2 (for example) to form a coalition with 3 by promising 2 more utility. The new payoff vector can then be (0, 2, 1). However, now agent 1 can convince the two agents that they can all do better by forming the full coalition again. The new payoff vector can then be (¼, 2¼, 1½). This coalition is also not stable...

²All payoffs that satisfy the coalition rationality conditions are called the core of the game in the game theory literature. See, for example, (Kahan & Rapoport 1984).

³Agent 1 has to deliver letters to addresses a and d, agent 2 has to deliver letters to addresses b and e, and agent 3 has to deliver letters to addresses c and f.

Shapley Value

The Shapley Value (Shapley 1988; Young 1988) for agent i is a weighted average of all the utilities that i contributes to all possible coalitions.
The weight of each coalition is the probability that this coalition will be formed in a random process that starts with a one-agent coalition, and in which this coalition grows by one agent at a time, such that each agent that joins the coalition is credited with his contribution to it. The Shapley Value is actually the expected utility that each agent will have from such a random process (assuming any coalition and permutation is equally likely).

Definition 6 Given a superadditive coalition game with transferable utility in normal characteristic form (N, v), the Shapley Value of agent i is defined to be:

u_i = Σ_{S⊆N, i∉S} [(n − |S| − 1)! |S|! / n!] (v(S ∪ {i}) − v(S)).

The Shapley Value satisfies the efficiency, symmetry, and individual rationality conditions (Shapley 1988; Kahan & Rapoport 1984). However, it does not necessarily satisfy the coalition rationality condition.

Theorem 2 The Shapley Value is also:

u_i = c(Ti) − Σ_{S⊆N, i∉S} [(n − |S| − 1)! |S|! / n!] Δ_i(S),

where Δ_i(S) = c(S ∪ {i}) − c(S) is the additional cost that agent i adds to a coalition S.

Agent i's Shapley Value is the difference between the cost of its goal and its weighted average cost contribution to all possible coalitions. The cost that agent i can contribute to a coalition is bounded by c(Ti). Therefore, the average contribution is also bounded by c(Ti), which also means that the Shapley Value is positive (i.e., satisfies the individual rationality condition) and bounded by c(Ti) (which is also the maximal utility that an agent can get according to our model). Thus (as we promised above), our model never attempts to transfer to an agent more utility than he can get by simply having his tasks performed by others.

Mechanisms for Subadditive TODs

We can define a Shapley Value-based mechanism for subadditive TODs that forms the full coalition and divides the value of the full coalition using the Shapley Value. The mechanism simply chooses the following (all-or-nothing) mixed deal (p1, ..., pn), such that

p_i = Σ_{S⊆N, i∉S} [(n − |S| − 1)! |S|! Δ_i(S)] / [n! c(N)].
, p_n), such that

p_i = [Σ_{S ⊆ N, i ∉ S} ((n − |S| − 1)! |S|! / n!) Δ_i(S)] / c(N).

Coordination 435

Theorem 3 The above all-or-nothing deal is well-defined (i.e., ∀i ∈ N: 0 ≤ p_i ≤ 1; Σ_{i=1}^{n} p_i = 1) and gives each agent i an expected utility that is exactly the Shapley Value u_i.

Evaluation of the Mechanism

The above mechanism gives each agent its Shapley Value. The mechanism is thus symmetric and efficient (i.e., satisfying group rationality), and also satisfies the criterion of individual rationality. However, as was seen in Example 2, no mechanism can guarantee coalition rationality. Besides failing to guarantee coalition rationality, the mechanism also does not satisfy the simplicity condition. It requires agents to calculate the Shapley Value, a computation that has exponential computational complexity.

The computational complexity of a mechanism should be measured relative to the complexity of the agent's standalone planning problem. This relative measurement would then signify the computational overhead of the mechanism. Each agent in a Task Oriented Domain needs to calculate the cost of his set of tasks, i.e., to find the best plan to achieve them. Calculation of the value of a coalition is linear in the number of agents in the coalition.⁴ The calculation of the Shapley Value requires an evaluation of the value of all (2^n) possible coalitions. In Section below we will show that there exists another Shapley-based mechanism that has linear computational complexity.

Concave TODs

Definition 7 [Concavity]:⁵ A TOD ⟨T, A, c⟩ will be called concave if for all finite sets of tasks X ⊆ Y, Z ⊆ T, we have c(Y ∪ Z) − c(Y) ≤ c(X ∪ Z) − c(X).

All concave TODs are also subadditive. It turns out that general subadditive Task Oriented Domains can be restricted, becoming concave Task Oriented Domains. For example, the Postmen Domain is subadditive, when the graphs over which agents travel can assume any topology. By restricting legal topologies to trees, the Postmen Domain becomes concave.
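Definition 6 can be evaluated directly by enumerating all subsets, which also makes the exponential cost noted above concrete. A brute-force sketch, using exact rational arithmetic and the three-agent example values from the text (the encoding is ours):

```python
from itertools import combinations
from math import factorial
from fractions import Fraction

def shapley(n_agents, v):
    """Shapley Value by direct enumeration of all subsets (Definition 6).
    v maps frozensets of agents to coalition values."""
    agents = range(1, n_agents + 1)
    n = n_agents
    values = {}
    for i in agents:
        total = Fraction(0)
        others = [j for j in agents if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Weight (n - |S| - 1)! |S|! / n! for each S not containing i.
                w = Fraction(factorial(n - len(S) - 1) * factorial(len(S)),
                             factorial(n))
                total += w * (v[frozenset(S) | {i}] - v[frozenset(S)])
        values[i] = total
    return values

# The paper's example: singletons worth 0, pairs worth 3, grand coalition 4.
v = {frozenset(): 0}
for c in [(1,), (2,), (3,)]:
    v[frozenset(c)] = 0
for c in [(1, 2), (1, 3), (2, 3)]:
    v[frozenset(c)] = 3
v[frozenset((1, 2, 3))] = 4

print(shapley(3, v))  # by symmetry, each agent gets 4/3
```

Note that the values sum to v(N) = 4 (efficiency), even though, as the text shows, no division of the 4 can satisfy coalition rationality here.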
Definition 8 A coalition game with transferable utility in normal characteristic form (N, v) is convex if for any coalitions S, V: v(S) + v(V) ≤ v(S ∪ V) + v(S ∩ V).

In convex coalition games, the incentive for an agent to join a coalition grows as the coalition grows.

Theorem 4 Any encounter (T_1, . . . , T_n) in a concave TOD induces a convex coalition game (N, v).

Theorem 5 (Shapley 1971): In convex coalition games, the Shapley Value always satisfies the criterion of coalition rationality.

⁴The cost of a set of tasks needs to be calculated only a linear number of times.
⁵The definition is from (Zlotkin & Rosenschein 1993a).

In concave TODs, the Shapley-based mechanism introduced above is fully stable, i.e., satisfies individual, group, and coalition rationality.

The Random Permutation Mechanism

The Shapley Value is equal to the expected contribution of an agent to the full coalition, assuming that all possible orders of agents joining and forming the full coalition are equally likely. This leads us to a much simpler mechanism called the Random Permutation Mechanism: agents choose a random permutation and form the full coalition, one agent after another, according to the chosen permutation. Each agent i gets utility w_i that is equal to its contribution to the coalition at the time he joined it. This is done by agreeing on the all-or-nothing deal (p_1, . . . , p_n), such that p_i = w_i / c(N).

Theorem 6 If each permutation has an equal chance of being chosen, then the Random Permutation Mechanism gives each agent an expected utility that is equal to its Shapley Value.

The Shapley-based Random Permutation Mechanism does not explicitly calculate the Shapley Value, but instead calculates the cost of only n sets of tasks. Therefore, it has linear computational complexity. The problem of coalition formation is reduced to the problem of reaching consensus on a random permutation.
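The Random Permutation Mechanism's payoff rule, and the fact that averaging it over all orders recovers the Shapley Value (Theorem 6), can be sketched as follows; the example values are again the paper's three-agent game, encoded as value rather than cost for brevity:

```python
from itertools import permutations
from fractions import Fraction

def marginal_contributions(order, v):
    """Payoffs when agents join in the given order, each credited with
    its marginal contribution at the moment it joins."""
    payoff, so_far = {}, frozenset()
    for i in order:
        payoff[i] = v[so_far | {i}] - v[so_far]
        so_far = so_far | {i}
    return payoff

# Three-agent example: singletons 0, pairs 3, grand coalition 4.
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 3, frozenset({1, 3}): 3, frozenset({2, 3}): 3,
     frozenset({1, 2, 3}): 4}

# Averaging over all 3! = 6 equally likely orders recovers the
# Shapley Value 4/3 per agent.
avg = {i: Fraction(0) for i in (1, 2, 3)}
for order in permutations((1, 2, 3)):
    for i, x in marginal_contributions(order, v).items():
        avg[i] += Fraction(x, 6)
print(avg)
```

A single run of the mechanism needs only the n marginal contributions along one order, which is the source of its linear complexity; the exhaustive average here is only to illustrate Theorem 6.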
Consensus on Permutation

No agent would like to be the first one that starts the formation of the full coalition (since this agent by definition gets zero utility). If the domain is concave (and therefore the coalition game is convex), each agent has an incentive to join the coalition as late as possible. To ensure stability, we need to find a consensus mechanism that is resistant to any coalition manipulation. No coalition should be able, by coordination, to influence the resulting permutation such that the members of the coalition will be the last ones to join the full coalition. For example, this means that no coalition of n − 1 agents could force the single agent that is out of the coalition to go first.

We will use the simple cryptographic mechanism that allows an agent to encrypt a message using a private key, to send the encrypted message, and then to send the key such that the message can be decrypted. Using these tools, each agent chooses a random permutation and a key, encrypts the permutation using the key, and broadcasts the encrypted message to all other agents. After he has received all encrypted messages, the agent broadcasts the key. Each agent decrypts all messages using the associated keys. The consensus permutation is the combination of all permutations.

Each agent can make sure that each permutation has an equal chance of being chosen even if he assumes that the rest of the agents are all coordinating their permutations against him (i.e., trying to make him be the first). All he needs to do is to choose a random permutation. Since his permutation will also be combined into the final permutation, everything will be shuffled in a way that no one can predict.

Conclusions

We have considered the kinds of n-agent coordination mechanisms that can be used in Task Oriented Domains (TODs), when any sub-group of agents may engage in task exchange to the exclusion of others.
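The commit-then-reveal consensus scheme described above can be sketched as follows. A salted hash commitment stands in here for the paper's encrypt-then-send-the-key step (an assumption made for brevity; any binding, hiding commitment serves the same role), and "combination" is implemented as function composition of the revealed permutations:

```python
import hashlib
import os
import random

def commit(perm):
    # Broadcast phase: publish a salted hash of the chosen permutation.
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + bytes(perm)).hexdigest()
    return salt, digest

def verify(perm, salt, digest):
    # Reveal phase: anyone can check the opened value against the commitment.
    return hashlib.sha256(salt + bytes(perm)).hexdigest() == digest

def compose(p, q):
    # Apply q after p: result[i] = q[p[i]].
    return [q[p[i]] for i in range(len(p))]

n = 4
perms = [random.sample(range(n), n) for _ in range(n)]  # one per agent
commitments = [commit(p) for p in perms]                # all broadcast first

# Only after every commitment is published are the permutations revealed.
assert all(verify(p, s, d) for p, (s, d) in zip(perms, commitments))
final = list(range(n))
for p in perms:
    final = compose(final, p)
print(final)
```

Because each agent must commit before seeing anyone else's choice, composing in one uniformly random, independently chosen permutation makes the result uniformly random — which is exactly the guarantee an honest agent needs against a coordinating coalition.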
We presented a simple, efficient, symmetric, and individually rational Shapley Value-based coalition formation mechanism that uses cryptographic techniques for subadditive TODs. When the domain is also concave, the mechanism also satisfies coalition rationality.

Future research will consider non-subadditive TODs. It will also consider issues of incentive compatibility in multi-agent coalition formation, investigating mechanisms that can be employed when agents have partial information about the goals of other group members and can deceive one another about this private information.

References

Conry, S.; Meyer, R.; and Lesser, V. 1988. Multistage negotiation in distributed planning. In Bond, A., and Gasser, L., eds., Readings in Distributed Artificial Intelligence. San Mateo: Morgan Kaufmann Publishers, Inc. 367-384.

Ephrati, E., and Rosenschein, J. S. 1991. The Clarke Tax as a consensus mechanism among automated agents. In Proceedings of the Ninth National Conference on Artificial Intelligence.

Ephrati, E., and Rosenschein, J. S. 1993. Distributed consensus mechanisms for self-interested heterogeneous agents. In First International Conference on Intelligent and Cooperative Information Systems, 71-79.

Kahan, J. P., and Rapoport, A. 1984. Theories of Coalition Formation. London: Lawrence Erlbaum Associates.

Ketchpel, S. P. 1993. Coalition formation among autonomous agents. In Pre-Proceedings of the Fifth European Workshop on Modeling Autonomous Agents in a Multi-Agent World.

Kraus, S., and Wilkenfeld, J. 1991. Negotiations over time in a multi agent environment: Preliminary report. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, 56-61.

Kraus, S.; Wilkenfeld, J.; and Zlotkin, G. 1995. Multiagent negotiation under time constraints. Artificial Intelligence. To appear. A preliminary version appeared in CS-TR-2975, University of Maryland.

Kreifelts, T., and von Martial, F. 1990.
A negotiation framework for autonomous agents. In Proceedings of the Second European Workshop on Modeling Autonomous Agents and Multi-Agent Worlds, 169-182.

Kuwabara, K., and Lesser, V. R. 1989. Extended protocol for multistage negotiation. In Proceedings of the Ninth Workshop on Distributed Artificial Intelligence, 129-161.

Pollack, M. E., and Ringuette, M. 1990. Introducing the Tileworld: Experimentally evaluating agent architectures. In Proceedings of the National Conference on Artificial Intelligence, 183-189.

Rosenschein, J. S., and Zlotkin, G. 1994. Rules of Encounter: Designing Conventions for Automated Negotiation among Computers. Cambridge: MIT Press.

Rosenschein, J. S. 1993. Consenting agents: Negotiation mechanisms for multi-agent systems. In Proceedings of the International Joint Conference on Artificial Intelligence, 792-799.

Shapley, L. S. 1971. Cores of convex games. International Journal of Game Theory 1:11-26.

Shapley, L. S. 1988. A value for n-Person games. In Roth, A. E., ed., The Shapley Value. Cambridge: Cambridge University Press. Chapter 2, 31-40.

Shechory, O., and Kraus, S. 1993. Coalition formation among autonomous agents: Strategies and complexity. In Pre-Proceedings of the Fifth European Workshop on Modeling Autonomous Agents in a Multi-Agent World.

Smith, R. G. 1978. A Framework for Problem Solving in a Distributed Processing Environment. Ph.D. Dissertation, Stanford University.

Sycara, K. P. 1988. Resolving goal conflicts via negotiation. In Proceedings of the Seventh National Conference on Artificial Intelligence, 245-250.

Young, H. P. 1988. Individual contribution and just compensation. In Roth, A. E., ed., The Shapley Value. Cambridge: Cambridge University Press. Chapter 17, 267-278.

Zlotkin, G., and Rosenschein, J. S. 1989. Negotiation and task sharing among autonomous agents in cooperative domains. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 912-917.
Zlotkin, G., and Rosenschein, J. S. 1991. Cooperation and conflict resolution via negotiation among autonomous agents in noncooperative domains. IEEE Transactions on Systems, Man, and Cybernetics 21(6):1317-1324.

Zlotkin, G., and Rosenschein, J. S. 1993a. A domain theory for task oriented negotiation. In Proceedings of the International Joint Conference on Artificial Intelligence, 416-422.

Zlotkin, G., and Rosenschein, J. S. 1993b. One, two, many: Coalitions in multi-agent systems. In Pre-Proceedings of the Fifth European Workshop on Modeling Autonomous Agents in a Multi-Agent World.

Coordination 437
An Experiment in the Design of Software Agents

Henry Kautz, Bart Selman, Michael Coen, Steven Ketchpel, and Chris Ramming
AI Principles Research Department
AT&T Bell Laboratories
Murray Hill, NJ 07974
{kautz, selman, jcr}@research.att.com
mhcoen@ai.mit.edu
ketchpel@cs.stanford.edu

Abstract

We describe a bottom-up approach to the design of software agents. We built and tested an agent system that addresses the real-world problem of handling the activities involved in scheduling a visitor to our laboratory. The system employs both task-specific and user-centered agents, and communicates with users using both email and a graphical interface. This experiment has helped us to identify crucial requirements in the successful deployment of software agents, including issues of reliability, security, and ease of use. The architecture we developed to meet these requirements is flexible and extensible, and is guiding our current research on principles of agent design.

Introduction

There is much recent interest in the creation of software agents. A range of different approaches and projects use the term "agents", ranging from adaptive user interfaces to systems that use planning algorithms to generate shell scripts (Maes 1993; Dent et al. 1992; Shoham 1993; Etzioni et al. 1992). In our own approach, agents assist users in a range of daily, mundane activities, such as setting up meetings, sending out papers, locating information in multiple databases, tracking the whereabouts of people, and so on. Our objective is to design agents that blend transparently into normal work environments, while relieving users of low-level administrative and clerical tasks. We take the practical aspect of software agents seriously: users should feel that the agents are reliable and predictable, and that the human user remains in ultimate control.
One of the most difficult aspects of agent design is to define specific tasks that are both feasible using current technology, and are truly useful to the everyday user. Furthermore, it became clear during the testing of our initial prototype that users have little patience when it comes to interacting with software agents. We therefore paid special attention to the user interface aspects of our system. In particular, whenever possible, we opted for graphically-oriented interfaces over pure text-based interfaces. In addition, reliability and error-handling is crucial in all parts of a software agent system. The real world is an unpredictable place: messages between agents may be lost or delayed, people may respond inappropriately to requests, and so forth.

438 Distributed AI

Our approach has been bottom-up. We began by identifying possible useful and feasible tasks for a software agent. The first such task we chose involved the activities surrounding the scheduling of a visitor to our lab. We designed and implemented a set of software agents to handle this task. We deliberately made no commitment in advance to a particular agent architecture. We then tested the system with ordinary users; the feedback from this test led to many improvements in the design of the agents and the human/agent interfaces, as well as the development of a general and flexible framework for agent interaction. The key feature of the framework is the use of personalized agents called "userbots" that mediate communication between users and task-specific agents. We are now in our third round of implementation and testing, in which we are further refining and generalizing our userbots so that they can communicate with software agents developed by other research groups, such as the "softbots" of Etzioni et al. (1992). We believe that the bottom-up approach is crucial in identifying the necessary properties of a successful agent platform.
Our initial experiments have already led us to formulate some key properties. Examples include the separation of task-specific agents from user-centered agents, the need to handle issues of security and privacy, and as mentioned above, the need for good human interfaces and high reliability.

Selecting an appropriate task for software agents to perform is itself a challenge. Agents must provide solutions to real problems that are important to real users. The whole raison d'etre for software agents is lost if they are restricted to handling toy examples. On the other hand, more complex tasks frequently include a range of long-term research issues, such as understanding unrestricted natural language. After considering a number of possible agent tasks, we settled on the problem of scheduling a visitor to our lab.¹ This job is quite routine, but consumes a substantial amount of the host's time. The normal sequence of tasks consists of announcing the upcoming visit by email; collecting responses from people who would like to meet with the visitor, along with their preferred meeting times; putting together a schedule that satisfies as many constraints as possible (taking into account social issues, such as not bumping the lab director from the schedule); sending out the schedule to the participants, together with appropriate information about room and telephone numbers; and, of course, often rescheduling people at the last minute because of unforeseen events. We decided to implement a specialized software agent called the "visitorbot" to handle these tasks. (We use the suffix "bot" for "software robot".)

¹See also Dent et al. (1992) and Maes and Kozierok (1993), which describe the design of software agents that learn how to assist users in scheduling meetings and managing their personal calendars.

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.
After examining various proposed agent architectures (Etzioni, Lesh, & Segal 1992; Shoham 1993), we decided that it was necessary to first obtain practical experience in building and using a concrete basic agent, before committing to any particular theoretical framework. Our initial implementation was a monolithic agent, that communicated directly with users via email. The program was given its own login account ("visitorbot"), and was activated upon receiving email at that account. (Mail was piped into the visitorbot program using the ".forward" facility of the Unix mail system.) Our experience in using the visitorbot in scheduling a visit led to the following observations:

1. Email communication between the visitorbot and humans was cumbersome and error-prone. The users had to fill in a pre-defined form to specify their preferred meeting times. (We considered various forms of natural language input instead of forms. However, the current state of the art in natural language processing cannot parse or even skim unrestricted natural language with sufficient reliability. On the other hand, the use of restricted "pseudo"-natural language has little or no advantage over the use of forms.) Small editing errors by users often made it impossible to process the forms automatically, requiring human intervention. Moreover, users other than the host objected to using the visitorbot at all; from their point of view, the system simply made their life harder.

Based on this observation, we realized that an easy to use interface was crucial. We decided that the next version of the visitorbot would employ a graphical interface, so that users could simply click on buttons to specify their preferred meeting times. This approach practically eliminated communication errors between people and the visitorbot, and was viewed by users as an improvement over the pre-Bot environment.

2. There is a need for redundancy in error-handling.
For example, one early version of the visitorbot could become confused by bounced email, or email responses by "vacation" programs. Although our platform has improved over the initial prototype, more needs to be done. Agents must react more or less predictably to both foreseen errors (e.g., mangled email), and unforeseen errors (e.g., a subprocess invoked by the bot terminates unexpectedly). Techniques from the area of software reliability and real-time systems design could well be applicable to this problem. For example, modern telephone switching systems have a down-time of only a few minutes per year, because they continuously run sophisticated error-detection and recovery mechanisms.

3. The final task of creating a good schedule from a set of constraints did not require advanced planning or scheduling techniques. The visitorbot translated the scheduling problem into an integer programming problem, and solved it using a commercial integer programming package (CPLEX). An interesting advantage of this approach is that it was easy to incorporate soft constraints (such as the difference between an "okay" time slot and a "good" time slot for a user).

This experience led us to the design shown in Fig. 1. This design includes an agent for each individual user in addition to the visitorbot. For example, the agent for the user "kautz" is named "kautzbot", for "selman" is named "selmanbot", and so on. The userbots mediate communication between the visitorbot and their human owners. The normal interaction between the visitorbot and the users proceeds as follows. The initial talk announcement is mailed by the visitorbot to each userbot. The userbot then determines the preferred mode of communication with its owner. In particular, if the user is logged in on an X-terminal, the userbot creates a pop-up window on the user's screen, containing the announcement and a button to press to request to meet with the visitor, as shown in the left-hand window in Fig. 2.
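Observation 3's scheduling step can be illustrated without an IP solver. A brute-force sketch standing in for the CPLEX formulation; the user names, slots, and preference weights (2 for a "good" slot, 1 for an "okay" slot) are invented for illustration:

```python
from itertools import permutations

# Invented preference data: weight 2 = "good" slot, 1 = "okay", absent = 0.
prefs = {
    "kautz":  {"10:30": 2, "11:30": 1},
    "selman": {"10:30": 1, "2:00": 2},
    "mike":   {"11:30": 2, "2:00": 1},
}
slots = ["10:30", "11:30", "2:00"]

def best_schedule():
    # Assign each requester a distinct slot, maximizing total preference
    # weight -- the soft-constraint objective an IP solver would optimize.
    users = list(prefs)
    best, best_score = None, -1
    for assignment in permutations(slots, len(users)):
        score = sum(prefs[u].get(s, 0) for u, s in zip(users, assignment))
        if score > best_score:
            best, best_score = dict(zip(users, assignment)), score
    return best, best_score

print(best_schedule())
```

Exhaustive search is only feasible for toy instances; the advantage of the integer-programming encoding the text describes is that the same soft-constraint objective scales to realistic schedules.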
If the user clicks on "yes", the userbot passes this information back to the visitorbot, which responds with a request to obtain the user's preferred meeting times. The userbot then creates a graphical menu of meeting times, as shown in the right-hand window in Fig. 2. The user simply clicks on buttons to indicate his or her preferences. The userbot then generates a message containing the preferences and mails it back to the visitorbot. If the userbot is unable to determine the display where the user is working, or if the user fails to respond to the pop-up displays, the userbot forwards the request from the visitorbot via email to the user as a plain text form.

There are several important advantages of this design. First, the visitorbot does not need to know about the user's display, and does not need permission to create windows on that display. This means, for example, that a visitorbot at Bell Labs can create a graphical window at any site that is reachable by email where there are userbots. This was successfully tested with the "mhcoenbot" running at MIT.² The separation of the visitorbot from the userbot also simplifies the design of the former, since the userbots handle the peculiarities of addressing the users' displays. Even more importantly, the particular information about the user's location and work habits does not have to be centrally available. This information can be kept private to the user and his or

²Sometimes it is possible to create a remote X-window over the internet, but this is prone to failure. Among other problems, the user would first have to grant permission to ("xhost") the machine running the visitorbot program; but note that the user may not even know the identity of the machine on which the visitorbot program is running. Even if the identity of the machine is known, it may be impossible to serve X-windows directly due to "firewalls" or intermittent connections.
Software Agents 439

Figure 1: Current architecture of the agent system. Solid lines represent email communication; dashed lines represent both graphical and email communication.

her userbot. Another advantage of this design is that different users, who may have access to different computing resources, can run different userbots, of varying levels of sophistication. Thus, everyone is not restricted to a "least common denominator" type interface. Perhaps the most important benefit of the design is that a task-specific agent (such as the visitorbot) is not tied to any specific form of communication. The task-specific agent specifies what information is to be transmitted or obtained, but not how the communication should take place. The userbot can then employ a wide range of media, such as graphics, voice, FAX, email, etc. for interacting with its owner. The userbot also can take into account its owner's preferences and such factors as the owner's whereabouts and current computing environment in deciding on the mode of communication. For example, a userbot could incorporate a telephone interface with a speech synthesizer. This would enable a userbot to place a call to its owner (if the owner so desires), read the talk announcement, and collect the owner's preferences by touch-tone. Note that this extension would not require any modification to the visitorbot itself.

Refining the Userbot

Tests of the visitorbot/userbot system described in the previous section showed that users greatly preferred its ease of use and flexibility over our initial monolithic, email-based agent. Now that we had developed a good basic architecture, the logical next step was to incorporate new task-specific agents. In order to do so, we undertook a complete reimplementation of the system. In the new implementation, all visitorbot-specific code was eliminated from the userbots.
We designed a simple set of protocols for communication between task-specific agents and userbots. Again, our approach was pragmatic, in that we tried to establish a minimal set of conventions for the applications we were considering, rather than immediately trying to create a full-blown agent interlingua.

In brief, bots communicate by exchanging email which is tagged with a special header field, "XBot-message-type". The message type indicates the general way in which the body of the message (if any) should be processed by the receiver. For example, the message type "xchoices" means that the message is a request for the receiver to make a series of choices from among one or more sets of alternatives described in the body of the message. The inclusion of the field "XBot-return-note" in the message indicates that the result of processing the message should be mailed back to the sender. The communication protocol establishes the syntax for the data presented in the body of each message type, and the format of the data that results from processing the message. However, the protocol deliberately does not specify the exact method by which the processing is carried out. For example, a userbot may process an xchoices message by creating a pop-up menu, or calling the user on the telephone, or simply by consulting a database of defaults that the user has established.

When applications are developed that demand novel kinds of interactions with userbots, the communication protocols can be extended by adding new message types. This will require the creation of mechanisms for distributing "updates" to the userbots to handle the extensions (an issue we return to below). So far, however, only a very small number of message types (namely, ones for requesting choices, requesting help, and simply conveying a piece of information) have been needed.
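The header-tagging convention can be sketched with Python's standard email library. Only the header names ("XBot-message-type", "XBot-return-note") and the "xchoices" type are from the paper; the body syntax, addresses, and helper names below are invented illustrations:

```python
from email import message_from_string
from email.message import EmailMessage

def make_xchoices(sender, recipient, prompt, alternatives):
    # Build a request tagged with the special header fields; the body
    # format here is a placeholder, not the protocol's actual syntax.
    msg = EmailMessage()
    msg["From"], msg["To"] = sender, recipient
    msg["XBot-message-type"] = "xchoices"
    msg["XBot-return-note"] = sender  # ask for the result to be mailed back
    msg.set_content(prompt + "\n" + "\n".join(alternatives))
    return msg

def dispatch(raw):
    """A userbot would route on the message type; *how* the choice is
    presented (menu, phone call, defaults file) is deliberately open."""
    msg = message_from_string(raw)
    return msg["XBot-message-type"]

request = make_xchoices("visitorbot@research.att.com",
                        "kautzbot@research.att.com",
                        "Preferred meeting times?",
                        ["10:30", "11:00", "2:00"])
print(dispatch(request.as_string()))  # -> xchoices
```

Routing on a small, fixed set of header values is what keeps the receiver's dispatch logic simple while leaving the presentation of each message type entirely to the userbot.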
One question that more experience in building bots will answer is whether the number of basic message types is indeed bounded and small, or if new types are often needed with new applications.

In essence, then, the messages that task-specific bots and userbots exchange can be viewed as intensions - such as a request to make a choice - rather than extensions - for example, if one were to mail a message containing a program that draws a menu on the screen when executed.³ In this
Therefore, in the new implementation messages are normally not displayed until the user makes an explicit request to interact with his or her userbot. This interaction is supported by a continuously- running “active” userbot, as shown in Fig. 3. The main user- bot window indicates the number of outstanding messages waiting to be processed, the userbot’s state (working or idle), and three buttons. Clicking on the “process message” button allows the userbot to process messages that require user in- teraction - for example, bringing up an xchoices window on behalf of the visitorbot. Note, however, that messages that are tagged as “urgent” are always immediately processed by the userbot. is due to Mark Jones. The second button, “user preferences”, brings up a win- dow in which the user can set various options in the behavior of his or her userbot. For example, checking the “autopilot” box makes the userbot pop up windows without waiting to be explicitly told to do so. The “voice” checkbox causes the userbot to announce the receipt of new userbot mail us- ing the speaker in a Sun workstation - a kind of audible “biff” for botmail. The “forward to” options are used to indicate that the userbot should choose try to communicate with its owner at a remote location - for example, by trans- mitting messages via a FAX-modem to the owner’s home telephone. (Currently the code to support the “forward to” options has not yet been completed. The exact appearance and functionality of these options may differ in the final version.) Finally, the third button in the main userbot window brings up the window labeled “taskbots”. This window contains a button for each task-specific agent whose email address is known to the userbot. (This information is main- tained in file that the user can easily customize.) Clicking on a button in this window initiates communication with the designed task-specific agent, by sending a message of type “help” to that agent. 
The communication protocol specifies that the agent should respond to a help message by send- ing back a menu of commands that the agent understands, together with some basic help information, typically in the form of an xchoices message. When the userbot processes this response, it creates a window containing the appropri- ate controls for interacting with that particular task-specific Software Agents 441 Figure 3: Graphical display of a userbot. agent. For example, a user who is hosting a visitor to our lab starts the entire process by clicking on the visitorbot button. This leads to the creation of a window containing buttons for basic visitorbot commands, such as scheduling a new visitor, getting the status of a visit, ordering a schedule to be generated, and so on. Clicking on some of these buttons could lead to the creation of other windows, for example, one in which to type the text of the abstract of the visitor’s talk. At the time that this paper is being written, only the visi- torbot button in the taskbots menu is active. Over the next few months we intend to establish communication with Oren Etzioni’s “finger” agent (used to obtain information about people on the internet) (Etzioni, Lesh, & Segal 1992), and Tom Mitchell’s “calendar” agent (used to schedule meetings among groups of people) (Dent et al. 1992). The fingerbot and calendarbot will not themselves be ported to our labo- ratory’s computers; instead, those programs will run at their respective “homes” (University of Washington and CMU), and communication with userbots at various sites will take place using ordinary intemet email. We hope that the idea of a userbot will provide a powerful and flexible framework for integrating many different kinds of software agents that run in different computing environments. ivacy and Security Early discussions with potential users made it clear that privacy and security are central issues in the successful de- ployment of software agents. 
Some proposed agent systems would filter through all of the user’s email, pulling out and deleting messages that the agent would like to handle. We found that users generally objected to giving a program permission to delete automatically any of their incoming 442 Distributed AI mail. An alternative approach would give the bot authority to read but not modify the user’s mail. The problem with this is that the user’s mail quickly becomes polluted with the many messages sent between the various bots. Our solution to this problem has been to create a pseudo- account for each userbot, with its own mail alias. Mail sent to this alias is piped into a userbot program, that is executed under the corresponding user’s id. This gives the instantiated userbot the authority, for example, to create a window on the user’s display. Any “bot mail” sent to this alias is not seen by the user, unless the userbot explicitly decides to forward it. Each user has a special “.bot” directory, which contains information customized to the particular user. These files specify the particular program that instantiates the userbot, a log of the userbot mail, and the user’s default display id. In general, this directory contains user-specific information for the userbot. It is important to note that this directory does not need to be publicly readable, and can thus contain sensitive information for use by the userbot. Examples of such information include the names of people to which the bot is not supposed to respond, unlisted home telephone numbers, the user’s personal schedule, and so on. Thus, userbots provide a general mechanism for the dis- tribution and protection of information. For a concrete ex- ample, consider the information you get by running the “finger” command. Right now, you have to decide whether your home phone number will be available to everyone on the intemet, or no one at all. 
A straightforward task for your userbot would be to give out your phone number via email on request from (say) faculty members in your department and people listed in your address book, but not to every person who knows your login id. Earlier we described the alternative Telescript model in which messages are programs that are executed on the receiving machine. This model raises concerns of computer security, particularly if such programs are able to access the host's file system. (Security features in Telescript allow the user to disable file access, but this would appear to limit the kinds of tasks Telescript agents could perform.) Userbot systems are by nature secure, insofar as the routines for processing each message type within the userbot are secure. Although this is a non-trivial condition, it would appear to be easier to guarantee that the code of the userbot itself (which is presumably obtained from a trusted source) is secure than to guarantee that every email program (which could come from anyone) does not contain a virus. Extensions and updates to userbots to handle new message types would have to be distributed through secure channels, perhaps by using cryptographic techniques (Rivest, Shamir, & Adleman 1978).

Bots vs. Programs

An issue that is often raised is what exactly distinguishes software agents from ordinary programs. In our view, software agents are simply a special class of programs. Perhaps the best way to characterize these programs is by a list of distinguishing properties:

Communication: Agents engage in complex and frequent patterns of two-way communication with users and each other.

Temporal continuity: Agents are most naturally viewed as continuously running processes, rather than as functions that map a single input to a single output.

Responsibility: Agents are expected to handle private information in a responsible and secure manner.
Robustness: Agents should be designed to deal with unexpected changes in the environment. They should include mechanisms to recover both from system errors and human errors. If errors prevent completion of their given tasks, they must still report the problem back to their users.

Multi-platform: Agents should be able to communicate across different computer system architectures and platforms. For example, very sophisticated agents running on a high-end platform should be able to carry out tasks in cooperation with relatively basic agents running on low-end systems.

Autonomy: Advanced agents should have some degree of decision-making capability, and the ability to choose among different strategies for performing a given task.

Note that our list does not commit to the use of any particular form of reasoning or planning. Although advanced agents may need general reasoning and planning capabilities, our experiments have shown that interesting agent behavior can already emerge from systems of relatively simple agents.

Conclusions

We have described a bottom-up approach to the design of software agents. We built and tested an agent system that addresses the real-world problem of handling the communication involved in scheduling a visitor to our laboratory. Our experiment helped us to identify crucial factors in the successful deployment of agents. These include issues of reliability, security, and ease of use. Security and ease of use were obtained by separating task-specific agents from personal userbots. This architecture provides an extensible and flexible platform for the further development of practical software agents. New task-specific agents immediately obtain a graphical user interface for communicating with users via the userbots. Furthermore, additional modalities of communication, such as speech and FAX, can be added to the userbots without modifying the task-specific agents. Perhaps the hardest problem we encountered was defining the initial task. More attention should be paid to identifying useful and compelling agent applications that blend unobtrusively into ordinary work environments. We believe that the empirical approach taken in this paper is essential for guiding research toward the truly central and difficult issues in agent design.

Acknowledgements

We thank Oren Etzioni for stimulating discussions about softbots during his visit to Bell Labs, leading us to initiate our own softbot project. We also thank Ron Brachman, Mark Jones, David Lewis, Chris Ramming, Eric Sumner, and other members of our center for useful suggestions and feedback.

References

Dent, L.; Boticario, J.; McDermott, J.; Mitchell, T.; and Zabowski, D. 1992. A personal learning apprentice. In Proceedings of AAAI-92, 96-103. AAAI Press/The MIT Press.

Etzioni, O.; Hanks, S.; Weld, D.; Draper, D.; Lesh, N.; and Williamson, M. 1992. An approach to planning with incomplete information. In Proceedings of KR-92, 115-125. Morgan Kaufmann.

Etzioni, O.; Lesh, N.; and Segal, R. 1992. Building softbots for UNIX. Technical report, University of Washington, Seattle, WA.

Maes, P., and Kozierok, R. 1993. Learning interface agents. In Proceedings of AAAI-93, 459-464. AAAI Press/The MIT Press.

Maes, P., ed. 1993. Designing Autonomous Agents. MIT/Elsevier.

Rivest, R. L.; Shamir, A.; and Adleman, L. 1978. A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM 21(2):120-126.

Shoham, Y. 1993. Agent-oriented programming. Artificial Intelligence 60:51-92.