EL: A Formal, Yet Natural, Comprehensive Knowledge Representation*

Chung Hee Hwang & Lenhart K. Schubert
Department of Computer Science, University of Rochester
Rochester, New York 14627-0226
{hwang,schubert}@cs.rochester.edu

Abstract

We present Episodic Logic (EL), a highly expressive knowledge representation well-adapted to general commonsense reasoning as well as the interpretive and inferential needs of natural language processing. One of the distinctive features of EL is its extremely permissive ontology, which admits situations (episodes, events, states of affairs, etc.), propositions, possible facts, and kinds and collections, and which allows representation of generic sentences. EL is natural language-like in appearance and supports intuitively understandable inferences. At the same time it is both formally analyzable and mechanizable as an efficient inference engine.

Introduction

One of the requirements on a knowledge representation is that it should support efficient inference (cf. [Brachman & Levesque, 1985]). Our basic methodological assumption is that this demand on the representation is best met by using a highly expressive logic closely related to natural language itself. The possibility of handling situations, actions, facts, beliefs, attitudes, causes, effects, and general world knowledge simply and directly depends on the expressiveness of the representation. These remarks apply as much to semantic representation of English sentences as to knowledge representation. In fact, the simplest assumption is that the two are one and the same. On that premise, we have been developing Episodic Logic (EL), a highly expressive knowledge and semantic representation well-adapted to commonsense reasoning as well as the interpretive and inferential needs of natural language processing.
EL is a first-order, intensional logic that incorporates from situation semantics the idea that sentences describe situations [Barwise & Perry, 1983; Barwise, 1989]. A distinctive feature of the logic, responsible for its name, is the inclusion of episodic (situational) variables. (Episodes, as the term is construed in EL, subsume events, states of affairs, circumstances, eventualities, etc.) The adjective "episodic" is intended to emphasize the fact that reasoning about the world and the agents in it often involves inference of the temporal and causal connections among transient types (as opposed to eternal types) of situations, i.e., occurrences or state changes.

EL is related to natural language through a Montague-style coupling between syntactic form and logical form, allowing the relationship between surface form and logical form to be specified in a modular, transparent way. EL representations derived from English text are natural and close to English surface form. Episodic variables implicit in English sentences, and temporal relations between those episodes, are automatically introduced into the logical form in the process of deindexing. Very general inference rules, rule instantiation and goal chaining, have been developed that allow for deductive and probabilistic inferences.

We first describe the ontology of EL, which provides the necessary ingredients for interpreting an expressive representation, and then show how some of the more unusual kinds of objects are represented using these ingredients. After that we briefly discuss how inferences are made in EL.

*This research was supported in part by NSF Research Grant IRI-9013160, ONR/DARPA Research Contracts No. N00014-82-K-0193 and No. N00014-92-J-1512, NSERC Operating Grant A8818, and the Boeing Co. in Seattle under Purchase Contract W288104.
EL and its Liberal Ontology

A distinctive feature of EL is its very permissive ontology, which supports the interpretation of a wide range of constructs that are expressible in English. EL can represent conjoined predicates by means of λ-abstraction (e.g., crack longer than 3 inches); restricted quantifiers (e.g., most aircraft manufactured by Boeing); predicate modifiers (e.g., severe damage); perception (e.g., "Mary heard the bomb explode"); attitudes and possible facts (e.g., "Mary believes that gasoline is heavier than water"); actions (e.g., "John thought Mary's dropping the glass was intentional"); opaque contexts (e.g., "John wants to design a new engine"); kinds (e.g., "the two kinds of precious metal, gold and platinum"); etc. We now describe the ontological basis of this wide expressive range of EL.

Model structures for EL are based on an ontology of possible individuals D. Like Hobbs [1985], we believe it is better to expand one's ontology to allow more kinds of entities than to complicate the logical form. Possible individuals include not only real or actual individuals but also imaginary or nonexistent ones (e.g., "Tomorrow's lecture has been cancelled" [Hirst, 1991]).

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

[Figure 1: Ontology of Basic Individuals]

As shown in Figure 1, D includes many unusual types of individuals besides "ordinary" ones. First, unlike situation semantics, EL allows possible situations S. These are much like "partial possible worlds," in that predicate symbols are assigned partial extensions (and antiextensions) relative to them. Among the possible situations are the informationally maximal exhaustive situations, and among the exhaustive situations are the spatially maximal possible times T, which in turn include the spatiotemporally maximal possible worlds W and the spatially maximal, temporally minimal moments of time M.
The treatment of times and worlds as certain kinds of situations is unusual but quite plausible. Sentences like "Last week was eventful" suggest that times such as last week indeed have episodic content. Note that times in the episodic sense are distinguished from clock times (in the metric sense). Also, note that actions or activities are not included in S, since actions are regarded in EL as events paired with their agents. (More on this later.) In general, a situation can be part of many worlds, but an "exhaustive" situation belongs to a unique world.¹ A transitive, reflexive relation Actual ⊆ D × S determines what individuals are actual with respect to a given situation. As well, there is a relation Nonactual ⊆ D × S, disjoint from Actual, determining the possible but nonactual individuals involved in a situation.

Disjointly from S, we have not only ordinary individuals of our experience, but also propositions P (including possible facts F, which we identify with consistent propositions), kinds of individuals K (including kinds of actions KA and kinds of episodes KE), the real numbers R (augmented with −∞ and +∞), and n-D regions Rn, containing subsets of Rⁿ (1 ≤ n ≤ 4). Elements of R4 are space-time trajectories that may not be connected, and whose temporal projection in general is a multi-interval.² This allows for repetitive or quantified events in EL. Finally, there are collections C and n-vectors (i.e., tuples) V, n = 2, 3, ..., of all of these.

¹Note that if two worlds assign the same truth values to the same unlocated (i.e., eternal) statements, they must be one and the same world. Since exhaustive situations are informationally maximal, any true (or false) unlocated statement in a particular world must be true (or false) in every exhaustive situation that is part of that world. Thus, an exhaustive situation cannot belong to more than one world.
Space limitations prevent further elaboration, but readers are referred to [Hwang, 1992; Hwang & Schubert, To appear] for a more detailed discussion of the EL ontology and semantics.

Some Essential Resources of EL

We now outline some of the essential resources of EL, emphasizing nonstandard ones intended to deal with events, actions, attitudes, facts, kinds, and probabilistic conditionals.

Events and Actions

We discuss events (situations) and actions first. While doing so, we will also indicate the flavor of EL syntax. We then discuss kinds of events and actions, and describe how properties of events and actions are represented. Consider the following sentences and their logical forms.

(1) a. Mary dropped the glass
    b. (past (The x: [x glass] [Mary drop x]))
    c. (∃e1: [e1 before u1] [[Mary drop Glass1] ** e1])
(2) a. John thought it was intentional.
    b. (past [John think (That (past [It intentional]))])
    c. (∃e2: [e2 before u2]
         [[John think
            (That (∃e3: [e3 at-or-before e2]
                     [[[Mary | e1] intentional] ** e3]))] ** e2])

Initially, sentence (1a) is translated into an unscoped logical form (ULF), [Mary (past drop) (The glass)], where ( ) indicates unscoped expressions and [ ] indicates infix expressions. (Infix notation is used for readability, with the last argument wrapped around to the position preceding the predicate.) After scoping of the past operator and the The-determiner, we get LF (1b), which is then deindexed to episodic logical form ELF (1c). As seen in (1c), we use restricted quantifiers of form (Qα: Φ Ψ), where Q is a quantifier such as ∀, ∃, The, Most, Few, ...; α is a variable; and restriction Φ and matrix Ψ are formulas. (∀α: Φ Ψ) and (∃α: Φ Ψ) are equivalent to (∀α)[Φ → Ψ] and (∃α)[Φ ∧ Ψ], respectively. In (1c), '**' is an episodic, modal operator that connects a formula with the episode/situation it describes.
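The two quantifier equivalences just given are purely syntactic rewrites, so they can be sketched mechanically. The following is a minimal illustration, with EL formulas flattened into nested Python tuples; this encoding and the function name are our own, not EL or EPILOG syntax.

```python
# Illustrative only: encode EL formulas as nested tuples, e.g.
# ('exists', 'e1', ('e1', 'before', 'u1'), matrix) for (∃e1: [e1 before u1] Ψ).
# The rewrite applies the equivalences from the text:
#   (∀α: Φ Ψ)  =>  (∀α)[Φ → Ψ]       (∃α: Φ Ψ)  =>  (∃α)[Φ ∧ Ψ]

def expand_restricted(q):
    """Rewrite a restricted quantifier into its unrestricted equivalent."""
    quant, var, restrictor, matrix = q
    if quant == 'forall':
        return ('forall', var, ('->', restrictor, matrix))
    if quant == 'exists':
        return ('exists', var, ('and', restrictor, matrix))
    # The, Most, Few, ... have no first-order expansion
    raise ValueError(f"non-standard quantifier {quant!r}")

lf = ('exists', 'e1', ('e1', 'before', 'u1'),
      ('**', ('Mary', 'drop', 'Glass1'), 'e1'))
print(expand_restricted(lf))
# -> ('exists', 'e1', ('and', ('e1', 'before', 'u1'),
#                             ('**', ('Mary', 'drop', 'Glass1'), 'e1')))
```

Note that only ∀ and ∃ expand this way; for The, Most, and Few the restriction is semantically essential, which is one reason EL keeps restricted quantifiers as primitives.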
Intuitively, for Φ a formula and η an episodic term, [Φ ** η] means "Φ characterizes (or completely describes) η."³ Also notice in (1c) that the past operator is deindexed to the predication [e1 before u1], where u1 denotes the utterance event of sentence (1a). Such temporal deindexing is done by a set of recursive deindexing rules [Hwang & Schubert, 1992; Schubert & Hwang, 1990].

A "characterizing" description of an episode is maximal, or complete, in the sense that it provides all the facts that are supported by the episode, except possibly for certain ones entailed by those given. In other words, the episodes so characterized are minimal with respect to the characterizing description, in the part-of ordering among situations; i.e., no proper part of such an episode supports the same description. We also have a more fundamental episodic operator '*', where [Φ * η] means "Φ is true in (or partially describes) η." '*' is essentially an object-language embedding of the semantic notion of truth in an episode or situation. Note that [Φ ** η] implies [Φ * η]. Thus, for instance, [[Mary drop Glass1] ** e1] implies that e1 is a part (in an informational sense) of some episode e2, coextensive with e1, such that [[Glass1 fall] * e2], [[Mary hold Glass1] * (begin-of e2)], [(¬[Mary hold Glass1]) * (end-of e2)], etc.

The notion of a complete description (characterization) of a situation using '**' is crucial for representing causal relationships among situations. For instance, if (1a) is followed by "It woke up John," "it" refers to an event completely described by (1a), i.e., a minimal (spatiotemporally as well as informationally) event supporting (1a), not simply some event partially described by (1a). (For a more detailed argument, see [Hwang & Schubert, In print].)

In (2b), That is a proposition-forming (nominalization) operator to be discussed later. In (2a, b), "it" refers to Mary's action of dropping the glass, and is resolved in (2c) to [Mary | e1], "the action whose performance by Mary constitutes event e1."⁴ '|' is a pairing function (similar to Lisp "cons") applicable to individuals and tuples. Thus, actions are represented as agent-event pairs in EL. This representation is motivated by the observation that actions are distinguished from events or episodes in that they have well-defined agents. That is why it makes sense to talk about "intentional actions," but not "intentional events." It also seems that the criteria for individuating actions are different from those for individuating episodes. For example, it seems that (3) and (4) below may describe the same episode or event (an exchange of a boat for a sum of money), but different actions (a buying versus a selling).

(3) John bought the boat from Mary.
(4) Mary sold the boat to John.

Note, in particular, that the buying in (3) may have been performed reluctantly and the selling in (4) eagerly, but it would be very odd to say that the events described in (3) or (4) were reluctant, or eager, or occurred reluctantly or eagerly. Events simply do not have such properties. If we assumed they did, we might end up saying, contradictorily, that an event was both reluctant and eager.⁵

Several event- or situation-based formalisms have been proposed within the AI community also.

²Note that situations occupy such spatiotemporal trajectories, rather than occupying space and time separately. This point is supported by sentences like "It did not snow on the trip from Madison to Chicago" [Cooper, 1985]. As Cooper points out, this sentence "could be true even if it had snowed during the trip on the road between Madison and Chicago and yet had not been snowing at any time at the place where the car was at the time."

⁴Notice the existential variable e1 occurring outside its quantifier scope. This is allowed in EL thanks to the "parameter mechanism," which allows the binding of variables to be carried beyond their quantifier scopes. See [Hwang & Schubert, In print].

⁵Our view appears to resonate with Jacobs' [1987]. Although our conception of actions as agent-event pairs is somewhat different from Jacobs', who regards actions as VIEWS of events, both are based on the intuition that events and actions are different, though closely related.
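The contrast between (3) and (4) is easy to model concretely: one event term, two agent-event pairs. The sketch below uses plain Python tuples for the '|' pairing function; the event name and predicate set are invented for illustration and are not EPILOG syntax.

```python
# Sketch of the (3)/(4) intuition: one underlying exchange event, but two
# distinct actions, so an action property can hold of one and not the other.

def pair(agent, event):
    """EL's '|' pairing function, similar in spirit to Lisp cons."""
    return (agent, event)

E1 = 'boat-transfer'            # the single exchange event (invented name)

buying  = pair('John', E1)      # [John | E1], the action described by (3)
selling = pair('Mary', E1)      # [Mary | E1], the action described by (4)

# Same underlying episode ...
assert buying[1] == selling[1]
# ... but distinct actions:
assert buying != selling

# "Reluctant" can therefore apply to the buying without applying to the
# selling, with no contradiction about the shared event.
reluctant = {buying}
assert buying in reluctant and selling not in reluctant
```

The design point this illustrates is exactly the paper's: properties like reluctance attach to the agent-event pair, never to the bare event, so no event ends up both reluctant and eager.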
The first was the situation calculus [McCarthy & Hayes, 1969], which introduces explicit situation-denoting terms and treats some formulas and functions (namely, situational fluents) as having situation-dependent values. However, situations are viewed as instantaneous "snapshots" of the universe. (They correspond to M in our ontology.) As such they cannot serve as models of the events and situations of ordinary experience, which can be temporally extended while having limited spatial extent and factual content, can cause each other, etc. Kowalski and Sergot [1986] developed the event calculus in an effort to avoid the frame problem that exists in the situation calculus. Events in the event calculus are local (rather than global), and initiate or terminate "time periods" (probably best understood as circumstances or states of affairs, since they can be concurrent yet distinct). The main limitation is that (as in a Davidsonian approach) events are associated only with simple subject-verb-object(s) tuples, and not with arbitrarily complex descriptions.

Kinds of Events and Actions. As separate categories from situations and events, there are also kinds of events and actions. Below are some sample sentences with their logical forms (with certain simplifications). Ka in (5b) and (6b) is a property-forming (nominalization) operator that maps monadic predicate intensions to (reified) types of actions and attributes. Ke in (7b) and (8b) is a sentence nominalization operator, which forms (reified) types of events from sentence intensions.

(5) a. Skiing is strenuous
    b. [[(Ka ski) strenuous] ** E1]
(6) a. Mary wants to paint the wall
    b. [[Mary want (Ka (paint Wall3))] ** E2]
(7) a. For John to be late is rare
    b. [[(Ke [John late]) rare] ** E3]
(8) a. Bill suggested to John that he call Mary
    b. [[Bill suggest-to John (Ke [John call Mary])] ** E4]

"Skiing" and "to paint the wall" are kinds of actions, while "for John to be late" and "John call Mary" are kinds of events.

Properties of Events and Actions. Typically, properties of actions and attributes (manner, purpose, degree, quality, etc.) are introduced through predicate operators, and those of episodes (duration, frequency, spatiotemporal location, etc.) through sentential operators. Consider the following examples.

(9) a. John fixed the engine with Bill yesterday
    b. (past (The x: [x engine]
         ((adv-e (during Yesterday))
          [John ((adv-a (with-accomp Bill)) (fix x))])))
    c. (∃e1: [e1 before u1]
         [[^(during (yesterday-rel-to u1)) ∧
           ^λe[[John | e] with-accomp Bill] ∧
           [John fix Engine2]] ** e1])
(10) a. Mary bought a brush to paint the wall
     b. (past (The x: [x wall]
          [Mary ((adv-a (for-purpose (Ka (paint x))))
                 (λz (∃y: [y brush] [z buy y])))]))
     c. (∃e2: [e2 before u2]
          [[^λe[[Mary | e] for-purpose (Ka (paint Wall3))] ∧
            (∃y: [y brush] [Mary buy y])] ** e2])

"Yesterday" in (9a) implicitly modifies the episode described by "John fix the engine" (its temporal location).

³Our episodic variables are different from Davidsonian [1967] event variables in that they can be "attached" to any formula, whereas Davidsonian ones can be "attached" only to atomic ones. Note that Davidson's method cannot handle sentences with quantifiers or negation. Event variables that are closely related to ours are those of Reichenbach [1947], who, like situation semanticists, viewed a sentence as describing a situation.
"With Bill" in (9a) and "to paint the wall" in (10a), on the other hand, implicitly modify actions performed by John and Mary respectively (by specifying their "accompaniment" and "purpose"). As illustrated in the indexical (b)-formulas above, implicit episode modifiers appear as sentential operators of form (adv-e π), where π is a predicate over episodes; implicit action modifiers appear as predicate modifiers of form (adv-a π), where π is a predicate over actions/attributes. Simple deindexing rules for adverbials (which we omit here; see [Hwang & Schubert, 1993a]) convert the (b)-formulas into the nonindexical ELFs shown in (c). Note in the (c)-formulas that our treatment of adverbials views them as providing conjunctive information about the described episode (or action), as in Dowty's system [1982]. '^' is an extension operator that applies its predicate operand to the "current" episode. For example, ^(during 1993) or ^λe[[John | e] with-accomp Bill] is true in situation s iff s occurs during 1993 or the action [John | s] is accompanied by Bill. Notice that the adv-a rule introduces the agent-event pair [z | e] into the formula.

The following are some relevant meaning postulates. For π, π′ 1-place predicates, η a term, and Φ a formula:

  • [^π ∧ ^π′] ↔ ^λe[[e π] ∧ [e π′]]
  • [[^π ∧ Φ] ** η] ↔ [[[η π] ∧ Φ] ** η]

Applying the above meaning postulates to (9c) and (10c), we obtain the following (assuming e1, e2 are skolemized to E1, E2).

(9′) d. [E1 during (yesterday-rel-to u1)]
     e. [[John | E1] with-accomp Bill]
     f. [[John fix Engine2] * E1]
     g. (∃e3: [e3 coexten-subep-of E1] [[John fix Engine2] ** e3])
(10′) d. [[Mary | E2] for-purpose (Ka (paint Wall3))]
      e. [(∃y: [y brush] [Mary buy y]) * E2]
      f. (∃e4: [e4 coexten-subep-of E2] [(∃y: [y brush] [Mary buy y]) ** e4])

[e coexten-subep-of e′] means that e and e′ occupy the same spatiotemporal location and that e is an (informational) part of e′.
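The step from a characterization like (9c) to the individual facts (9′d-f) is essentially mechanical: each extension-operator conjunct ^π becomes a direct predication on the episode, and the residue still describes the episode. The sketch below flattens this into tuples, ignoring λ-abstraction and the */** distinction; the encoding and function name are ours, purely for illustration.

```python
# Toy version of the second meaning postulate, [[^π ∧ Φ] ** η] ↔ [[[η π] ∧ Φ] ** η],
# applied repeatedly: peel each extension-operator conjunct off a
# characterization, turning it into a predication on the episode itself.

def peel(conjuncts, episode):
    """Split a characterized conjunction into episode facts plus residue."""
    facts, residue = [], []
    for c in conjuncts:
        if isinstance(c, tuple) and c[0] == '^':   # extension-operator conjunct
            facts.append((episode,) + c[1:])       # becomes [η π]
        else:
            residue.append(c)                      # still characterizes η
    return facts, residue

# Flattened stand-in for (9c)'s conjunction, characterizing episode E1:
conj = [('^', 'during', 'yesterday'),
        ('^', 'with-accomp', 'Bill'),
        ('John', 'fix', 'Engine2')]

facts, residue = peel(conj, 'E1')
assert facts == [('E1', 'during', 'yesterday'), ('E1', 'with-accomp', 'Bill')]
assert residue == [('John', 'fix', 'Engine2')]   # cf. (9'd-f)
```

A faithful implementation would also emit the weakened '*' statement and the coextensive subepisode of (9′f-g); the sketch shows only the conjunct-splitting step.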
Intensions, Attitudes and Possible Facts

We now briefly discuss attitude and intensional verbs.

(11) a. John will design the engine
     b. (pres (futr [John (design λz[z = (The engine)])]))
     b′. (pres (futr [John (design λz(The y: [y engine] [z = y]))]))
(12) a. Mary told John that the engine gave up
     b. (past (The x: [x engine]
          [Mary tell John (That (past [x give-up]))]))
     c. (∃e1: [e1 before u1]
          [[Mary tell John
             (That (∃e2: [e2 before e1]
                      [[Engine3 give-up] ** e2]))] ** e1])

As shown in (11), intensional verbs are treated as predicate modifiers in EL. For objects of intensional verbs, there is generally no presupposition of actual existence, at least not in the "opaque" (de dicto) reading. That is, "the engine" in (11), for instance, does not necessarily exist in the world wherein the sentence is evaluated. That is why it is scoped under the intensional verb in (11b′). (We omit the deindexed formula for (11a), but see [Hwang, 1992].) The "transparent" (de re) reading can be obtained by choosing wide scope for the unscoped term (The engine), i.e., just inside the tense operator, but outside the intensional verb.

Objects of attitudes are taken to be (reified) propositions in EL. Propositions are formed by a nominalization operator That, as shown in (12b, c). Recall that we take propositions as subsuming possible facts. Possible facts are just consistent propositions: there are self-contradictory propositions (and these may, for instance, be objects of beliefs, etc.), but there are no self-contradictory possible facts. We should remark here that events and facts are often equated, e.g., by Reichenbach [1947]. As Vendler [1967] has pointed out, this is untenable. Most importantly, events take place over a certain time interval, and may cause and be caused by other events. In contrast, facts do not happen or take place. They are abstractions (like propositions) and as such provide explanations, rather than causes.
However, they Representation for Actions & Motion 679 are so closely related to events (e.g., it may be a fact that un event occurred or will occur) that people often talk of facts as if they were causes. We regard such talk as metonymic, referring to the “events behind the facts.” Kinds and Probabilistic Conditionals We have seen operators Ka and Ke, forming kinds of actions and events. We now consider a more general kind-forming operator, K, that maps predicates to indi- viduals. It seems that many generic sentences are best translated into formulas involving kinds. Other kinds of generic sentences are more easily represented as proba- bilistic (generic) conditionals, and we will discuss these after “kind” expressions. First, consider the following sentences. (13) a. b. C. Gold is expensive, but John buys it regularly [(gpres [(K gold) expensive]) A (pres ((adv-f regular) [John buy It]))] [(3el: [[el extended-ep] A [ul during el]] [[(Kgold) expensive] ** el]) A (3e2: [et2 at-about ul] [L[zt;flar] A (mult [John buy (Kgold)])] (14) a. Wasps are pesky and they spoiled our picnic b* [(gpres [(K (Pl ur wasp)) (plur pesky)]) A (past [They spoil Picnicl])] Following Carlson [1982] and Chierchia [1982], we trans- late mass or abstract nominals like gold, corrosion, wel- fare, etc., and bare plurals like wasps into kinds. In (13a, b) above, ‘it’ refers to ‘gold’ in the first clause and is resolved as (Kgold) in (13~). In (13b), adv-f (standing for frequency adverbial) is an operator that maps pred- icates over sequences (i.e., composite episodes) to sen- tence modifiers, and its deindexing rule introduces the mult operator shown in (13~). For @ a formula and q a composite episode, [(mult Qi) ** 91 reads “every compo- nent episode of q is of type a.” In (14b), plur is an oper- ator that maps predicates applicable to (non-collective) individuals to predicates applicable to collections. That is, (plur P) is true of a collection just in case P is true of each member. 
(plur is similar to Link's [1983] "star" operator.) We omit the deindexed formula for (14a) for space reasons.

Now in (13a), what John buys is apparently quantities of gold, not the "kind" gold. We obtain such "instance" or "realization" interpretations using the following meaning postulates. For kinds κ and telic, object-level predicates Π:

  □ [[τ Π κ] ↔ (∃x: [x instance-of κ] [τ Π x])]

For all monadic predicates π:

  □ (∀x [[x instance-of (K π)] ↔ [x π]])

Then, we have the following equivalence: [John buy (K gold)] ↔ (∃x: [x gold] [John buy x]). Our uniform treatment of mass terms and bare plurals as kinds in EL deals straightforwardly with seemingly problematic sentences like (13a) and (14a), in which kinds and instances appear to co-refer.

Generalizations involving indefinite count singulars (e.g., "A bicycle has two wheels") or bare numeral plurals (e.g., "Two men can lift a piano") are translated into probabilistic conditionals (i.e., extensionally interpretable generic conditionals), rather than kind-level predications. Such conditionals turn out to be very useful in representing naive physics and causal laws (of the kinds discussed in [Hayes, 1985; Hobbs et al., 1987]) and unreliable knowledge in general, like the following.

(15) a. If one drops an open container containing some liquid, then the container may cease to contain any liquid.
     b.
        (∃x: [x person]
           (∃e1: [(∃y: [[y container] ∧ [y open]]
                     (∃z: [z liquid] [y contain z])) ** e1]
              (∃e2: [(begin-of e2) during e1]
                 [[x drop y] ** e2])))
        →.3,x,y,e1,e2
        (∃e3: [[e2 cause-of e3] ∧ [e3 right-after e2]]
           [(¬(∃v: [v liquid] [y contain v])) ** e3])

Here, '.3' attached to the conditional is a lower bound on the statistical probability, and x, y, e1, e2 are controlled variables.⁶ This rule says, roughly, that in at least 30% of the situations in which the antecedent is true, the consequent will also be true.⁷ It appears that for many conditional generalizations, a representation in terms of a probabilistic conditional with control over all existentials in the antecedent that occur anaphorically in the consequent leads to intuitively reasonable uncertain inferences. We provide a "first cut" formal semantics in [Hwang, 1992; Hwang & Schubert, To appear].

Inference Rules

The main inference rules in EL are Rule Instantiation (RI) and Goal Chaining (GC). They are generalizations of "forward chaining" and "backward chaining," in AI terminology.

RI allows arbitrarily many minor premises to be matched against arbitrarily deeply embedded subformulas of a "rule" (an arbitrary formula, though typically a conditional with quantified or controlled variables). As such, it is similar to "nested resolution" [Traugott, 1986], but avoids skolemization. Instead of stating the rule formally (which we have done elsewhere [1993b] & [In print]), we illustrate its use with a simple example.

⁶As mentioned earlier, the parameter mechanism in EL lets existential variable bindings be carried beyond their quantifier scope. Different choices of controlled variables lead to different readings. (This addresses the "proportion problem"; cf. [Schubert & Pelletier, 1989].)

⁷If the consequent of the rule said "the container will contain less liquid than before," then the conditional would have a much higher lower bound, say, '.95'.
Suppose we have the following rule (with all episodic variables suppressed for simplicity),

   (∀x: [x person]
      [[[x healthy] ∧ [[x rich] ∨ (∃y: [x has-job y])]]
       → [x contented]]),

For anyone, if he is healthy and is rich or has a job, he is contented,

and assume we are given the following facts: [Joe man] and [Joe has-job Lawyer]. Then RI would trigger on the second fact, matching [Joe has-job Lawyer] to [x has-job y], and thus binding x to Joe and y to Lawyer. This also particularizes [x person] in the rule to [Joe person], and this would immediately be verified by the "type specialist," with use of [Joe man] and the implicit subsumption relation between person and man. Substituting truth for both of the matched subformulas and simplifying, RI would then infer

   [Joe healthy] → [Joe contented],

i.e., Joe is contented provided that he is healthy. Note that the matching process can substitute for either universal variables (provided that the universal quantifier lies in a positive environment) or existential variables (provided that the existential quantifier lies in a negative environment).⁸ More generally, variables controlled by probabilistic conditionals and quantified variables in "facts" may also be bound in the matching process.

For instance, suppose that the rule above were slightly reformulated to say "If a person is healthy and either is rich or has a job, then he is probably (with lower bound .6 on the probability) contented" (it should not be hard to see how to write this down formally); and suppose [Joe has-job Lawyer] had been replaced by (∃z [Joe has-job z]), and the additional fact [Joe healthy] given. Then RI would still have applied, and would have yielded conclusion [Joe contented].6, i.e., with an epistemic probability of at least 60%, Joe is contented.
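EPILOG's RI is far richer than this (polarity environments, type specialists, probabilities), but the core match-and-simplify step of the Joe example can be sketched compactly. The tuple encoding, the crude variable convention, and all names below are our own illustration, not EPILOG's algorithm or syntax.

```python
# Toy sketch of one Rule Instantiation step: match a ground fact against
# an embedded subformula of the rule, bind the rule's variables, and emit
# the simplified conditional residue.

def match(pattern, fact, bindings):
    """Unify a flat pattern like ('x', 'has-job', 'y') with a ground fact.
    Crude convention: short lowercase tokens (x, y, e1, ...) are variables."""
    if len(pattern) != len(fact):
        return None
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.islower() and len(p) <= 2:          # variable token
            if b.get(p, f) != f:                 # clash with earlier binding
                return None
            b[p] = f
        elif p != f:                             # mismatched constant
            return None
    return b

# From the rule in the text: the matchable minor premise and the conclusion.
minor_premise = ('x', 'has-job', 'y')
conclusion = ('x', 'contented')

b = match(minor_premise, ('Joe', 'has-job', 'Lawyer'), {})
assert b == {'x': 'Joe', 'y': 'Lawyer'}

# [Joe person] would now be verified by the type specialist from [Joe man];
# substituting truth for the matched subformulas leaves the residue:
residue = ('->', ('Joe', 'healthy'), tuple(b.get(t, t) for t in conclusion))
assert residue == ('->', ('Joe', 'healthy'), ('Joe', 'contented'))
```

What the sketch omits is precisely what makes RI general: checking that matched universals sit in positive environments and existentials in negative ones, and propagating probability lower bounds along the chain.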
While RI is typically used for "spontaneous" (input-driven) inference chaining when new facts are asserted, goal chaining (GC) is used for deliberate, goal-directed inference, for instance when answering questions. GC is the exact dual of RI. For example, suppose again that we have the rule and facts given above, and we wish to answer the question, "Is Joe contented?". Then GC would reduce this goal to the subgoal "Is Joe healthy?" in one step. (It would do this either from the original rule and facts, or, if the above result of RI had been asserted into the knowledge base, from the latter.)

In actual use, RI and GC are slightly more subtle than the above examples suggest. First, there are two versions of each rule, whose (sound) use depends on the configuration of quantifiers for matched variables. Second, goal-directed reasoning is supplemented with natural deduction rules, such as that to prove a conditional, we can assume the antecedent and prove the consequent. And finally, there is some limited use of goal chaining in input-driven inference, so as to verify parts of rules, and some limited use of input-driven inference in goal-directed reasoning, so as to elaborate consequences of assumptions that have been made.

Concluding Remarks

EL is a very expressive knowledge representation; its ontology allows for possible situations (events, states, states of affairs, etc.), actions, attitudes and propositions, kinds, and unreliable general knowledge, among other things. As such, EL goes beyond the current state of the art as represented by such works as [Alshawi & van Eijck, 1989; Brachman et al., 1991; Hobbs et al., 1987; Shapiro, 1991; Sowa, 1991].

⁸Positive and negative environments correspond respectively to embedding by an even and odd number of negations, implicational antecedents, and universal quantifier restriction clauses. Only subformulas embedded by extensional operators ¬, ∧, ∨, →, ∀, ∃, and generic conditionals may be matched by RI.
All features of EL are strongly motivated by corresponding expressive devices found in natural languages, i.e., generalized quantifiers, modifiers, nominalization, etc. As a result, knowledge can be cast in a very natural, understandable form, and intuitively obvious inferences can be modelled in a direct, straightforward way.

One of the most important remaining problems is the principled handling of probabilities. The state of the art in probabilistic inference (e.g., [Pearl, 1988; Bacchus, 1990]) is not such as to provide concrete technical tools for a logic as general as EL. Our current techniques consist mainly of probabilistic inference chaining, which is demonstrably sound under certain conditions. As well, the implementation applies a "noncircularity principle" which prevents the same knowledge from being used twice to "boost" the probability of a particular conclusion. Apart from this, independence assumptions are used where there are no known dependencies, and lower probabilities are manipulated in accord with the laws of probability. However, we lack a general theory for combining evidence for (or against) a given conclusion.

Another remaining problem is inference control. Right now EPILOG terminates forward inference chains when either the probability or the "interestingness" of the inferred formulas becomes too low. We are convinced that "interestingness" is a crucial notion here, and that it must allow for context (salience) and for the inherent interestingness of both objects and predicates, and the interaction between these (e.g., an object should become more interesting if it is found to have interesting properties). We have experimented with such measures, but have not achieved uniformly satisfactory inference behavior.

The kinds of EL formulas we have shown are in principle derivable from surface structure by simple, Montague-like semantic rules paired with phrase structure rules.
While developing a grammar and semantic rules that would cover most of English would be a very large undertaking, we have developed GPSG-like grammars to cover story fragments and (more ambitiously) sizable dialogs from the TRAINS domain [Allen & Schubert, 1991]. For some such fragments, as well as rules for mapping indexical LFs into nonindexical ELFs, see [Hwang, 1992; Hwang & Schubert, To appear]. The EPILOG implementation [Schaeffer et al., 1991] of EL has been applied to small excerpts from the Little Red Riding Hood story, making complex inferences about causation [Schubert & Hwang, 1989]; and it reasons with telex reports for aircraft mechanical problems in a message processing application for the Boeing Commercial Airplane Reliability and Maintainability Project [Namioka et al., In print; Hwang & Schubert, 1993b].

Representation for Actions & Motion 681

References

[Allen, J. F. & Schubert, L. K. 1991] The TRAINS Project. TR 382, U. of Rochester, Rochester, NY.
[Alshawi, H. & van Eijck, J. 1989] Logical forms in the Core Language Engine. In Proc. 27th Annual Meeting of the ACL, Vancouver, Canada. 25-32.
[Bacchus, F. 1990] Representing and Reasoning with Probabilistic Knowledge: A Logical Approach to Probabilities. MIT Press, Cambridge, MA.
[Barwise, J. 1989] The Situation in Logic. CSLI, CA.
[Barwise, J. & Perry, J. 1983] Situations and Attitudes. MIT Press, Cambridge, MA.
[Brachman, R. J. & Levesque, H. J. 1985] Introduction. In Readings in Knowledge Representation. Morgan Kaufmann, San Mateo, CA.
[Brachman, R. J., McGuinness, D. L., Patel-Schneider, P. F., Resnick, L. A., & Borgida, A. 1991] Living with Classic: When and how to use a KL-ONE-like language. In Sowa, J. F., editor, 1991. 401-456.
[Carlson, G. N. 1982] Generic terms and generic sentences. J. of Philosophical Logic 11:145-181.
[Chierchia, G. 1982] On plural and mass nominals. Proc. West Coast Conf. on Formal Semantics 1:243-255.
[Cooper, R.
1985] Aspectual classes in situation semantics. CSLI-84-14C, CSLI, CA.
[Davidson, D. 1967] The logical form of action sentences. In Rescher, N., ed., The Logic of Decision and Action. U. of Pittsburgh Press.
[Dowty, D. 1982] Tense, time adverbs and compositional semantic theory. Linguistics & Philosophy 5:23-55.
[Hayes, P. J. 1985] Naive physics I: Ontology for liquids. In Hobbs, J. R. & Moore, R. C., eds., Formal Theories of the Commonsense World. Ablex, Norwood, NJ. 71-108.
[Hirst, G. 1991] Existence assumptions in knowledge representation. Artificial Intelligence 49:199-242.
[Hobbs, J. R. 1985] Ontological promiscuity. In Proc. 23rd Annual Meeting of the ACL. Chicago, IL. 61-69.
[Hobbs, J. R., Croft, W., Davies, T., Edwards, D., & Laws, K. 1987] Commonsense metaphysics and lexical semantics. Computational Linguistics 13:241-250.
[Hwang, C. H. 1992] A Logical Approach to Narrative Understanding. Ph.D. Dissertation, U. of Alberta, Canada.
[Hwang, C. H. & Schubert, L. K. 1992] Tense trees as the "fine structure" of discourse. In Proc. 30th Annual Meeting of the ACL. Newark, DE. 232-240.
[Hwang, C. H. & Schubert, L. K. 1993a] Interpreting temporal adverbials. In Proc. Human Language Technology, ARPA Workshop, Princeton, NJ.
[Hwang, C. H. & Schubert, L. K. 1993b] Meeting the interlocking needs of LF-computation, deindexing, and inference: An organic approach to general NLU. In Proc. 13th IJCAI, Chambéry, France.
[Hwang, C. H. & Schubert, L. K. In print] Episodic Logic: A situational logic for natural language processing. In Situation Theory & its Applications, V. 3, CSLI, CA.
[Hwang, C. H. & Schubert, L. K. To appear] Episodic Logic: A comprehensive semantic representation and knowledge representation for language understanding.
[Jacobs, P. S. 1987] Knowledge-intensive natural language generation. Artificial Intelligence 33:325-378.
[Kowalski, R. & Sergot, M. 1986] A logic-based calculus of events. New Generation Computing 4:67-95.
[Link, G.
1983] The logical analysis of plurals and mass terms: A lattice-theoretical approach. In Bäuerle, Schwarze, & von Stechow, eds., Meaning, Use, and Interpretation of Language. Walter de Gruyter, Berlin, Germany. 302-323.
[McCarthy, J. & Hayes, P. J. 1969] Some philosophical problems from the standpoint of artificial intelligence. In Meltzer et al., eds., Machine Intelligence, V. 4. 463-502.
[Namioka, A., Hwang, C. H., & Schaeffer, S. In print] Using the inference tool EPILOG for a message processing application. Int. J. of Expert Systems.
[Pearl, J. 1988] Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, CA.
[Reichenbach, H. 1947] Elements of Symbolic Logic. Macmillan, New York, NY.
[Schaeffer, S., Hwang, C. H., de Haan, J., & Schubert, L. K. 1991] The User's Guide to EPILOG. Edmonton, Canada.
[Schubert, L. K. To appear] Formal foundations of Episodic Logic.
[Schubert, L. K. & Hwang, C. H. 1989] An Episodic knowledge representation for narrative texts. In Proc. KR '89, Toronto, Canada. 444-458.
[Schubert, L. K. & Hwang, C. H. 1990] Picking reference events from tense trees: A formal, implementable theory of English tense-aspect semantics. In Proc. Speech & Natural Language, DARPA Workshop, Hidden Valley, PA. 34-41.
[Schubert, L. K. & Pelletier, F. J. 1989] Generically speaking, or, using discourse representation theory to interpret generics. In Chierchia, Partee, & Turner, eds., Property Theory, Type Theory, and Semantics, V. 2. Kluwer, Boston, MA. 193-268.
[Shapiro, S. C. 1991] Cables, paths, and "subconscious" reasoning in propositional semantic networks. In Sowa, J. F., editor, 1991. 137-156.
[Sowa, J. F., editor. 1991] Principles of Semantic Networks: Explorations in the Representation of Knowledge. Morgan Kaufmann, San Mateo, CA.
[Sowa, J. F. 1991] Toward the expressive power of natural language. In Sowa, J. F., editor, 1991. 157-189.
[Traugott, J. 1986] Nested resolution. In Proc. 8th Int. Conf.
on Automated Deduction (CADE-8). 394-402.
[Vendler, Z. 1967] Causal relations. J. of Philosophy 64:704-713.
Charles L. Ortiz, Jr.*
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104
clortiz@linc.cis.upenn.edu

Abstract

In planning tasks an agent may often find himself in a situation demanding that he choose an action that would prevent some unwanted event from occurring. Similarly, in tasks involving the generation of descriptions or explanations of sequences of events, it is often useful to draw as many informative connections as possible between events in the sequence; often, this means explaining why certain events are not possible. In this paper, I consider the semantics of event prevention and argue that a naive semantics which equates prevention with the elimination of all future possibility of the event in question is often difficult, if not impossible, to implement. I argue for a more useful semantics which falls out of some reasonable assumptions regarding restrictions on the set of potential actions available to an agent: (1) those actions about which the agent has formed intentions, (2) those actions consistent with the agent's attitudes (including its other intentions), and (3) the set of actions evoked by the type of situation in which the agent is embedded.

Introduction

Any reasonable theory of action must consider the semantics of preventing events. This is important in planning: an agent may find himself in a situation demanding that he choose an action that would prevent some unwanted event from occurring. This is also important to tasks involving the generation of descriptions or explanations of sequences of events: it is often useful in such descriptions to draw as many informative connections as possible between events in the sequence. A naive definition of event prevention motivated by examples such as (1a) might base the notion on the creation of circumstances in which the event to be prevented could not occur.
Such a definition would count too few events as legitimately "preventing," excluding reasonable cases such as (1b) in which intuition suggests that the agent who was prevented was somehow predisposed to not attempt the desired action in the new, resulting situation.

*This research was supported by the following grants: ARO no. DAAL 03-89-C-0031 Prime and DARPA no. N00014-90-J1863.

(1a) I prevented him from drinking this water by drinking it myself.
(1b) I prevented him from drinking this water by taking it away.

By examining the use of the verb prevents as it occurs in normal language I will argue that its commonsense usage is much more restrictive in terms of the set of possible futures relative to which it is interpreted. I claim that this more restrictive and more useful notion is a consequence of reasonable contextual restrictions that agents place on the set of potential actions available to them: (1) those actions about which the agent has formed intentions, (2) those actions consistent with the agent's attitudes (including its other intentions), and (3) the set of actions evoked by the type of situation in which the agent is embedded (for example, in a traffic situation, the set of actions defined by the vehicle code). I will show that many of these properties need not be taken as axiomatic but rather can be seen as deriving from a set of assumptions regarding the rational constitution of agents.

Background

One characteristic of the notion of some event, e, preventing another event, e', is that it makes implicit reference to an event that never occurs (e'). This suggests that the semantics must consider future possibility, only a portion of which might be realized. A first attempt at a definition might therefore base it on the conditional "if e does not occur, e' will." Unfortunately, there is an obvious problem with this sort of definition. Consider the statement,

(2) The vaccine prevented him from getting smallpox.
Unintended would be a suggestion that smallpox would have inevitably eventuated had the person not received the vaccine. A more acceptable definition in terms of a model of branching time was suggested by McDermott [McDermott, 1982]: e prevents e' just in case before e, e' was possible, while after e, e' became impossible. In this paper, I will refer to this definition of prevention as necessary prevention, for reasons that will become clear shortly. Sentence (2) is an example of necessary prevention. The problem with the definition of necessary prevention, as I have already noted by way of example (1b), is that it handles too few cases. In fact, in the same paper McDermott quotes James Allen as objecting to its limited usefulness: a literal application of the definition by an agent in the course of planning would make it difficult for that agent to be able to prevent anything, there being so many things in the world outside an agent's control. In the next section I consider the constraining influence that the prevailing context, in the form of the beliefs, desires, and intentions (BDIs) of the agent being prevented, might have on the set of actions that should realistically be considered "possible."

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

In order to express some of these contextual restrictions, I will draw on a logic of BDI developed by Cohen and Levesque (C&L) [Cohen and Levesque, 1990] to which I will make a few additions. Their logic models belief with a weak S5 modal logic where possible worlds are linear sequences of event types; complex action descriptions are possible by way of statements in dynamic logic. An agent i's belief in some proposition φ is expressed by the statement Bel(i, φ). An agent's goals (consistent desires) are similarly captured by statements of the form Goal(i, φ).
The semantics of both Bel and Goal are in terms of two sets of possible worlds: one captures the agent's goals and is a subset of the second set, which captures the agent's beliefs. An agent's intentions are then composite objects modeled as persistent goals which an agent will maintain until the action intended is achieved or until the agent believes the action is no longer possible. Their temporal model is one based on discrete linear time, indexed by the integers, with modal temporal operators: ◊φ means φ is eventually true in the current world (which includes the current moment), □φ is defined as ¬◊¬φ, and later(φ) is defined as ¬φ ∧ ◊φ. Their action representation is based on dynamic logic in which primitive actions are closed under nondeterministic choice (α | β), sequencing (α ; β), tests (p?), and iteration (α*). Conditional actions are defined by way of the usual if-then-else statement: [if p then α else β] is defined as [p? ; α | ¬p? ; β]. Finally, the modal operators happens(α) and done(β) refer to the actions α and β as, respectively, happening next or as having just happened at the current world-time point (with an optional extra argument standing for the agent of the action). The reader is referred to [Cohen and Levesque, 1990] for details on the logic.

Since we will need to refer to future possibility I will introduce the following branching modal operators into C&L's logic: ◊_B φ means that among all of the possible worlds with pasts identical to the real one, there is some possible future in which φ holds. □_B is defined as ¬◊_B ¬. Formally, this can be done as follows. Let the set of worlds compatible with world w at time t (written comp(w, t)), where, as in C&L, T is the set of possible worlds and each world is a function from times to event types, be:

comp(w, t) = {w' ∈ T | w'(t') = w(t') for all t' ≤ t, and M, w, t' ⊨ p iff M, w', t' ⊨ p for each primitive proposition p}.
This collapses all of the worlds with pasts identical to the current world; as such it introduces a forward branching structure. The operators are then defined as follows: M, w, t ⊨ ◊_B φ iff for some w' ∈ comp(w, t) there is a t* ≥ t such that M, w', t* ⊨ φ. □_B is then defined in the usual way: □_B φ =def ¬◊_B ¬φ. I will also make use of an operator, ↛, which I will define as follows:

e ↛ e' =def happens(e) ⊃ ¬later(happens(e'))

which can be glossed as stating that e is not followed by e'.

In this paper, I will adopt a common convention in which an event can have more than one type or description associated with it [Davidson, 1989; Goldman, 1970; Pollack, 1986]. Alternative descriptions of a particular instance of an event are often conditioned either on the prevailing circumstances at the time of occurrence or on the previous event history. To use a typical example, flipping a switch can also be described as turning on a light just as long as the switch and light are connected and functioning in the appropriate manner and just as long as the light was not already on. C&L allow predications over events in order to support this convention. This requires, however, second order statements in order to quantify over those predications. Instead, in this paper I add a predicate type(e1, e2) to their language which is true at a particular world-time point if e2 is an alternative type for e1. That is, one might have

M, w, t ⊨ happens(e) ∧ type(e, flip-switch) ∧ type(e, turn-on)

The act of preventing some event will then be treated as an act whose type is, among others, a prevention of that event. In general, when referring to a prevention I will be referring to a particular event token (an event type occurring at a specific world-time point) as preventing some event type.
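The branching operators and the ↛ ("not followed by") operator can be checked over a finite stand-in for this model. In the sketch below, worlds are finite tuples of event types, comp(w, t) collapses worlds sharing w's past, and primitive propositions are identified with event occurrences; the three worlds themselves are invented for illustration.

```python
# Finite sketch of the branching-time operators added to C&L's logic.
# A "world" is a finite tuple of event types, one per time point.

WORLDS = {
    ("wake", "coffee", "work"),
    ("wake", "coffee", "gym"),
    ("wake", "tea",    "work"),
}

def comp(w, t):
    """Worlds whose history up to and including time t matches w's."""
    return {v for v in WORLDS if v[:t + 1] == w[:t + 1]}

def possibly_B(w, t, phi):
    """<>_B phi: phi holds at some future point of some compatible world."""
    return any(phi(v, s) for v in comp(w, t) for s in range(t, len(v)))

def necessarily_B(w, t, phi):
    """[]_B phi  =def  not <>_B not phi."""
    return not possibly_B(w, t, lambda v, s: not phi(v, s))

def not_followed_by(e, e2):
    """e -/-> e2: whenever e happens, e2 never happens afterwards."""
    return lambda v, s: v[s] != e or all(v[s2] != e2 for s2 in range(s + 1, len(v)))
```

At t = 1 in the coffee-then-work world, ◊_B "gym" holds because comp still contains the coffee-then-gym world; in the tea world the branch to "gym" has already been cut off.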
Given this, the definition of necessary prevention can be expressed symbolically as:

type(e, prevents(e')) ≡ happens(e) ∧ □_B[e ↛ e'] ∧ ◊_B happens(e')

An alternative, common definition ([Jackendoff, 1991]) equates prevention with causation of a particular sort. Assuming, for the moment, that one has an adequate treatment of negative events, the definition is the following:

e1 PREVENTS e2 iff e1 CAUSES (NOT e2)

As Shoham observes [Shoham, 1988], however, the two behave quite differently counterfactually: e1 PREVENTS e2 entails that if e1 hadn't occurred, e2 could have occurred, but need not have.¹ Whereas e1 CAUSES (NOT e2) entails that if e1 had not occurred e2 would have. Another approach has been suggested by Shoham, who defines prevention according to the form that the causal rules actually take. Briefly, Shoham's suggestion is to count the "exceptional conditions" in causal rules (usually represented with "abnormality" predicates) as preventing conditions. One problem with this suggestion is that it does not address contextual effects, involving, for example, the agent's mental state in delimiting the space of possibilities that agent is willing to consider. Finally, note that though many of the examples I will discuss refer to past preventions (for example, (1a), (1b), (2)), the issue of the evaluation of the associated counterfactual (for example, in the case of necessary prevention, "if e had not occurred then e' would have been possible") is an issue which is, I believe, orthogonal to the matters with which this paper is concerned: namely, the conditions which must hold at some world-time pair (w, t) such that one is justified in claiming that some event, e, occurring at (w, t), will prevent the occurrence of some later e', irrespective of how one should counterfactually reconstruct the state (w, t) from a later state.
Agent-Centered Notions of Event Prevention

In order to derive a more manageable definition of event prevention, one that does not suffer from Allen's objection, I will examine the role of context in the interpretation of natural language statements involving the verb prevents. Consider first what appears to be a simple instance of necessary prevention as I have defined it:

(3) I prevented him from drinking this glass of water by drinking it myself.

Previously, I said that before some event was prevented it need only have been possible. However, in (3), in which an action is prevented, this does not seem quite right. If the individual in question does not intend to drink the water, say, because he is not even in the room and thereby unaware of the glass of water, then, even if (3) satisfies the conditions of necessary prevention, it hardly seems to represent a justified claim. Consider another example: suppose someone is standing next to a window and I claim that by standing between that person and the window I thereby prevent the person from being shot. This seems justified only if I was aware of someone trying to shoot that person in the first place. Sometimes these beliefs might only be defaults: for example, if the president is visiting a hostile country one can claim to prevent an attempt on his life (say, by keeping him indoors) if one is justified in assuming that an attempt might be made in that sort of situation.

¹Actually, this is not quite correct if e2 is an action. I will discuss this further in the next section.

Continuing with (3), suggesting that the agent must intend the specific action in question and therefore be committed to its execution is not completely right either: consider the situation in which two glasses are in front of the agent and (3) is uttered.
For the claim in (3) to hold it seems only necessary that the agent intend to drink (generic) water (call this act-type α), either glass believed by the agent to be an acceptable choice (call one of these e'). The agent may have committed to drinking the referenced glass of water or remained uncommitted; but the agent should not have committed to another choice. That the agent must be aware of these choices is necessary, otherwise we could claim an event as prevented even though the agent never believed that event to be a possibility. From this it seems that the following amendment to the definition of necessary prevention of some e2 by e1 is needed:

(PR1) (i) The agent, A, who is the patient of the prevention, intends some act α; (ii) there is some act, e', which A believes is of type α; (iii) e' is of type e2; (iv) the agent has at most committed to e'.

In (PR1), the qualification in (ii) that it need not be a fact that A can do α by doing e', but rather that A need only believe that the relation holds, together with (iii), is crucial for handling cases of the prevention of the unintended side effects of an agent's intended actions. In the following variation of (3), if the glass actually contains alcohol, but agent A doesn't know it, one is justified in stating that one can prevent the agent from getting drunk (e2) by drinking the glass oneself, even if the agent does not intend to become drunk but only intends to drink water (α), since, as far as A knows, drinking from that glass (e') will satisfy that intention. In this example I am making the following assumption regarding the agent's rationality: intentions are always as specific as possible. Therefore, if one intends to drink something cold one doesn't also intend to drink some particular water out of some particular glass. So the more general intention is superseded by the more specific one as soon as it is formed.
The need for the referenced qualification in (ii) is a consequence of an observation made by [Bratman, 1987] that an agent, in general, does not intend all of the side effects of his actions. It is also meant to capture some sense of agent A's awareness of available possibilities, lack of awareness being modeled by lack of belief that some event is of some specific type. I will express (PR1) in Cohen and Levesque's logic in the following way. First define a notion of potential action relative to some arbitrary agent i:

poss(e2, i) ≡ ∃α ∃e' ∃x. intentional(e') ∧ intends(i, α) ∧ ¬Bel(i, ¬happens(i, x; e'; type(e', α)?)) ∧ ◊_B happens(i, x; e'; type(e', e2)?)

These are the actions (e2) we believe might eventuate by virtue of the fact that they depend on a prior intention on the part of some agent. Note that rather than introduce another accessibility relation for these potential actions, I have chosen to express the restriction with the available modal operators from C&L's logic, much as Moore did in defining ability derivatively, by way of knowledge axioms [Moore, 1985]. The requirement that the e' in the definition of poss be "intentional" seems necessary because not all actions are of the sort requiring a prior intention; only those in which the prior intention has "causal force." Consider the following:

(4) The carpet prevented him from slipping.

This certainly does not suggest that the agent must have had the prior intention to slip. Other cases include the more problematic variety of "intention-in-action" ([Davidson, 1989; Anscombe, 1963]).

Returning to the definition of poss, the last three conjuncts capture conditions (i), (ii), and (iii) of (PR1). In order to handle condition (iv) we need some sort of closed-world assumption with respect to intentions. The third conjunct states that there is some possible future in which the agent believes doing e' will result in the performance of α.
A statement of Bel(i, happens(x; e'; type(e', α)?)) would have instead said that agent i was certain that the referenced sequence occurred next. The last conjunct states that the agent might be wrong or might not be aware of other descriptions for that event (e'). We can then define the following notion of agent-centered prevention:

type(e1, prevents(e2, i)) ≡ happens(e1) ∧ poss(e2, i) ∧ □_B(e1 ↛ e2)

This says that e1 prevents e2 from the point of view of the set of potential actions of agent i, where e2 is an action the agent might perform as a side effect of some other action. I refer to this definition as agent-centered since its semantics must explicitly appeal to an agent's attitudes. Note that (PR1) immediately suggests a useful planning strategy: if we desire to prevent some α and we know that α is not intended, then no action on our part is necessary since, as far as we know, the action we desire to be prevented will never eventuate. A further simple strategy falls out of the fact that e' is considered a possibility simply by virtue of the fact that it is an option that agent i considers possible. This suggests that one can also prevent an agent from performing some action by forcing the agent to only believe that the conditions for prevention obtain, even if they actually don't; that is, if Bel(i, type(e, prevents(e', i))). Of course, agent i can hold the belief that e will prevent e' without having any reason to believe that e will actually occur.

Some statements which refer to an event prevention are inherently statements about the "abilities" of an agent. For example, consider:

(5) Going to the meeting prevented her from attending the seminar.

in which there was no intention to attend the seminar. Contrast this example with:

(6) The phone call prevented her from attending the entire meeting.

in which a prior intention to attend the entire meeting is required.
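Under simplifying assumptions, the poss definition and agent-centered prevention can be sketched computationally. In the sketch below, condition (iv) (the commitment clause) is omitted, belief and type assignments are given directly as data, and all dictionary keys, event names, and the spiked-glass scenario encoding are invented for illustration.

```python
# Toy check of agent-centered prevention.  e2 is a potential action of
# agent i if i intends some act-type alpha and sees some open event e'
# believed to be of type alpha, where e' is (perhaps unknown to i) also
# of type e2.  e1 then prevents e2 if e2 is potential and no compatible
# future has e1 followed by e2.  Condition (iv) of (PR1) is omitted.

def poss(e2, agent):
    """Is e2 a potential action of `agent`? (simplified poss definition)"""
    return any(
        e2 in agent["actual_types"].get(ep, set())           # (iii)
        and alpha in agent["believed_types"].get(ep, set())  # (ii)
        for alpha in agent["intends"]                        # (i)
        for ep in agent["options"]
    )

def prevents(e1, e2, agent, futures):
    """Agent-centered prevention: e2 potential, and in every compatible
    future (finite event tuples), e1 is never followed by e2."""
    return poss(e2, agent) and all(
        all(f[j] != e2 for j in range(i + 1, len(f)))
        for f in futures
        for i, ev in enumerate(f) if ev == e1
    )

# Spiked-glass scenario: the agent intends drink_water; drinking glass 1
# is believed to realize that, but is actually also of type get_drunk.
agent = {"intends": {"drink_water"},
         "options": {"drink_glass1"},
         "believed_types": {"drink_glass1": {"drink_water"}},
         "actual_types": {"drink_glass1": {"drink_water", "get_drunk"}}}
futures = [("i_drink_glass1",), ("i_drink_glass1", "chat")]
```

Here get_drunk is a potential action of the agent despite never being intended, which is exactly the side-effect case (PR1) is designed to cover, and drinking the glass oneself counts as preventing it.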
Example (5) is a statement that reflects the inherent abilities of the agent while (6) makes a statement about the necessary future. This is clear if one considers the associated counterfactuals. In the case of (5), we can say that "If she hadn't gone to the meeting she could have attended the seminar (whether or not she had intended to)," whereas in (6), we can say that "If she hadn't received the phone call she would have attended the entire meeting."

In certain situations, an agent may not have fixed on any particular intention but might only be deliberating over the various alternatives that could achieve some desired goal. For example, consider an agent embedded in a traffic situation with a goal to arrive at a particular address. Prior to forming any intentions regarding actions to achieve its goal, the agent may consider many possible actions available in that traffic micro-world. In cases such as these, where it is difficult to be certain about which actions might play a role in an agent's plan, we must either weaken the definition of poss(e, i) to actions desired, but not necessarily intended, by an agent or consider means for necessary prevention under the restricted causal structure of that particular microworld. Consider the second alternative by way of the following example:

(7) The red light prevents him from turning left.

Suppose that in this example the agent's intentions are not at issue. Here it appears that the current context, prescribed by the content of the vehicle code, constrains the set of actions the agent is disposed to consider, even if the agent is aware of possibilities outside that set. Alternatively, one could explain this example by suggesting that the agent has a constant background maintenance goal or intention to "not break the law" and that this goal is serving to constrain the actions about which it will deliberate.
Although seemingly formally equivalent, it appears that the first alternative is to be preferred on practical grounds: under the second alternative the agent would have to continually deliberate regarding the consistency of a considered action with the maintenance goal in question, and would be forced, at the same time, to consider its entire stock of beliefs and potential actions. Further, in the case of the constant background intention, an agent would normally almost always be "intentionally not breaking the law." This seems somehow to corrupt the use of the concept of prior intention, in the sense that intentions are "formed" for the purposes of guiding future actions until they have been realized.

I propose to handle this case by modifying the temporal portion of the logic so that a model now includes not only the set T of possible worlds but also a set {C1, C2, ..., Cn}, where each set Ci(w, t) is a nonempty subset of T; the case of n = 1 reduces to the earlier version of the logic with comp(w, t). Each Ci can be viewed as a sort of context, the entire set of contexts forming a lattice structure closed under meet and join operations, ⊓ and ⊔, corresponding to intersection and union of possible worlds in the model, with a partial order ⊑ defined on the set (Ci ⊓ Cj ⊑ Cj). Each Ci will be consistent with some set of causal rules which constrain the structure of possible worlds in the set. The definition for satisfaction would then be modified as follows:

M, w, t ⊨_{Ci ⊓ Cj} ◊_B φ iff M, w, t ⊨_{Ci} ◊_B φ and M, w, t ⊨_{Cj} ◊_B φ

where, as before, M, w, t ⊨_{Ci} ◊_B φ iff M, w', t' ⊨ φ for some t' ≥ t and some w' ∈ Ci(w, t). One could then have a particular set of causal rules in each context which defined the necessary or possible actions sanctioned by that context.
For example, consider the following simple set, relative to some model, world, and time:

⊨_{C1} obstacle(dir) ⊃ □_B ¬(happens(e) ∧ type(e, turn(dir)))
⊨_{C2} green(light) ⊃ ◊_B(happens(e) ∧ type(e, proceed))
⊨_{C2} red(light) ⊃ ¬◊_B(happens(e) ∧ type(e, proceed)) ∧ ¬◊_B(happens(e) ∧ type(e, turn(left)))

That is, in C1 if there is an obstacle in some direction, dir, one may not turn into that direction; while in C2 if there is a green light one can proceed and if there is a red light one may not turn left or proceed forward. Given these axioms, we then might have:

⊨_{C1 ⊓ C2} (happens(e) ∧ type(e, red(light))) ⊃ type(e, prevents(turn(left)))

In other words, in the everyday context of a typical driver, a red light will prevent a driver from turning left, while in a less specific context we might have

⊨_{C1} (happens(e) ∧ type(e, red(light))) ⊃ ¬type(e, prevents(turn(left)))

as desired (where C1 ⊓ C2 ⊑ C1). I explore such a restriction on actions in more detail in [Ortiz, 1993].

Consider now the following, slightly different example which demonstrates an even more restrictive notion of prevention. In the situation following this utterance,

(8) I prevented him from drinking this water by telling him it was poisoned.

the action of drinking the water is still certainly possible though, unlike the case in (3), no longer desirable. So, while in (3) the agent had the intention to drink the water, my statement of (8) causes the agent to drop its intention, thereby rendering the action "impossible." In this case, the intention must have been relativized to some belief, as suggested in [Cohen and Levesque, 1990]; such a belief representing a justification or reason for forming and maintaining the intention. The justification might be of the form "the x substance is water and there is nothing abnormal about it." The balance of an agent's goals and intentions will also serve to filter the options he is disposed to consider [Bratman, 1987].
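The context-lattice evaluation of the red-light example above can be sketched with contexts as plain sets of possible worlds and meet as intersection. The worlds below are invented finite event sequences, not part of the paper's model.

```python
# Contexts as sets of possible worlds; meet (⊓) is intersection, so
# <>_B holds relative to Ci ⊓ Cj iff it holds relative to both Ci and Cj.

C1 = {("red_light", "wait"),        # permissive "physics-only" context
      ("red_light", "turn_left"),
      ("red_light", "proceed")}
C2 = {("red_light", "wait")}        # vehicle-code context: no turn, no proceed

def meet(ci, cj):
    """Ci ⊓ Cj: intersection of the contexts' world sets."""
    return ci & cj

def possibly_B(ctx, phi):
    """<>_B phi relative to a context: phi at some point of some world."""
    return any(phi(w, t) for w in ctx for t in range(len(w)))

def turn_left(w, t):
    return w[t] == "turn_left"
```

A left turn is possible relative to C1 but not relative to C2, and hence not relative to C1 ⊓ C2: the red light "prevents" the turn only in the more specific context, matching the pair of formulas above.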
For example, consider the situation illustrated in Figure 1, abstracted from the video game discussed in [Chapman, 1992]. In this example, there is a character called the Amazon (A) who wishes to get to an object called a Scroll (S). Scrolls are only activated for a limited period of time. The Amazon has two possible routes by which to get to the scroll: either via door d or via door d'. Several demons (D) are blocking door d; although the Amazon is perfectly capable of destroying them she still chooses door d' because it takes time to destroy a demon and in so doing the Amazon would not be able to get to the scroll in time. In this sort of situation, one would be perfectly justified in asserting:

(9) The demons prevented the Amazon from exiting through door d.

even though the action claimed to be prevented is certainly still "possible." Note that crucial to a correct treatment of this example is the previous observation in (PR1) that the Amazon could not have already formed the intention to take some other option (door d'). In that case, we would not be justified in claiming the prevention. This property falls directly out of a theorem from [Cohen and Levesque, 1990] on intentions providing a screen of admissibility:

intends(x, b) ∧ □Bel(x, a ↛ b) ⊃ ¬intends(x, a; b)

That is, if an agent intends b, but the agent believes that a prevents b, then the agent will never form the intention to do a; b. Therefore, if the intention is never formed, it cannot be considered a potential action. This example suggests that one can prevent an action either by preventing the physical act behind it or by creating circumstances in which the physical act either creates additional unintended effects or does not realize its intended ones. In some cases, the prevailing context

Figure 1: Taking an agent's plans and goals into account. The Amazon wants to get to the readied scroll before it changes state.
It is “prevented” from exiting through door d by virtue of its existing commitments. might render impossible the successful prevention of the physical act itself. For example, if a pitcher wishes to prevent a player from hitting a home run, unavail- able to him is the act of tying the hitter’s hands behind his back (for the same reason I argued explained (7)); unhappily, he must resort to more traditional means such as by catching the corner of the plate with a fast- ball. Conclusions I began with the observation (originally due to Allen) that cases of necessary prevention are often almost im- possible to enforce. In particular, the definition of nec- essary prevention seemed not to be at the heart of the interpretation of the natural language event descrip- tions I discussed. This led to my suggestion for a more restrictive, agent-centered version of the definition that took into account an agent’s mental state as well as the constraining effect of the current situation on the set of actions an agent was disposed to consider. I ar- gued that an agent’s intentions figure prominently in determining the contents of that set of potential ac- tions. I then demonstrated that many cases of preven- tion fall out of a theory of the rational constitution of agents: agents will generally not consider actions that conflict with current commitments captured in their intentions. I went on to suggest an extension to Cohen and Levesque’s BDI logic in which alternative causal microworlds, represented as alternative sets of possible worlds, capture useful and common restrictions that agents make to the set of actions about which they will deliberate. 
Finally, these observations suggested a number of planning strategies: for example, one can prevent an agent from performing an action by forcing the agent to drop its intention, by causing the agent to believe that the action is no longer possible, by creating circumstances in which the action would either interfere with the realization of other intentions or would introduce unintended effects, or, finally, by restricting consideration to the smaller microworld on which we believe the agent is currently focusing. Such strategies never represent sure bets; nevertheless, they do represent good bets, and certainly more tractable strategies for getting by in the world.

Acknowledgments

I would like to thank Mark Steedman, Barbara DiEugenio, Jeff Siskind, and Mike White for helpful comments on an earlier draft of this paper.

References

Anscombe, G.E.M. 1963. Intention. Cornell University Press.

Bratman, Michael 1987. Intentions, Plans, and Practical Reason. Harvard University Press.

Chapman, David 1992. Vision, Plans, and Instruction. MIT Press.

Cohen, Philip and Levesque, Hector 1990. Intention is choice with commitment. Artificial Intelligence 42:213-261.

Davidson, Donald 1989. Actions and Events. Clarendon Press.

Goldman, Alvin 1970. A Theory of Human Action. Princeton University Press.

Jackendoff, Ray 1991. Semantic Structures. MIT Press.

McDermott, Drew 1982. A temporal logic for reasoning about processes and plans. Cognitive Science 6:101-155.

Moore, Robert C. 1985. A formal theory of knowledge and action. In Formal Theories of the Commonsense World. Ablex Publishing Corporation.

Ortiz, Charles L. 1993. Event description in causal explanation, forthcoming.

Pollack, Martha 1986. Inferring Domain Plans in Question-Answering. Ph.D. Dissertation, University of Pennsylvania.

Shoham, Yoav 1988. Reasoning About Change: Time and Causation from the Standpoint of Artificial Intelligence. MIT Press.
The Frame Problem and Knowledge-Producing Actions

Richard B. Scherl* and Hector J. Levesque†
Department of Computer Science
University of Toronto
Toronto, Ontario, Canada M5S 3A6
email: scherl@cs.toronto.edu hector@cs.toronto.edu

Abstract

This paper proposes a solution to the frame problem for knowledge-producing actions. An example of a knowledge-producing action is a sense operation performed by a robot to determine whether or not there is an object of a particular shape within its grasp. The work is an extension of Reiter's solution to the frame problem for ordinary actions and Moore's work on knowledge and action. The properties of our specification are that knowledge-producing actions do not affect fluents other than the knowledge fluent, and actions that are not knowledge-producing only affect the knowledge fluent as appropriate. In addition, memory emerges as a side-effect: if something is known in a certain situation, it remains known at successor situations, unless something relevant has changed. Also, it will be shown that a form of regression examined by Reiter for reducing reasoning about future situations to reasoning about the initial situation now also applies to knowledge-producing actions.

Introduction

The situation calculus provides a formalism for reasoning about actions and their effects on the world. Axioms are used to specify the prerequisites of actions as well as their effects, that is, the fluents that they change. In general, it is also necessary to provide frame axioms to specify which fluents remain unchanged by the actions. In the worst case this might require an axiom for every combination of action and fluent. Recently, Reiter [1991] (generalizing the work of Haas [1987], Schubert [1990] and Pednault [1989]) has given a set of conditions under which the explicit specification of frame axioms can be avoided.
In this paper, we extend his solution to the frame problem to cover knowledge-producing actions, that is, actions whose effects are to change a state of knowledge.

*Natural Sciences and Engineering Research Council of Canada International Postdoctoral Fellow
†Fellow of the Canadian Institute for Advanced Research

A standard example of a knowledge-producing action is that of reading a number on a piece of paper. Consider the problem of dialing the combination of a safe [McCarthy and Hayes, 1969; Moore, 1980; Moore, 1985]. If an agent is at the same place as the safe, and knows the combination of the safe, then he can open the safe by performing the action of dialing that combination. If an agent is at the same place as both the safe and a piece of paper and he knows that the combination of the safe is written on the paper, he can open the safe by first reading the piece of paper, and then dialing that combination. The effect of the read action, then, is to change the knowledge state of the agent, typically to satisfy the prerequisite of a later action. Another example of a knowledge-producing action is performing an experiment to determine whether or not a solution is an acid [Moore, 1985]. Still other examples are a sensing operation performed by a robot to determine the shapes of objects within its grasp [Lespérance and Levesque, 1990; Lespérance, 1991] and the execution of UNIX commands such as ls [Etzioni et al., 1992]. To incorporate knowledge-producing actions like these into the situation calculus, it is necessary to treat knowledge as a fluent that can be affected by actions. This is precisely the approach taken by Moore [1980].
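The safe story can be phrased directly in situation-calculus terms. As a quick illustration (a sketch with assumed names, not code from the paper), situations are ground terms built from S0 by the function do, so the two-step plan is the nested term do(dial(...), do(read(Ppr), S0)):

```python
# Sketch: situations as ground terms built with do; names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Do:
    action: str        # a ground action term, e.g. "read(Ppr)"
    situation: object  # the preceding situation: "S0" or another Do term

S0 = "S0"
# The plan from the safe example: first read the paper, then dial.
plan = Do("dial(comb(Sf1))", Do("read(Ppr)", S0))

def actions(s):
    """Recover the action sequence encoded by a situation term."""
    return actions(s.situation) + [s.action] if isinstance(s, Do) else []

print(actions(plan))  # ['read(Ppr)', 'dial(comb(Sf1))']
```

Reading a situation term from the inside out thus recovers the history of actions performed since the initial situation.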
What is new here is that the knowledge fluent and knowledge-producing actions are handled in a way that avoids the frame problem: we will be able to prove as a consequence of our specification that knowledge-producing actions do not affect fluents other than the knowledge fluent, and that actions that are not knowledge-producing only affect the knowledge fluent as appropriate. In addition, we will show that memory emerges as a side-effect: if something is known in a certain situation, it remains known at successor situations, unless something relevant has changed. We will also show that a form of regression examined by Reiter for reducing reasoning about future situations to reasoning about the initial situation now also applies to knowledge-producing actions. This has the desirable effect of allowing us to reduce reasoning about knowledge and action to reasoning about knowledge in the initial situation, where techniques such as those discussed in [Frisch and Scherl, 1991; Scherl, 1992] can be used. Finally, we show that if certain useful properties of knowledge (such as positive introspection) are specified to hold in the initial state, they will continue to hold automatically at all successor situations.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

In the next section, we briefly review the situation calculus and Reiter's solution to the frame problem. In the following section, we introduce an epistemic fluent into the situation calculus as an accessibility relation over situations, as done by Moore [1980; 1985]. Our solution to the frame problem for knowledge-producing actions, based on this epistemic fluent, is developed and illustrated over the next four sections. In the next to the last section, we consider regression for the situation calculus with knowledge-producing actions. Finally, future work is discussed in the last section.

The Situation Calculus and the Frame Problem

The situation calculus (following the presentation in [Reiter, 1991]) is a first-order language for representing dynamically changing worlds in which all of the changes are the result of named actions performed by some agent. Terms are used to represent states of the world, i.e., situations. If a is an action and s a situation, the result of performing a in s is represented by do(a, s). The constant S0 is used to denote the initial situation.¹ Relations whose truth values vary from situation to situation, called fluents, are denoted by a predicate symbol taking a situation term as the last argument. For example, Broken(x, s) means that object x is broken in situation s.

It is assumed that the axiomatizer has provided, for each action, an

Action Precondition Axiom
Poss(a(x̄), s) ≡ π_a(x̄, s)   (1)

An action precondition axiom for the action drop is given below.

Poss(drop(x), s) ≡ Holding(x, s)   (2)

Furthermore, the axiomatizer has provided for each fluent F, two general effect axioms of the form given in 3 and 4.

¹By convention, single lower case letters (i.e. roman), possibly with subscripts or superscripts, are used to represent variables, strings of letters beginning with a capital letter are used for predicate symbols, strings of lower case letters are used for function symbols, and possibly subscripted strings of letters beginning with a capital letter are used for constants. When quantifiers are not indicated, the variables are implicitly universally quantified.
General Positive Effect Axiom for Fluent F
Poss(a, s) ∧ γ_F^+(a, s) → F(do(a, s))   (3)

General Negative Effect Axiom for Fluent F
Poss(a, s) ∧ γ_F^-(a, s) → ¬F(do(a, s))   (4)

Here γ_F^+(a, s) is a formula describing under what conditions doing the action a in situation s leads the fluent F to become true in the successor situation do(a, s), and similarly γ_F^-(a, s) is a formula describing the conditions under which performing action a in situation s results in the fluent F becoming false in situation do(a, s). For example, 5 is a positive effect axiom for the fluent Broken.

Poss(a, s) ∧ [(a = drop(y) ∧ Fragile(y)) ∨ ((∃b) a = explode(b) ∧ Nexto(b, y, s))] → Broken(y, do(a, s))   (5)

Sentence 6 is a negative effect axiom for Broken.

Poss(a, s) ∧ a = repair(y) → ¬Broken(y, do(a, s))   (6)

It is also necessary to add the frame axioms that specify when fluents remain unchanged. The frame problem arises because the number of these frame axioms in the general case is 2 × A × F, where A is the number of actions and F is the number of fluents.

The solution to the frame problem [Reiter, 1991; Pednault, 1989; Schubert, 1990] rests on a completeness assumption. This assumption is that axioms 3 and 4 characterize all the conditions under which action a can lead to F becoming true (respectively, false) in the successor situation. Therefore, if action a is possible and F's truth value changes from false to true as a result of doing a, then γ_F^+(a, s) must be true, and similarly for a change from true to false. Additionally, unique name axioms are added for actions and situations. Reiter [1991] shows how to derive a set of successor state axioms of the form given in 7 from the axioms (positive effect, negative effect and unique name) and the completeness assumption.

Successor State Axiom
Poss(a, s) → [F(do(a, s)) ≡ γ_F^+(a, s) ∨ (F(s) ∧ ¬γ_F^-(a, s))]   (7)

Similar successor state axioms may be written for functional fluents.
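To make the logical form concrete, here is a small executable sketch (my own encoding, not from the paper) of successor-state-style evaluation combining effect axioms (5) and (6) with the completeness assumption: a situation is a tuple of ground actions applied to S0 = (), and Broken holds after a iff γ⁺ held, or Broken already held and γ⁻ did not. Fragile, Nexto, and the object names are illustrative.

```python
# Sketch: evaluating Broken via a successor-state-style recursion.
# Situations are tuples of (op, args) action terms applied to S0 = ();
# the fluent Nexto is stubbed out; all names are illustrative.
FRAGILE = {"Obj1", "Obj2"}

def nexto(b, y, s):
    return False  # stub: no bombs near anything in this example

def broken(y, s):
    """Broken(y, s), with s a tuple of (op, args) action terms."""
    if not s:                              # initial situation S0
        return False
    *rest, (op, args) = s                  # s = do((op, args), prior)
    prior = tuple(rest)
    if op == "drop" and args == (y,) and y in FRAGILE:
        return True                        # gamma+: dropped a fragile object
    if op == "explode" and nexto(args[0], y, prior):
        return True                        # gamma+: a nearby bomb went off
    if op == "repair" and args == (y,):
        return False                       # gamma-: repaired
    return broken(y, prior)                # frame: otherwise unchanged

s1 = (("drop", ("Obj2",)),)                # do(drop(Obj2), S0)
print(broken("Obj2", s1))                  # True
print(broken("Obj1", s1))                  # False: Obj1 persists unbroken
```

The final recursive call is where the frame problem is solved implicitly: no explicit frame axioms are written, yet every fluent value persists unless one of the effect conditions fires.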
A successor state axiom is needed for each fluent F, and an action precondition axiom is needed for each action a. The unique name axioms need not be explicitly represented as their effects can be compiled. Therefore only F + A axioms are needed. From 5 and 6 the following successor state axiom for Broken is obtained.

Poss(a, s) → [Broken(y, do(a, s)) ≡ (a = drop(y) ∧ Fragile(y)) ∨ ((∃b) a = explode(b) ∧ Nexto(b, y, s)) ∨ (Broken(y, s) ∧ a ≠ repair(y))]   (8)

Now note for example that if ¬Broken(Obj1, S0), then it also follows (given the unique name axioms) that ¬Broken(Obj1, do(drop(Obj2), S0)).

This discussion has assumed that there are no ramifications, i.e., indirect effects of actions. This can be ensured by prohibiting state constraints, i.e., sentences that specify an interaction between fluents. An example of such a sentence is (∀s) P(s) ≡ Q(s). The assumption that there are no state constraints in the axiomatization of the domain will be made throughout this paper. In [Lin and Reiter, 1993], the approach discussed in this section is extended to work with state constraints by compiling the effects of the state constraints into the successor state axioms.

An Epistemic Fluent

The approach we take to formalizing knowledge is to adapt the standard possible-world model of knowledge to the situation calculus, as first done by Moore [1980]. Informally, we think of there being a binary accessibility relation over situations, where a situation s′ is understood as being accessible from a situation s if as far as the agent knows in situation s, he might be in situation s′. So something is known in s if it is true in every s′ accessible from s, and conversely something is not known if it is false in some accessible situation. To treat knowledge as a fluent, we introduce a binary relation K(s′, s), read as "s′ is accessible from s", and treat it the same way we would any other fluent.
In other words, from the point of view of the situation calculus, the last argument to K is the official situation argument (expressing what is known in situation s), and the first argument is just an auxiliary like the y in Broken(y, s).²

We can now introduce the notation Knows(P, s) (read as P is known in situation s) as an abbreviation for a formula that uses K. For example

Knows(Broken(y), s) ≝ ∀s′ K(s′, s) → Broken(y, s′).

Note that this notation supplies the appropriate situation argument to the fluent on expansion (and other conventions are certainly possible). This notation can be generalized inductively to arbitrary formulas so that, for example

∃x Knows(∃y[Nexto(x, y) ∧ ¬Broken(y)], s) ≝ ∃x ∀s′ K(s′, s) → ∃y[Nexto(x, y, s′) ∧ ¬Broken(y, s′)].

We will however restrict our attention to knowledge about atomic formulas in both this and the next section.

Turning now to knowledge-producing actions, there are two sorts of actions to consider: actions whose effect is to make known the truth value of some formula, and actions to make known the value of some term. In the first case, we might imagine a Sense_P action for a fluent P, such that after doing a Sense_P, the truth value of P is known. We introduce the notation Kwhether(P, s) as an abbreviation for a formula indicating that the truth value of a fluent P is known.

Kwhether(P, s) ≝ Knows(P, s) ∨ Knows(¬P, s)

It will follow from our specification in the next section that Kwhether(P, do(Sense_P, s)). In the second case, we might imagine an action Read_t for a term t, such that after doing a Read_t, the denotation of t is known. For this case, we introduce the notation Kref(t, s), defined as follows:

Kref(t, s) ≝ ∃x Knows(t = x, s)

where x does not appear in t. It will follow from the specification developed in the next section that Kref(t, do(Read_t, s)).

²Note that using this convention means that the arguments to K are reversed from their normal modal logic use.
For simplicity, we assume that each type of knowledge-producing action is associated with a characteristic fluent or term in this way.

Solving the Frame Problem

The approach being developed here rests on the specification of a successor state axiom for the K relation. For all situations do(a, s), the K relation will be completely determined by the K relation at s and the action a.

For non-knowledge-producing actions (e.g. drop(x)), the specification (based on Moore [1980; 1985]) is as follows:

Poss(drop(x), s) → [K(s″, do(drop(x), s)) ≡ ∃s′ (K(s′, s) ∧ (s″ = do(drop(x), s′)))]   (9)

The idea here is that as far as the agent at world s knows, he could be in any of the worlds s′ such that K(s′, s). At do(drop(x), s), as far as the agent knows, he can be in any of the worlds do(drop(x), s′) for any s′ such that K(s′, s). So the only change in knowledge that occurs in moving from s to do(drop(x), s) is the knowledge that the action drop has been performed.

Now consider the simple case of a knowledge-producing action Sense_P that determines whether or not the fluent P is true (following Moore [1980; 1985]).

Poss(Sense_P, s) → [K(s″, do(Sense_P, s)) ≡ ∃s′ (K(s′, s) ∧ (s″ = do(Sense_P, s′)) ∧ (P(s) ≡ P(s′)))]   (10)

Again, as far as the agent at world s knows, he could be in any of the worlds s′ such that K(s′, s). At do(Sense_P, s), as far as the agent knows, he can be in any of the worlds do(Sense_P, s′) for all s′ such that K(s′, s) and P(s) ≡ P(s′). The idea here is that in moving from s to do(Sense_P, s), the agent not only knows that the action Sense_P has been performed (as above), but also the truth value of the predicate P. Observe that the successor state axiom for P guarantees that P is true at do(Sense_P, s) iff P is true at s, and similarly for s′ and do(Sense_P, s′). Therefore, P has the same truth value in all worlds s″ such that K(s″, do(Sense_P, s)), and so Kwhether(P, do(Sense_P, s)) is true.
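The effect of axioms (9) and (10) can be simulated with a toy possible-worlds sketch (an assumed encoding, not the paper's): the agent's K-alternatives are a list of worlds, an ordinary action leaves the set of alternatives alone, and Sense_P keeps only the alternatives that agree with the actual world on P.

```python
# Toy sketch of (9)/(10): worlds are dicts of fluent values; a sense action
# filters the K-accessible alternatives; all names here are illustrative.
def successor_alts(action, alts, actual):
    if action == "sense_P":
        # (10): keep only alternatives agreeing with the actual world on P
        return [w for w in alts if w["P"] == actual["P"]]
    # (9): an ordinary action merely carries every alternative forward
    # (world-changing effects are omitted in this toy example)
    return list(alts)

def knows(p, alts):
    return all(w[p] for w in alts)           # true in every accessible world

def kwhether(p, alts):
    return knows(p, alts) or all(not w[p] for w in alts)

actual = {"P": True}
alts = [{"P": True}, {"P": False}]           # initially ignorant about P
print(kwhether("P", alts))                    # False
alts = successor_alts("sense_P", alts, actual)
print(knows("P", alts), kwhether("P", alts))  # True True
```

Filtering, rather than adding, accessible worlds is exactly why sensing can only increase what the agent knows in this model.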
In the case of a Read_t action that makes the denotation of the term t known, P(s) ≡ P(s′) is replaced by t(s) = t(s′). Therefore, t has the same denotation in all worlds s″ such that K(s″, do(Read_t, s)), and so Kref(t, do(Read_t, s)) is true.

In general, there may be many knowledge-producing actions. Each knowledge-producing action A_i will have associated with it a formula φ_i(s, s′). In the case of a Sense type of action, the formula is of the form F_i(s) ≡ F_i(s′), where F_i is a fluent. In the case of a Read type of action, the formula is of the form t_i(s) = t_i(s′), where t_i is a situation-dependent term. Assume that there are n knowledge-producing actions A_1, …, A_n and therefore n associated formulas φ_1, …, φ_n. The form of the successor state axiom for K is then as follows:

Successor State Axiom for K
Poss(a, s) → K(s″, do(a, s)) ≡ [∃s′ (K(s′, s) ∧ (s″ = do(a, s′)) ∧ ⋀_{1≤i≤n} ((a = A_i) → φ_i(s, s′)))]   (11)

The relation K at a particular situation do(a, s) is completely determined by the relation at s and the action a.

Example

Consider the example of opening a safe whose combination is written on a piece of paper (adapted from Moore [1980], but without the frame axioms). The successor state axiom for the fluent Open (i.e. be open) is:

Poss(a, s) → [Open(y, do(a, s)) ≡ (∃x a = dial(x, y) ∧ x = comb(y, s)) ∨ (Open(y, s) ∧ a ≠ lock(y))]   (12)

The preconditions for dial (actually dialing and pulling the handle) are:

Poss(dial(x, y), s) ≡ Safe(y, s) ∧ At(y, s) ∧ (∀s′ K(s′, s) → x = comb(y, s′))   (13)

The idea here is that the object being dialed needs to be a safe, the agent needs to be at the safe, and the agent needs to know the combination of the safe. The axiomatization of the initial state includes Safe(Sf1, S0), At(Sf1, S0), At(Ppr, S0), and info(Ppr, S0) = comb(Sf1, S0). Note that ¬∃x Poss(dial(x, Sf1), S0). It is assumed that there are successor state axioms for Safe, At and the functional fluents info and comb.

There is a knowledge-producing action read(x), with the following action precondition axiom:

Poss(read(x), s) ≡ At(x, s)   (14)

The successor state axiom for K is as follows:

Poss(a, s) → K(s″, do(a, s)) ≡ ∃s′ (K(s′, s) ∧ (s″ = do(a, s′)) ∧ ((a ≠ read(x)) ∨ (∃x (a = read(x)) ∧ (info(x, s) = info(x, s′)))))   (15)

Note that the axiomatization entails:

Kref(info(Ppr), do(read(Ppr), S0)) ∧ info(Ppr, do(read(Ppr), S0)) = comb(Sf1, do(read(Ppr), S0))   (16)

Since the successor state axioms ensure that a read action does not change At, Safe, comb and info, it is the case that ∃x Poss(dial(x, Sf1), do(read(Ppr), S0)) and therefore the axiomatization entails that ∃x Open(Sf1, do(dial(x, Sf1), do(read(Ppr), S0))).

Correctness of the Solution

The following theorem shows that knowledge-producing actions do not change the state of the world. The only fluent whose truth value is altered by a knowledge-producing action is K.

Theorem 1 (Knowledge-Producing Effects) For all fluents P (other than K) and all knowledge-producing actions a, if P(s) then P(do(a, s)).

Proof: Immediate from having successor state axioms for each fluent. □

It is also necessary to show that actions only affect knowledge in the appropriate way. The truth of the following theorem ensures that there are no unwanted increases in knowledge.

Theorem 2 (Default Persistence of Ignorance) If ¬Knows(P, s) then ¬Knows(P, do(a, s)) unless Poss(a, s) and either a is an A_i and P is the corresponding φ_i in the successor state axiom for K, or P is a fluent whose successor state axiom specifies that it is changed by action a.

Proof: For ¬Knows(P, s) to be true, it must be the case that ∀s′ K(s′, s) → P(s′) is false. There must be some s′ such that P(s′) is false. By sentence 11, for all situations s″ such that K(s″, do(a, s)), it is the case that s″ = do(a, s′) for some s′ such that K(s′, s). Since P(s′) is false for some s′, P(do(a, s′)) will (by the successor state axiom for P) also be false, unless either (1) the successor state axiom for P specifies that the effect of a is to make P true, Poss(a, s) is true, and the conditions for this change are satisfied in s, or (2) a is a knowledge-producing action A_i, the corresponding φ_i in the successor state axiom for K is P(s) ≡ P(s′), P(s) is true, and Poss(a, s). If neither is the case, by the successor state axiom for K there will be an s″ such that K(s″, do(a, s)) where P(s″) is false, and therefore ¬Knows(P, do(a, s)) will be true. □

Finally, it is a property of this specification that agents never forget.

Theorem 3 (Memory) For all fluents P and situations s, if Knows(P, s) then Knows(P, do(a, s)) unless the effect of a is to make P false.

The proof is similar to that of the previous theorem. Consider again the successor state axiom for Broken given in sentence 8. If Knows(¬Broken(Obj1), S0) is true, then Knows(¬Broken(Obj1), do(drop(Obj2), S0)) must also be true. Also, note that if Knows(Fragile(Obj2), S0) and Knows(Poss(drop(Obj2)), S0) are true, then Knows(Broken(Obj2), do(drop(Obj2), S0)) must also be true.

Knowledge of Formulas

Up to this point, all results have been stated in terms of fluents. But both the argument to Kwhether and Knows can be an arbitrary formula. In the discussion of sense type actions, nothing hinged on the argument to Kwhether being a fluent, rather than a formula. Thus the effect of a sense action performed by a robot [Lespérance and Levesque, 1990; Lespérance, 1991] may be specified as follows:

Kwhether(∃x (Object(x) ∧ Holding(x) ∧ OfShape(x, Shape1)), do(sense, s))   (17)

Now the formula φ_i associated with each knowledge-producing action is of the form α_i(s) ≡ α_i(s′), where α_i is a formula. Also, the arguments to the Knows operator may be arbitrary formulas.

Now, we may also want nested Knows operators. The situation argument of the operator is then understood contextually.
If it is not the outermost operator, the situation argument is understood to be the first argument of the immediately dominating K atom. For example, 18 is understood as an abbreviation for 19.

Knows(Knows(P), S0)   (18)

∀s1 K(s1, S0) → (∀s2 K(s2, s1) → P(s2))   (19)

By a simple induction on the size of formulas, Theorems 1, 2, and 3, expressed in terms of fluents, can be generalized to formulas as well. So, the solution to the frame problem for knowledge-producing actions is correct for knowledge understood as the knowledge of arbitrary sentences.

The only remaining issue concerns requiring that the Knows operator conform to the properties of a particular modal logic. For example, if the logic chosen is S4, then we want positive introspection (sentence 20) to be a property of the logic.

Knows(φ) → Knows(Knows(φ))   (20)

Restrictions need to be placed on the K relation so that it correctly models the accessibility relation of a particular modal logic. The problem is to do this in a way that does not interfere with the successor state axioms for K, which must completely specify the K relation for non-initial situations. The solution is to axiomatize the restrictions for the initial situation and then verify that the restrictions are then obeyed at all situations. The sort Init is used to restrict variables to range only over S0 and those situations accessible from S0. It is necessary to stipulate that:

Init(s1) → (K(s, s1) → Init(s))
¬Init(s1) → (K(s, s1) → ¬Init(s))
Init(S0)
¬Init(do(a, s))

The various restrictions are listed below.³ The reflexive restriction is always added as we want a modal logic of knowledge. Some subset of the other restrictions are then added.
eflexive V’s1 : Init IC(sl , s1) Euclidian Vsl :Init, s2:Inits3:Init I+, sl) A 1+3, sl) - IC(s3,4 Symmetric Vsi:Init, s2:Init Iil(s2, q) + IC(sl, s2) Transitive Vsl : Init, s2: Init, ss: Init IC(s2, sl) A @3, s2) + IC(s3, sl) To model the logic S4, for example, one would need to include the axioms for both reflexivity and transitivity. The next step is to prove that if the I< relation over the initial situations satisfies a particular restriction R, that restriction R will also hold over the other situa- tions as well. Theorem 4 If the Ii’ relation on the set of initial sit- uations is restricted to conform to some subset of the properties of reflexive, symmetric, transitive and eu- clidian, then the I< relation at every level will satisfy the same set of properties. The proof involves showing for each restriction that if the restriction holds for s, then it holds for do(u, s). The significance of this theorem is that if the I< re- lation at the initial situation is defined as satisfying certain conditions, then the I< relation at all situa- tions reachable from the initial situation also satisfies those properties. So, if we decide to use, for example, the logic S4 to model knowledge, we can go ahead and stipulate that the I< relation at the initial situation is reflexive and transitive. Then we are guaranteed that the relation at all reachable situations will also satisfy those properties and our model of knowledge will remain S4, without danger of conflicting with the successor state axiom. Reasoning Reiter [Reiter, 19911 develops a form of regression to reduce reasoning about future situations to reasoning about the initial situation. In this section, a regression operator is developed for knowledge-producing actions and applied to the problem of determining whether 3Vs:lnit p is an abbreviation for Vslnil(s) * q Representation for Actions & Motion 693 or not a particular plan satisfies a particular prop- erty. 
So given a plan, expressed as a ground situ- ation term (i.e. a term built on SO with the func- tion do and ground action terms4) sgr, the question is whether the axiomatization of the domain F entails G(s,,) where G is an arbitrary sentence. Under these circumstances, the successor state axioms (including 11) are only used to regress the formula G(s,,). The result of the regression is a formula in ordinary modal logic-i.e. a formula where the only situation term is SO. Then an ordinary modal theorem proving method (e.g. that developed in [Frisch and Scherl, 1991; Scherl, 19921) may be used to determine whether or not the regressed formula holds. In what follows, it is assumed that the formulas do not use the fluent I< except as abbreviated by Knows. The regression operator R is defined relative to a set of successor state axioms 0. The first four parts of the definition of the regression operator 7Zo concern ordi- nary (i.e. not knowledge-producing) actions [Reiter, 1991; Pednault, 19891. i When A is a non-fluen t atom, including eq uality atoms, and atoms with the predicate symbol Puss, or when A is a fluent atom whose situation argument is a situation variable, or the situation constant So, Ro[A] = A. ii When F is a fluent state axiom in 0 is (other than K) whose successor POSS(U, s) - [F(xl, . . . , xn, do@, s)) = @‘F] (21) then Ro[F(b, * * . , tn , do(a > r))] = Q’F I;,‘,:::;;:;:: (22) iii Whenever 7?.@[(3V)Wl] = (3v)7& p&3. iv Whenever Wi and W2 are formulas, %&[W~AW2] = %[Wl] A %[Wl], %[Wl V W2] = Ro[Wl] V R@[Wl], %[Wl - W2] = Ro[W1] - R@[Wl]. Additional steps are needed to extend the regression operator to knowledge-producing actions. For sim- plicity, it is assumed that there are only knowledge- producing operators of type sense-Sense1 . . . Sense,. Two definitions are needed for the specification to fol- low. 
When φ is an arbitrary sentence and s a situation term, then apply(φ, s) is the sentence that results from adding an extra argument to every fluent of φ and inserting s into that argument position. The reverse operation apply⁻¹(φ) is the result of removing the last argument position from all the fluents in φ.

v. Whenever a is not a knowledge-producing action, R_Θ[Knows(W, do(a, s))] = Knows(apply⁻¹(R_Θ[apply(W, do(a, s))]), s).

vi. R_Θ[Knows(W, do(Sense_i, s))] = (φ_i(s) → Knows(φ_i → W, s)) ∧ (¬φ_i(s) → Knows(¬φ_i → W, s))

In the following theorem, Σ is the axiomatization of the domain including Σ_ss, the successor state axioms.

Theorem 5 For any ground situation term s_gr, Σ ⊨ G(s_gr) iff Σ − Σ_ss ⊨ R_Θ[G(s_gr)].

The proof is based on an induction over all ground action terms [Reiter, 1993]. Each regression step preserves logical equivalence given an axiomatization of the form developed here (i.e. successor state axioms). The process must terminate as every step removes the outer do from the situation terms and the number of do function symbols making up any such term is finite. Since each step preserves equivalence, the whole process results in an equivalent formula.

The result means that to test if some sentence G is true after executing a plan, it is only necessary to first regress G(s_gr), where s_gr is the plan expressed as a situation term, using the successor state axioms. This is accomplished by repeatedly passing the regression operator through the formula until the only situation term is S0. Then the successor state axioms (including 11) are no longer needed. At that point an ordinary modal logic theorem proving method can be utilized to perform the test to determine whether or not Σ − Σ_ss ⊨ R_Θ[G(s_gr)].

Consider the following example adapted from [Moore, 1985] (but without the frame axioms).

⁴It is also assumed that this plan is known to be executable [Reiter, 1991], i.e., each step is possible.
The task is to show that after an agent performs a litmus paper test on an acidic solution, the agent will know that the solution is acidic. The litmus paper turns red if and only if the solution is acidic. The axiomatization includes Acid(S0). The actions are Test1 and Sense_R. As the action preconditions are all True, the predicate Poss is ignored in the presentation here. The successor state axiom for Red is given below:

Poss(a, s) → [Red(do(a, s)) ≡ (Acid(s) ∧ a = Test1) ∨ (Red(s) ∧ a ≠ Test1)]   (23)

The instance of the successor state axiom (11) for the K relation is:

∀a, s, s″ K(s″, do(a, s)) ≡ ∃s′ (K(s′, s) ∧ (s″ = do(a, s′)) ∧ ((a ≠ Sense_R) ∨ ((a = Sense_R) ∧ (Red(s) ≡ Red(s′)))))   (24)

The formula to be initially regressed is

Knows(Acid, do(Sense_R, do(Test1, S0)))   (25)

Step vi of the definition of R is used with 25 to yield 26.

(Red(do(Test1, S0)) → Knows(Red → Acid, do(Test1, S0))) ∧ (¬Red(do(Test1, S0)) → Knows(¬Red → Acid, do(Test1, S0)))   (26)

This is then regressed to sentence 27 by steps iii, iv, and v of the regression definition along with 23.

(Acid(S0) → Knows(Acid → Acid, S0)) ∧ (¬Acid(S0) → Knows(¬Acid → Acid, S0))   (27)

Sentence 27 is clearly entailed by Acid(S0) and so 25 is entailed by the original theory. Note that 27 can be rewritten as a sentence in an ordinary modal logic because the only situation term is S0.

Conclusion

This paper provides a solution to the frame problem for knowledge-producing actions. As long as the conditions needed for Reiter's solution for ordinary actions can be met, the work presented here provides a solution for knowledge-producing actions as well.

In terms of future work, we are extending the work discussed here so that the knowledge prerequisites and effects of actions can be indexical rather than objective knowledge. Following [Lespérance and Levesque, 1990; Lespérance, 1991], this will be done by making situations a composite of agents, times and worlds.
Also, the consideration of logics of belief is a topic for future research. The results presented in this paper are limited to logics of knowledge, that is, logics with a possible world semantics in which the accessibility relation is reflexive. Note that in the case of a knowledge-producing action a that causes P to be known at do(a, s), there must be a situation s′ such that K(s′, s) and P(s′). But in the case of a belief-producing action, there is no guarantee that such a situation s′ exists. This is why the results do not directly extend to modal logics without a reflexive accessibility relation.

Acknowledgments

We thank Ray Reiter and Fangzhen Lin for useful discussions on the situation calculus and the frame problem. Additionally, we would like to thank both Ray Reiter and Sheila McIlraith for comments on an earlier version of this paper. This research was funded in part by the Natural Sciences and Engineering Research Council of Canada, and the Institute for Robotics and Intelligent Systems.

References

Etzioni, Oren; Hanks, Steve; Weld, Daniel; Draper, Denise; Lesh, Neal; and Williamson, Mike 1992. An approach to planning with incomplete information. In Nebel, Bernhard; Rich, Charles; and Swartout, William, editors, Principles of Knowledge Representation and Reasoning: Proceedings of the Third International Conference, Cambridge, Massachusetts. 115-125.

Frisch, Alan and Scherl, Richard 1991. A general framework for modal deduction. In Allen, J.A.; Fikes, R.; and Sandewall, E., editors, Principles of Knowledge Representation and Reasoning: Proceedings of the Second International Conference, San Mateo, CA: Morgan Kaufmann.

Haas, A. R. 1987. The case for domain-specific frame axioms. In Brown, F. M., editor, The Frame Problem in Artificial Intelligence: Proceedings of the 1987 Workshop. Morgan Kaufmann Publishers, Inc., San Mateo, California. 343-348.

Lespérance, Yves and Levesque, Hector J. 1990. Indexical knowledge in robot plans.
In Proceedings Eighth National Conference on Artificial Intelligence. 1030-1037.

Lespérance, Yves 1991. A Formal Theory of Indexical Knowledge and Action. Ph.D. Dissertation, University of Toronto.

Lin, Fangzhen and Reiter, Ray 1993. State constraints revisited. Presented at the Second Symposium on Logical Formalizations of Commonsense Reasoning.

McCarthy, J. and Hayes, P. 1969. Some philosophical problems from the standpoint of artificial intelligence. In Meltzer, B. and Michie, D., editors, Machine Intelligence 4. Edinburgh University Press, Edinburgh, UK. 463-502.

Moore, R.C. 1980. Reasoning about knowledge and action. Technical Note 191, SRI International.

Moore, R.C. 1985. A formal theory of knowledge and action. In Hobbs, J.R. and Moore, R.C., editors, Formal Theories of the Commonsense World. Ablex, Norwood, NJ.

Pednault, E.P.D. 1989. ADL: exploring the middle ground between STRIPS and the situation calculus. In Brachman, R.J.; Levesque, H.; and Reiter, R., editors, Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning. Morgan Kaufmann Publishers, Inc., San Mateo, California. 324-332.

Reiter, Raymond 1991. The frame problem in the situation calculus: A simple solution (sometimes) and a completeness result for goal regression. In Lifschitz, Vladimir, editor, Artificial Intelligence and Mathematical Theory of Computation: Papers in Honor of John McCarthy. Academic Press, San Diego, CA. 359-380.

Reiter, R. 1993. Proving properties of states in the situation calculus. Artificial Intelligence. To appear.

Scherl, Richard 1992. A Constraint Logic Approach to Automated Modal Deduction. Ph.D. Dissertation, University of Illinois.

Schubert, L.K. 1990. Monotonic solution of the frame problem in the situation calculus: an efficient method for worlds with fully specified actions. In Kyburg, H.
E.; Loui, R.P.; and Carlson, G.N., editors, Knowledge Representation and Defeasible Reasoning. Kluwer Academic Press, Boston, Mass. 23-67.
The Paradoxical Success of Fuzzy Logic*

Charles Elkan
Department of Computer Science and Engineering
University of California, San Diego
La Jolla, California 92093-0114

Abstract

This paper investigates the question of which aspects of fuzzy logic are essential to its practical usefulness. We show that as a formal system, a standard version of fuzzy logic collapses mathematically to two-valued logic, while empirically, fuzzy logic is not adequate for reasoning about uncertain evidence in expert systems. Nevertheless, applications of fuzzy logic in heuristic control have been highly successful. We argue that the inconsistencies of fuzzy logic have not been harmful in practice because current fuzzy controllers are far simpler than other knowledge-based systems. In the future, the technical limitations of fuzzy logic can be expected to become important in practice, and work on fuzzy controllers will also encounter several problems of scale already known for other knowledge-based systems.

1 Introduction

Fuzzy logic methods have been applied successfully in many real-world systems, but the coherence of the foundations of fuzzy logic remains under attack. Taken together, these two facts constitute a paradox, which this paper attempts to resolve. More concretely, the aim of this paper is to identify which aspects of fuzzy logic render it so useful in practice and which aspects are inessential. Our conclusions are based on a new mathematical result, on a survey of the literature on the use of fuzzy logic in heuristic control, and on our own practical experience developing two large-scale expert systems.

This paper is organized as follows. First, Section 2 proves and discusses the theorem mentioned above, which is that only two truth values are possible inside a standard system of fuzzy logic.
In an attempt to understand how fuzzy logic can be useful despite this paradox, Sections 3 and 4 examine the main practical uses of fuzzy logic, in expert systems and heuristic control. Our tentative conclusion is that successful applications of fuzzy logic are successful because of factors other than the use of fuzzy logic. Finally, Section 5 shows how current work on fuzzy control is encountering dilemmas that are already well-known from work in other areas of artificial intelligence, and Section 6 provides some overall conclusions.

*This work was supported in part by the National Science Foundation under Award No. IRI-9110813.

2 A paradox in fuzzy logic

As is natural in a research area as active as fuzzy logic, theoreticians have investigated many different formal systems, and applications have also used a variety of systems. Nevertheless, the basic intuitions are relatively constant. At its simplest, fuzzy logic is a generalization of standard propositional logic from two truth values false and true to degrees of truth between 0 and 1.

Formally, let A denote an assertion. In fuzzy logic, A is assigned a numerical value t(A), called the degree of truth of A, such that 0 ≤ t(A) ≤ 1. For a sentence composed from simple assertions and the logical connectives "and" (∧), "or" (∨), and "not" (¬), degree of truth is defined as follows:

Definition 1:
   t(A ∧ B) = min{t(A), t(B)}
   t(A ∨ B) = max{t(A), t(B)}
   t(¬A) = 1 − t(A)
   t(A) = t(B) if A and B are logically equivalent. ∎

In the last case of this definition, let "logically equivalent" mean equivalent according to the rules of classical two-valued propositional calculus. The use of alternative definitions of logical equivalence is discussed at the end of this section. Fuzzy logic is intended to allow an indefinite variety of numerical truth values. The result proved here is that only two different truth values are in fact possible in the formal system of Definition 1.
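The first three postulates of Definition 1 transcribe directly into code. As a quick sanity check (our own illustration, not from the paper), one can verify numerically that De Morgan's laws already follow from these three postulates alone:

```python
# The first three postulates of Definition 1, transcribed directly.
def t_and(ta, tb): return min(ta, tb)
def t_or(ta, tb):  return max(ta, tb)
def t_not(ta):     return 1.0 - ta

# De Morgan's laws hold under these operators for every pair of degrees:
# t(not(A and B)) = t((not A) or (not B)).
for i in range(11):
    for j in range(11):
        ta, tb = i / 10, j / 10
        assert abs(t_not(t_and(ta, tb)) - t_or(t_not(ta), t_not(tb))) < 1e-9
print("De Morgan holds for all sampled degrees")
```

It is the fourth postulate, that logically equivalent sentences receive equal degrees of truth, that the theorem below shows to be incompatible with more than two truth values.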
Theorem 1: For any two assertions A and B, either t(B) = t(A) or t(B) = 1 − t(A).

Proof: Let A and B be arbitrary assertions. Consider the two sentences ¬(A ∧ ¬B) and B ∨ (¬A ∧ ¬B). These are logically equivalent, so t(¬(A ∧ ¬B)) = t(B ∨ (¬A ∧ ¬B)). Now

   t(¬(A ∧ ¬B)) = 1 − min{t(A), 1 − t(B)}
                = 1 + max{−t(A), −1 + t(B)}
                = max{1 − t(A), t(B)}

and

   t(B ∨ (¬A ∧ ¬B)) = max{t(B), min{1 − t(A), 1 − t(B)}}.

The numerical expressions above are different if t(B) < 1 − t(B) < 1 − t(A), that is if t(B) < 1 − t(B) and t(A) < t(B), which happens if t(A) < t(B) < 0.5. So it cannot be true that t(A) < t(B) < 0.5.

Now note that the sentences ¬(A ∧ ¬B) and B ∨ (¬A ∧ ¬B) are both re-expressions of the material implication A → B. One by one, consider the seven other material implication sentences involving A and B:

   ¬A → B,  A → ¬B,  ¬A → ¬B,  B → A,  ¬B → A,  B → ¬A,  ¬B → ¬A.

By the same reasoning as before, none of the following can be true:

   1 − t(A) < t(B) < 0.5
   t(A) < 1 − t(B) < 0.5
   1 − t(A) < 1 − t(B) < 0.5
   t(B) < t(A) < 0.5
   1 − t(B) < t(A) < 0.5
   t(B) < 1 − t(A) < 0.5
   1 − t(B) < 1 − t(A) < 0.5.

Now let x = min{t(A), 1 − t(A)} and let y = min{t(B), 1 − t(B)}. Clearly x ≤ 0.5 and y ≤ 0.5, so if x ≠ y, then one of the eight inequalities derived must be satisfied. Thus t(B) = t(A) or t(B) = 1 − t(A). ∎

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

It is important to be clear as to what exactly is proved above, and what is not proved. The first point to note is that nothing in the statement or proof of the theorem depends on any particular definition of the meaning of the implication connective, either in two-valued logic or in fuzzy logic. Theorem 1 could be stated and proved without introducing the symbol →, since A → B is used just as a syntactic abbreviation for B ∨ (¬A ∧ ¬B). The second point to note is that the theorem also applies to any more general formal system that includes the four postulates listed in Definition 1.
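The crux of the proof can be checked numerically. With t(A) = 0.3 and t(B) = 0.4, so that t(A) < t(B) < 0.5, the two classically equivalent forms of A → B receive different degrees of truth under the first three postulates, so the fourth postulate cannot hold. A small sketch of ours for illustration:

```python
def t_and(ta, tb): return min(ta, tb)
def t_or(ta, tb):  return max(ta, tb)
def t_not(ta):     return 1.0 - ta

ta, tb = 0.3, 0.4   # t(A) < t(B) < 0.5, the case ruled out in the proof

# not(A and not B): evaluates to max{1 - t(A), t(B)} = 0.7
lhs = t_not(t_and(ta, t_not(tb)))
# B or (not A and not B): evaluates to max{t(B), min{1 - t(A), 1 - t(B)}} = 0.6
rhs = t_or(tb, t_and(t_not(ta), t_not(tb)))

print(round(lhs, 3), round(rhs, 3))   # prints: 0.7 0.6
```

The two logically equivalent sentences come out with degrees 0.7 and 0.6, exactly the clash the proof exploits.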
Any extension of fuzzy logic to accommodate first-order sentences, for example, collapses to two truth values if it admits the propositional fuzzy logic of Definition 1 as a special case. The theorem also applies to fuzzy set theory, because Definition 1 can be understood as axiomatizing degree of membership for fuzzy set intersections, unions, and complements.

On the other hand, the theorem does not necessarily apply to any version of fuzzy logic that modifies or rejects any of the four postulates of Definition 1. It is however possible to carry through the proof of the theorem in many variant systems of fuzzy logic. In particular, the theorem remains true when negation is modelled by any operator in the Sugeno class [Sugeno, 1977], and when disjunction or conjunction are modelled by operators in the Yager classes [Yager, 1980].¹

Of course, the last postulate of Definition 1 is the most controversial one, and the postulate that one naturally first wants to modify in order to preserve a continuum of degrees of truth. Unfortunately, it is not clear which subset of classical tautologies and equivalences should be, or can be, required to hold in a system of fuzzy logic. What all formal fuzzy logics have in common is that they reject at least one classical tautology, namely the law of excluded middle (the assertion ¬A ∨ A). Intuitionistic logic [van Dalen, 1983] also rejects this law, but rejects in addition De Morgan's laws, which are entailed by the first three postulates of Definition 1. One could hope that fuzzy logic is therefore a formal system whose tautologies are a subset of the classical tautologies, and a superset of the intuitionistic tautologies.
However, Theorem 1 can still be proved even if logical equivalence is restricted to mean intuitionistic equivalence.² It is an open question how to choose a notion of logical equivalence that simultaneously (i) remains philosophically justifiable, (ii) allows useful inferences in practice, and (iii) removes the opportunity to prove results similar to Theorem 1.

3 Fuzzy logic in expert systems

Any logical system or calculus for reasoning such as fuzzy logic must be motivated by its applicability to phenomena that we want to reason about. The operations of the calculus must model the behaviour of the ideas in certain classes. One way to defend a calculus is to show that it succeeds in interesting applications, which has certainly been done for fuzzy logic. However, if we are to have confidence that the successful application of the calculus is reproducible, we must be persuaded that the calculus correctly models the interaction of all phenomena in a well-characterized general class.

The basic motivation for fuzzy logic is clear: many ideas resemble traditional assertions, but they are not naturally either true or false.

¹The postulates of standard fuzzy logic have been used quite widely, but it happens that even the same author sometimes adopts them and sometimes does not. For example (following [Gaines, 1983]) Bart Kosko explicitly uses all four postulates to resolve Russell's paradox of the barber who shaves all men except those who shave themselves [Kosko, 1990], but in later work he uses addition and multiplication instead of maximum and minimum [public lecture at UCSD, 1991].

²The Gödel translations [van Dalen, 1983, p. 172] of classically equivalent sentences are intuitionistically equivalent. For any sentence, the first three postulates of Definition 1 make its degree of truth and the degree of truth of its Gödel translation equal. Thus the proof given for Theorem 1 can be carried over directly.
Rather, uncertainty of some sort is attached to them. Fuzzy logic is an attempt to capture valid patterns of reasoning about uncertainty. The notion is now well accepted that there exist many different types of uncertainty, vagueness, and ignorance [Smets, 1991]. However, there is still debate as to what types of uncertainty are captured by fuzzy logic.³ Many papers have discussed at a high level of mathematical abstraction the question of whether fuzzy logic provides suitable laws of thought for reasoning about probabilistic uncertainty. Our conclusion from practical experience in the construction of expert systems is that fuzzy logic is not uniformly suitable for reasoning about uncertain evidence.

A simple example shows what the difficulty is. Suppose the universe of discourse is a collection of melons, and there are two predicates red and watermelon, where red and green refer to the colour of the flesh of a melon. For some not very well-known melon x, suppose that t(red(x)) = 0.5 and t(watermelon(x)) = 0.8, meaning that the evidence that x is red inside has strength 0.5 and the evidence that x is a watermelon has strength 0.8. According to the rules of fuzzy logic, t(red(x) ∧ watermelon(x)) = 0.5. This is not reasonable, because watermelons are normally red inside. Redness and being a watermelon are mutually reinforcing facts, so intuitively, x is a red watermelon with certainty greater than 0.5.

The deep issue here is that the degree of uncertainty of a conjunction is not in general determined uniquely by the degrees of uncertainty of the assertions entering into the conjunction. There does not exist a function f such that the rule t(A ∧ B) = f(t(A), t(B)) is always valid, when t represents the degree of certainty of fragments of evidence. The certainty of A ∧ B depends on the content of the assertions A and B as well as on their numerical certainty.
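The melon example can be made concrete. Under Definition 1 the degrees of the conjuncts alone force one answer, while probability theory, via the standard Fréchet bounds on a conjunction (computed here as our own illustration), forces only an interval:

```python
def fuzzy_and(ta, tb):
    # Definition 1: the conjunction's degree is a function of the
    # conjuncts' degrees alone, regardless of how the facts interact.
    return min(ta, tb)

t_red, t_melon = 0.5, 0.8
print(fuzzy_and(t_red, t_melon))   # prints 0.5, whatever the dependence

# Probability theory instead bounds Pr(red and watermelon) to an interval;
# the true value depends on the dependence between the two events
# (e.g. 0.4 if independent; the maximum 0.5 if every red melon here
# happens to be a watermelon).
lower = max(0.0, t_red + t_melon - 1.0)   # the 1 - ((1-Pr(A)) + (1-Pr(B))) bound
upper = min(t_red, t_melon)
print(lower, upper)                        # roughly the interval [0.3, 0.5]
```

Note that the upper bound min{Pr(A), Pr(B)} is also why probability theory can never assign this conjunction a value above 0.5.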
This fact is recognized implicitly in probabilistic reasoning, since probability theory does not assign unique probability values to conjunctions. What probability theory says is that

   1 − ((1 − Pr(A)) + (1 − Pr(B))) ≤ Pr(A ∧ B) ≤ min{Pr(A), Pr(B)}.

The actual probability value depends on further aspects of the situation that have not been stated. For example, if the two assertions A and B are independent, then the probability of their conjunction is Pr(A) · Pr(B).

Although probability theory is more flexible than fuzzy logic, the red watermelon example shows that it is not a universally adequate system of laws of thought for reasoning about all types of uncertainty either. If t(red(x)) = 0.5 and t(watermelon(x)) = 0.8, then it is natural to want t(red(x) ∧ watermelon(x)) > 0.5, which probability theory cannot permit.

³Misunderstanding on these issues has reached the non-technical press: see articles based on [Kosko, 1990] in Business Week (New York, May 21, 1990), the Financial Times (London, June 5, 1990), the Economist (London, June 9, 1990), Popular Science (New York, June 1990), and elsewhere.

The difficulties identified here with fuzzy logic and probability theory as formalisms for reasoning about uncertainty do occur in practice. We have recently designed, implemented, and deployed at IBM two large-scale expert systems [Hekmatpour and Elkan, 1993; Hekmatpour and Elkan, 1992]. One system, CHATKB, solves problems encountered by engineers while using VLSI design tools. The other system, WESDA, diagnoses faults in machines that polish semiconductor wafers. The knowledge possessed by each system consists of a library of cases and a deep domain theory which is represented as a decision tree where each node corresponds to a fact about the state of the tool being diagnosed. Relevant cases are attached to each leaf of the decision tree. Roughly, the children of each node represent evidence in favour of the parent node, or potential causes of the parent node.
CHATKB or WESDA retrieves an old case to solve a new problem by choosing a path through its decision tree. A path from the root to a leaf is chosen by combining a priori child node likelihoods with evidence acquired through questioning the user. We have found that this process of combining evidence is a type of reasoning about uncertainty that cannot be modelled adequately by the axioms of fuzzy logic, or by those of probability theory.

Methods for reasoning about uncertain evidence are an active research area in artificial intelligence, and the conclusions reached in this section are not new. Our practical experience does, however, independently confirm previous arguments about the inadequacy of systems for reasoning about uncertainty that propagate numerical factors according only to which connectives appear in assertions [Pearl, 1988].

4 Fuzzy logic in heuristic control

Heuristic control is the area of application in which fuzzy logic has been the most successful. There is a wide consensus that the techniques of traditional mathematical control theory are often inadequate. The reasons for this include the reliance of traditional methods on linear models of systems to be controlled, their propensity to produce "bang-bang" control regimes, and their focus on worst-case convergence and stability rather than typical-case efficiency. Heuristic control techniques give up mathematical simplicity and performance guarantees in exchange for increased realism and better performance in practice. A heuristic controller using fuzzy logic is shown to have less overshoot and quicker settling in [Burkhardt and Bonissone, 1992], for example.

The first demonstrations that fuzzy logic could be used in building heuristic controllers were published in the 1970s [Zadeh, 1973; Mamdani and Assilian, 1975].
Work using fuzzy logic in heuristic control continued through the 1980s, and recently there has been an explosion of industrial interest in this area; for surveys see [Yamakawa and Hirota, 1989] and [Lee, 1990]. One reason why fuzzy controllers have attracted so much interest recently is that they can be implemented by embedded specialized microprocessors [Yamakawa, 1989].

Despite the intense industrial interest (and, in Japan, consumer interest) in fuzzy logic, the technology continues to meet resistance. For example, at the 1991 International Joint Conference on Artificial Intelligence (IJCAI-91, Sydney, Australia) Takeo Kanade gave an invited talk on computer vision in which he described at length Matsushita's camcorder image stabilizing system [Uomori et al., 1990], without mentioning that it uses fuzzy logic.

Almost all currently deployed heuristic controllers using fuzzy logic are similar in five important aspects. A good description of a prototypical example of this standard architecture appears in [Sugeno et al., 1989].

• First, the knowledge base of a typical fuzzy controller consists of under 100 rules; often under 20 rules are used. Fuzzy controllers are orders of magnitude smaller than systems built using traditional artificial intelligence formalisms: the knowledge base of CHATKB, for example, occupies many megabytes.

• Second, the knowledge entering into fuzzy controllers is structurally shallow, both statically and dynamically. It is not the case that some rules produce conclusions which are then used as premises in other rules. Statically, rules are organized in a flat list, and dynamically, there is no run-time chaining of inferences.

• Third, the knowledge recorded in a fuzzy controller typically reflects immediate correlations between the inputs and outputs of the system to be controlled, as opposed to a deep, causal model of the system.
The premises of rules refer to sensor observations and rule conclusions refer to actuator settings.⁴

• The fourth important feature that deployed fuzzy controllers share is that the numerical parameters of their rules and of their qualitative input and output modules are tuned in a learning process. Many different learning algorithms have been used for this purpose, and neural network learning mechanisms have been especially successful [Keller and Tahani, 1992; Yager, 1992]. What the algorithms used for tuning fuzzy controllers themselves have in common is that they are gradient-descent "hill-climbing" algorithms that learn by local optimization [Burkhardt and Bonissone, 1992].

• Last but not least, by definition fuzzy controllers use the operators of fuzzy logic. Typically "minimum" and "maximum" are used, as are explicit possibility distributions (usually trapezoidal), and some fuzzy implication operator.

⁴Rule premises refer to qualitative ("linguistic" in the terminology of fuzzy logic) sensor observations and rule conclusions refer to qualitative actuator settings, whereas outputs and inputs of sensors and actuators are typically real-valued. This means that two controller components usually exist which map between numerical values and qualitative values. In fuzzy logic terminology, these components are said to defuzzify outputs and implement membership functions respectively. Their behaviour is not itself describable using fuzzy logic, and typically they are implemented procedurally.

The question which naturally arises is which of the features of fuzzy controllers identified above are essential to their success. It appears that the first four shared properties are vital to practical success, because they make the celebrated credit assignment problem solvable, while the use of fuzzy logic is not essential.
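The standard architecture just described (a flat rule list, trapezoidal membership functions, no chaining, and a procedural defuzzification step) can be sketched in a few lines. All parameters below, including the membership breakpoints, the rule list, and the Sugeno-style weighted-average defuzzifier, are invented for illustration and are not taken from any cited system:

```python
# A minimal single-input fuzzy controller in the prototypical architecture:
# flat rules mapping a sensor reading directly to an actuator setting.

def trapezoid(a, b, c, d):
    """Membership function: rises on [a,b], is 1 on [b,c], falls on [c,d]."""
    def mu(x):
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)
    return mu

# Qualitative ("linguistic") values for one sensor: the tracking error.
err_neg  = trapezoid(-10.0, -10.0, -2.0, 0.0)
err_zero = trapezoid(-2.0, -0.5, 0.5, 2.0)
err_pos  = trapezoid(0.0, 2.0, 10.0, 10.0)

# Flat rule list: (premise membership, actuator setpoint for that rule).
# No chaining: premises read the sensor, conclusions set the actuator.
rules = [(err_neg, -1.0), (err_zero, 0.0), (err_pos, 1.0)]

def control(error):
    """Fire every rule, then defuzzify by a weighted average of setpoints."""
    fired = [(mu(error), out) for mu, out in rules]
    total = sum(w for w, _ in fired)
    if total == 0.0:
        return 0.0
    return sum(w * out for w, out in fired) / total

print(control(5.0))   # deep in the "positive" region: full action 1.0
print(control(0.0))   # zero error: no action
```

With a single input there is no conjunction of premises to take the minimum over; the point of the sketch is the flat, shallow, sensor-to-actuator structure that the five properties describe.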
In a nutshell, the credit assignment problem is to discover how to modify part of a complex system in order to improve it, given only an evaluation of its overall performance. In general, solving the credit assignment problem is impossible: the task is tantamount to generating many bits of information (a change to the internals of a complex system) from just a few bits of information (the input/output performance of the system). However, the first four shared features of fuzzy controllers make the credit assignment problem solvable for them. First, since it consists of only a small number of rules, the knowledge base of a fuzzy controller is a small system to modify. Second, the short paths between the inputs and outputs of a fuzzy controller mean that the effect of any change in the controller is localized, so it is easier to discover a change that has a desired effect without having other undesired consequences. Third, the iterative way in which fuzzy controllers are refined allows a large number of observations of input/output performance to be used for system improvement. Fourth, the continuous nature of the many parameters of a fuzzy controller allows small quantities of performance information to be used to make small system changes.

Thus, what makes fuzzy controllers useful in practice is the combination of a rule-based formalism with numerical factors qualifying rules and the premises entering into rules. The principal advantage of rule-based formalisms is that knowledge can be acquired from experts or from experience incrementally: individual rules and premises can be refined independently, or at least more independently than items of knowledge in other formalisms. Numerical factors have two main advantages. They allow a heuristic control system to interface smoothly with the continuous outside world, and they allow it to be tuned gradually: small changes in numerical factor values cause small changes in behaviour.
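The tuning loop that these properties make workable can be illustrated with an even simpler system than a fuzzy controller: one continuous rule parameter, adjusted by local search against observed performance. The toy plant, the proportional rule, and the loss below are our own invention, not from any system cited in the text:

```python
# Toy credit assignment: one knob, direct performance feedback. Nudge the
# parameter by a small step and keep the change if the measured error drops.

def plant_response(gain, setpoint=1.0, steps=50):
    """A toy first-order plant driven by one rule: action = gain * error."""
    y = 0.0
    loss = 0.0
    for _ in range(steps):
        error = setpoint - y
        y += 0.1 * gain * error   # actuator effect on the plant state
        loss += error * error     # accumulated squared tracking error
    return loss

gain, step = 0.2, 0.05
best = plant_response(gain)
for _ in range(200):              # iterative refinement from observations
    for delta in (step, -step):
        trial = plant_response(gain + delta)
        if trial < best:          # credit assignment is trivial here:
            gain, best = gain + delta, trial   # small change, local effect
            break

print(gain > 0.2 and best < plant_response(0.2))   # prints True
```

Because the parameter is continuous and the feedback loop is short, each small nudge yields an unambiguous performance signal, which is exactly the situation the four properties create; nothing about the search itself is specific to fuzzy logic.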
None of these features contributing to the success of systems based on fuzzy logic is unique to fuzzy logic. It seems that most current applications of fuzzy logic could use other numerical rule-based formalisms instead, if a learning algorithm was used to tune numerical values for those formalisms, as is customary when using fuzzy logic. Several knowledge representation formalisms that are rule-based and numerical have been proposed besides fuzzy logic. For example, well-developed systems are presented in [Sandewall, 1989] and [Collins and Michalski, 1989; Dontas and Zemankova, 1990]. To the extent that numerical qualification factors can be tuned in these formalisms, we expect that they would be equally useful for constructing heuristic controllers. Indeed, at least one has already been so used [Sammut and Michie, 1991].

5 Recapitulating mainstream AI

Several research groups are attempting to scale up systems based on fuzzy logic, and to lift the architectural limitations of current fuzzy controllers. For example, a methodology for designing block-structured controllers with guaranteed stability properties is studied in [Tanaka and Sugeno, 1992], and methodological problems in constructing models of complex systems based on deep knowledge are considered in [Pedrycz, 1991]. Controllers with intermediate variables, thus with chaining of inferences, are investigated in [von Altrock et al., 1992].

However, the designers of larger systems based on fuzzy logic are encountering all the problems of scale already identified in traditional knowledge-based systems. It appears that the history of research in fuzzy logic is recapitulating the history of research in other areas of artificial intelligence.
This section discusses the knowledge engineering dilemmas faced by developers of fuzzy controllers, and then points to dealing with state information as another issue arising in research on fuzzy controllers that has also arisen previously.

The rules in the knowledge bases of current fuzzy controllers are obtained directly by interviewing experts. Indeed, the original motivation for using fuzzy logic in building heuristic controllers was that fuzzy logic is designed to capture human statements involving vague quantifiers such as "considerable." More recently, a consensus has developed that research must focus on obtaining "procedures for fuzzy controller design based on fuzzy models of the process" [Driankov and Eklund, 1991]. Mainstream work on knowledge engineering, however, has already transcended the dichotomy between rule-based and model-based reasoning.

Expert systems whose knowledge consists of if-then rules have at least two disadvantages. First, maintenance of a rule base becomes complex and time-consuming as the size of a system increases [Newquist, 1988]. Second, rule-based systems tend to be brittle: if an item of knowledge is missing from a rule, the system may fail to find a solution, or worse, may draw an incorrect conclusion [Abbott, 1988]. The main disadvantage of model-based approaches, on the other hand, is that it is very difficult to construct sufficiently detailed and accurate models of complex systems. Moreover, models constructed tend to be highly application-specific and not generalizable [Bourne et al., 1991].

Many recent expert systems, therefore, including CHATKB and WESDA, are neither rule-based nor model-based in the standard way. For these systems, the aim of the knowledge engineering process is not simply to acquire knowledge from human experts, whether this knowledge is correlational as in present fuzzy controllers, or deep as in model-based expert systems.
Rather, the aim is to develop a theory of the situated performance of the experts. Concretely, under this view of knowledge engineering, knowledge bases are constructed to model the beliefs and practices of experts and not any "objective" truth about underlying physical processes. An important benefit of this approach is that the organization of an expert's beliefs provides an implicit organization of knowledge about the external process with which the knowledge-based system is intended to interact.

The more sophisticated view of knowledge engineering just outlined is clearly relevant to research on constructing fuzzy controllers more intricate than current ones. For a second example of relevant previous artificial intelligence work, consider controllers that can carry state information from one moment to the next. These are mentioned as a topic for future research in [von Altrock et al., 1992]. Symbolic AI formalisms for representing systems whose behaviour depends on their history have been available since the 1960s [McCarthy and Hayes, 1969]. Neural networks with similar properties (called recurrent networks) have been available for several years [Elman, 1990], and have already been used in control applications [Karim and Rivera, 1992]. It remains to be seen whether research from a fuzzy logic perspective will provide new solutions to the fundamental issues of artificial intelligence.

6 Conclusions

Applications of fuzzy logic in heuristic control have been highly successful, despite the collapse of fuzzy logic as a formal system to two-valued logic, and despite the inadequacy of fuzzy logic for reasoning about uncertainty in expert systems. The inconsistencies of fuzzy logic have not been harmful in practice because current fuzzy controllers are far simpler than other knowledge-based systems.
First, long chains of inference are not performed in controllers based on fuzzy logic, so there is no opportunity for inconsistency between paths of reasoning that should be equivalent to manifest itself. Second, the knowledge recorded in a fuzzy controller is not a consistent causal model of the process being controlled, but rather an assemblage of visible correlations between sensor observations and actuator settings. Since this knowledge is not itself consistent and probabilistic, the probabilistic inadequacy of fuzzy logic is not an issue. Moreover, the ability to refine the parameters of a fuzzy controller iteratively can compensate for the arbitrariness of the fuzzy logic operators as applied inside a limited domain.

The common assumption that heuristic controllers based on fuzzy logic are successful because they use fuzzy logic appears to be an instance of the post hoc, ergo propter hoc fallacy. The fact that using fuzzy logic is correlated with success does not entail that using fuzzy logic causes success. In the future, the technical limitations of fuzzy logic identified in this paper can be expected to become important in practice. Other general dilemmas of artificial intelligence work can also be expected to become critical, in particular the issue of designing learning mechanisms that can solve the credit assignment problem when the simplifying features of present controllers are absent.

Acknowledgements. The author is grateful to several colleagues for useful comments on earlier versions of this paper, and to John Lamping for asking if Theorem 1 holds when equivalence is understood intuitionistically.

References

[Abbott, 1988] K. Abbott. Robust operative diagnosis as problem solving in a hypothesis space. In Proceedings of the National Conference on Artificial Intelligence, pages 369-374, 1988.

[Bourne et al., 1991] J. R. Bourne et al. Organizing and understanding beliefs in advice-giving diagnostic systems.
IEEE Transactions on Knowledge and Data Engineering, 3(3):269-280, September 1991.
[Burkhardt and Bonissone, 1992] David G. Burkhardt and Piero P. Bonissone. Automated fuzzy knowledge base generation and tuning. In Proceedings of the IEEE International Conference on Fuzzy Systems, pages 179-188, San Diego, California, March 1992.
[Collins and Michalski, 1989] A. Collins and R. Michalski. The logic of plausible reasoning: A core theory. Cognitive Science, 13(1):1-49, 1989.
[Dontas and Zemankova, 1990] K. Dontas and M. Zemankova. APPLAUSE: an implementation of the Collins-Michalski theory of plausible reasoning. Information Sciences, 52(2):111-139, 1990.
[Driankov and Eklund, 1991] Dimiter Driankov and Peter Eklund. Workshop goals. In IJCAI'91 Workshop on Fuzzy Control Preprints, Sydney, Australia, August 1991.
[Elman, 1990] Jeffrey L. Elman. Finding structure in time. Cognitive Science, 14(2):179-211, 1990.
[Gaines, 1983] Brian R. Gaines. Precise past, fuzzy future. International Journal of Man-Machine Studies, 19:117-134, 1983.
[Hekmatpour and Elkan, 1992] Amir Hekmatpour and Charles Elkan. A multimedia expert system for wafer polisher maintenance. Technical Report CS92-257, Department of Computer Science and Engineering, University of California, San Diego, 1992.
[Hekmatpour and Elkan, 1993] Amir Hekmatpour and Charles Elkan. Categorization-based diagnostic problem solving in the VLSI design domain. In Proceedings of the IEEE International Conference on Artificial Intelligence for Applications, pages 121-127, March 1993.
[Karim and Rivera, 1992] M. N. Karim and S. L. Rivera. Comparison of feed-forward and recurrent neural networks for bioprocess state estimation. Computers and Chemical Engineering, 16:S369-S377, 1992.
[Keller and Tahani, 1992] J. M. Keller and H. Tahani. Backpropagation neural networks for fuzzy logic. Information Sciences, 62(3):205-221, 1992.
[Kosko, 1990] Bart Kosko. Fuzziness vs. probability. International Journal of General Systems, 17(2-3):211-240, 1990.
[Lee, 1990] C. C. Lee. Fuzzy logic in control systems, parts 1 and 2. IEEE Transactions on Systems, Man, and Cybernetics, 20(2):404-435, March 1990.
[Mamdani and Assilian, 1975] E. H. Mamdani and S. Assilian. An experiment in linguistic synthesis with a fuzzy logic controller. International Journal of Man-Machine Studies, 7:1-13, 1975.
[McCarthy and Hayes, 1969] John McCarthy and Patrick J. Hayes. Some philosophical problems from the standpoint of artificial intelligence. In Machine Intelligence, volume 4, pages 463-502. Edinburgh University Press, 1969.
[Newquist, 1988] H. P. Newquist. Struggling to maintain. AI Expert, 3(8):69-71, 1988.
[Pearl, 1988] Judea Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann Publishers, Inc., 1988.
[Pedrycz, 1991] Witold Pedrycz. Fuzzy modelling: fundamentals, construction and evaluation. Fuzzy Sets and Systems, 41(1):1-15, 1991.
[Sammut and Michie, 1991] Claude Sammut and Donald Michie. Controlling a "black box" simulation of a spacecraft. AI Magazine, 12(1):56-63, 1991.
[Sandewall, 1989] Erik Sandewall. Combining logic and differential equations for describing real-world systems. In Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning (KR'89), pages 412-420, 1989.
[Smets, 1991] Philippe Smets. Varieties of ignorance and the need for well-founded theories. Information Sciences, 57-58:135-144, 1991.
[Sugeno et al., 1989] Michio Sugeno et al. Fuzzy algorithmic control of a model car by oral instructions. Fuzzy Sets and Systems, 32(2):135-156, 1989.
[Sugeno, 1977] Michio Sugeno. Fuzzy measures and fuzzy integrals: a survey. In Madan M. Gupta, George N. Saridis, and Brian R. Gaines, editors, Fuzzy Automata and Decision Processes, pages 89-102. North-Holland, 1977.
[Tanaka and Sugeno, 1992] K. Tanaka and M. Sugeno. Stability analysis and design of fuzzy control systems.
Fuzzy Sets and Systems, 45(2):135-156, 1992.
[Uomori et al., 1990] Kenya Uomori, Atsushi Morimura, Hirohumi Ishii, Takashi Sakaguchi, and Yoshinori Kitamura. Automatic image stabilizing system by full-digital signal processing. IEEE Transactions on Consumer Electronics, 36(3):510-519, August 1990.
[van Dalen, 1983] Dirk van Dalen. Logic and Structure. Springer Verlag, second edition, 1983.
[von Altrock et al., 1992] C. von Altrock, B. Krause, and Hans J. Zimmermann. Advanced fuzzy logic control of a model car in extreme situations. Fuzzy Sets and Systems, 48(1):41-52, 1992.
[Yager, 1980] Ronald R. Yager. On a general class of fuzzy connectives. Fuzzy Sets and Systems, 4:235-242, 1980.
[Yager, 1992] Ronald R. Yager. Implementing fuzzy logic controllers using a neural network framework. Fuzzy Sets and Systems, 48(1):53-64, 1992.
[Yamakawa and Hirota, 1989] Takeshi Yamakawa and K. Hirota, editors. Special issue on applications of fuzzy logic control to industry. Fuzzy Sets and Systems, 32(2), 1989.
[Yamakawa, 1989] Takeshi Yamakawa. Stabilization of an inverted pendulum by a high-speed fuzzy logic controller hardware system. Fuzzy Sets and Systems, 32(2):161-180, 1989.
[Zadeh, 1973] Lotfi A. Zadeh. Outline of a new approach to the analysis of complex systems and decision processes. IEEE Transactions on Systems, Man, and Cybernetics, 3:28-44, 1973.

Rule-Based Reasoning 703
Exploring the Structure of Rule Based Systems*

Clifford Grossner, Alun D. Preece, P. Gokul Chander, T. Radhakrishnan and Ching Y. Suen
Computer Science Department, Concordia University
1455 De Maisonneuve Blvd. Ouest, Montréal, Québec, Canada H3G 1M8
cliff@cs.concordia.ca

Abstract

In order to measure and analyze the performance of rule-based expert systems, it is necessary to explicate the internal structure of their rule bases. Although a number of attempts have been made in the literature to formalize the structure of a rule base using the notion of a rule base execution path, none of these is entirely adequate. This paper reports a new formal definition for the notion of a rule base execution path, which adequately supports both validation and performance analysis of rule-based expert systems. This definition for the execution paths in a rule base has been embodied in a rule base analysis tool called Path Hunter. Path Hunter is used to analyse a rule base consisting of 442 CLIPS rules. In this analysis, the problem of combinatorial explosion, which arises during path enumeration, is controlled due to the manner in which paths are defined. The analysis raises several issues which should be taken into account in the engineering of rule-based systems.

Introduction and Motivation

Expert systems characteristically achieve a high level of performance in solving ill structured problems [Simon, 1973], using a body of knowledge specific to the problem domain. This knowledge is represented explicitly in the knowledge base (KB) of the system, which is kept separate from the mechanism which applies the knowledge to solve problems (the inference engine). The intuitive appeal of rules for solving ill structured problems results from rules often being easy for non-programmers to read and write.
However, the behaviour of large rule-based systems is almost always hard to predict because, although individual rules can be easy to understand on their own, interactions that can occur between rules are not obvious. As a consequence, it is hard to measure and analyze the performance of rule-based expert systems. Tools are required to assist developers in understanding the dependencies that exist between individual rules, and

*This work was funded in part by Bell Canada Inc.

indicate how sets of rules will operate together to complete tasks in the problem-solving process. This paper describes a formal method for detecting the potential interactions between the rules in a rule base [Grossner et al., 1992a], and the development of a tool embodying this method, called Path Hunter [Gokulchander et al., 1992]. Path Hunter is used to analyse the rule base of the Blackbox Expert, an experimental DAI testbed [Grossner et al., 1991]. The Blackbox Expert is a rule-based expert system designed to solve a puzzle called Blackbox.
Our desire to explicate the structure of rule bases is motivated by work in two related fields of artificial intelligence: expert systems, and distributed artificial intelligence (DAI). We use the term structure to refer to the dependencies between the rules and potential interactions that can occur between the rules in the rule base. Mapping the structure of a rule-based expert system has become important for structural validation of expert systems [Rushby and Crow, 1990] as well as performance analysis of cooperative distributed problem solving (CDPS) systems [Durfee et al., 1989]. In structural validation, the structure of the rule base of an expert system is used as a guide for the generation of test cases and as an indicator of the "completeness" of the validation process [Rushby and Crow, 1990].
Performance analysis of CDPS systems requires a description of the structure of the rule bases of the expert systems in the CDPS to predict the operations that they will be able to perform given the 'information' (data items) available to them as part of the CDPS system [Grossner et al., 1992b].
We seek to capture the structure of a rule base in terms of chains of inter-dependent rules called paths. If the notion of path in a rule base is to be useful for validation and performance analysis of expert systems, it must satisfy the following criteria:
o The notion of path must be well-defined and unambiguous, so that it can serve as an adequate specification for an automatic path-enumeration program. Only sequences of rules that depend upon each other for their firing are to be considered part of the same path.
o When the rules forming a path fire, their combined actions carry out an intended function of the system designer, and can be seen as having significantly advanced the state of the problem being solved.
o The computational effort involved in finding the rules that comprise a path should not be too large, to enable efficient automatic enumeration of paths.
o The number of paths that will be enumerated for a rule base using this definition of path must be computable; that is, we want to prohibit a combinatorial explosion in finding paths.

704 Grossner
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

While researchers in DAI have speculated about the effects of data distribution on the performance of a CDPS system [Lesser and Corkill, 1981; Fox, 1981], there have not been any attempts at modeling the rule base of an expert system for the purposes of understanding the magnitude of the change in the performance of an expert system given a change in the data distribution of the CDPS. Durfee et al.
have observed several of the effects of data distribution on specific test cases of the Distributed Vehicle Monitoring Problem [Durfee et al., 1987]. Other efforts at modeling CDPS systems have been for the purpose of understanding agent behavior and potential agent interactions [Shoham, 1991].
The expert system validation literature reports a number of approaches for defining the execution paths in a rule base. The EVA system [Chang et al., 1990] defines a dependency graph (DG) that is used to generate test cases for validating an expert system. The definition of the rule-dependency relation used to construct the DG is unsatisfactory because it allows EVA to consider rules to depend upon each other when in fact they do not; thus, many paths are enumerated which do not reflect 'paths' that will occur when a rule base is executed. Rushby and Crow [Rushby and Crow, 1990] propose a refinement of the EVA DG method, where the rule-dependency relation is improved, but under certain conditions it is still unsatisfactory for the same reason. A stricter method for determining rule dependencies is proposed by Kiper [Kiper, 1992], which models the state of the rule-based system as it would appear when the rules are fired. While this method permits only true rule dependencies to be captured, the rule base states are very costly to compute. Therefore, none of the previous approaches satisfies our criteria.

Anatomy of a Rule Base

Conventionally, an expert system Ξ is built to solve an ill structured problem, which we denote by P′. For our purposes, a rule-based expert system Ξ is considered to be a triple (E, RB, WM) where: E is an inference engine, RB is the rule base used by the inference engine, and WM is the working memory where facts (representing current data) are stored. The facts that are stored in the working memory consist of a predicate name and a list of arguments.
Predicates indicate the relationships that exist among the elements of the list in a fact. Let R be the set of all predicates used by Ξ in solving P′. We use the notation f_i to represent a fact that may be present in the working memory WM, where f_i = (α_i, l_i) such that l_i is a list of data elements, and α_i ∈ R identifies the relationship between the elements of l_i. Finally, the state of Ξ is denoted by the set of facts S present in WM at a given time.
The rule base RB of an expert system Ξ is the set of rules r_i for solving P′. When a rule fires, it changes the state of the expert system by adding or removing facts from the WM. We consider a rule r_i to be composed of an LHS and an RHS where: the LHS indicates the fact templates such that at least one instance of each template must be present in WM for the rule to fire, denoted by Z^ri; the RHS indicates the set of facts that may be asserted by r_i, denoted by A^ri.
The Blackbox Expert has been developed using the CLIPS expert system tool [Giarratano and Riley, 1989], and has been designed to solve the Blackbox puzzle. The Blackbox puzzle consists of an opaque square grid (box) with a number of balls hidden in the grid squares. The puzzle solver can fire beams into the box. These beams interact with the balls, allowing the puzzle solver to determine the contents of the box based on the entry and exit points of the beams. The puzzle solver must determine whether each location of the grid is empty or contains a ball, and in addition whether the conclusion drawn for the location is certain. As an intermediate step, the puzzle solver can determine that there is evidence indicating that a square is both empty and contains a ball, signalling a conflict. Conflicts may be resolved as additional evidence is obtained. For example, additional evidence may indicate that the ball is certain.
Thus, the grid location would be considered to certainly contain a ball, and the evidence suggesting that the location is empty would be disproven. The objective of the Blackbox puzzle solver is to determine the contents of as many of the grid squares as possible, while minimizing the number of beams fired.
An example rule (Ball-Certain) from the CLIPS rule base for the Blackbox puzzle is shown in Figure 1. Table 1 lists the predicates and user defined functions used by Ball-Certain and the other rules from the Blackbox Expert's rule base that will be used for example purposes. The user defined functions represent an indirect access to WM; thus, each function has a predicate associated with it, as shown in Table 1. Ball-Certain is activated when ample evidence is gathered to support making certain a ball located in the Blackbox grid. This rule will be activated by the presence of a fact using the predicate BALL-CERTAIN as well as a fact using the predicate CERTAIN-BALLS. Once the rule is activated, it will check whether the grid location is already certain, in which case no action is needed. Otherwise, the location is made certain; and if a conflict exists, a fact using the predicate RMCB is asserted, indicating that the conflict is to be resolved.

; Update the grid to indicate that a ball in a particular location is to be considered
; a certain ball.
(defrule Ball-Certain
  ?var1 <- (BALL-CERTAIN ?sn ?rule-ID ?row ?col) ; A ball is to be made certain
  ?var2 <- (CERTAIN-BALLS ?cb)                   ; Get number of certain balls located
  =>
  (retract ?var1)
  (if (not (iscertain ?row ?col)) then           ; Is the ball already marked as certain?
    (retract ?var2)
    (assert (CERTAIN-BALLS =(+ ?cb 1)))          ; Increment # of certain balls
    (setcertain ?row ?col)                       ; Update the grid making the ball certain
    (if (eq (status ?row ?col) CONFLICT) then    ; Is there a conflict?
      (assert (RMCB ?sn ?rule-ID ?row ?col)))))  ; Indicate the conflict is to be resolved
; end rule Ball-Certain

Figure 1: Sample CLIPS Rule

USER FN     Interpretation                  Assoc. Predicate
iscertain   check certainty of a square     GMAP-CERT
setcertain  set a square certain            GMAP-CERT-B
status      check contents of a square      GMAP

PREDICATE      Interpretation
BALL           Ball located
BALL-CERTAIN   A ball is to be made certain
BLANK-GRID     Place an empty in a grid square
CERTAIN-BALLS  Count of certain balls located
CONFLICT-B     Conflict has occurred placing a ball
DISPROVE-E     Evidence for an empty square is disproven
GMAP           Access to the contents of a grid square
GMAP-B         Ball location on the grid
GMAP-C         Conflict location on the grid
GMAP-CERT      Certainty of grid location
GRIDSIZE       Dimension of the grid
P-BALL         Place a ball on the grid
RMCB           Remove a conflict by placing a ball
SHOT-RECORD    Exit and entry point for a beam

Table 1: Predicates and User Defined Functions for Blackbox

Rule-Based Reasoning 705

Ball-Certain does not follow the form of rules as we have defined them. Therefore, Path Hunter will abstract Ball-Certain and split it into two rules: Ball-Certain%1 and Ball-Certain%2. The rule Ball-Certain%1 updates the grid to indicate that a ball in a particular location is to be considered a certain ball, a conflict is discovered, and the conflict is to be resolved:
Z^Ball-Certain%1 = {GMAP, GMAP-CERT, CERTAIN-BALLS, BALL-CERTAIN}, and
A^Ball-Certain%1 = {GMAP-CERT-B, RMCB, CERTAIN-BALLS}.
The rule Ball-Certain%2 updates the grid to indicate that a ball in a particular location is to be considered a certain ball:
Z^Ball-Certain%2 = {GMAP, GMAP-CERT, CERTAIN-BALLS, BALL-CERTAIN}, and
A^Ball-Certain%2 = {GMAP-CERT-B, CERTAIN-BALLS}.
The original rule contained a conditional on its RHS representing two different potential actions: the case where the ball made certain was successfully placed, and the case where there was a conflict when the ball was placed. Thus, Path Hunter created two abstract rules, each embodying one of the potential actions taken by the RHS of Ball-Certain.
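The abstraction just described reduces each rule to a pair of predicate sets. As a hypothetical illustration (this is not part of Path Hunter; the AbstractRule class and the normalized predicate spellings are assumptions made for the sketch), the two Ball-Certain abstractions can be written as:

```python
# Illustrative sketch: an abstract rule is a pair of predicate sets,
# Z (predicates required by the LHS templates) and A (predicates that
# may be asserted by the RHS).
from dataclasses import dataclass

@dataclass(frozen=True)
class AbstractRule:
    name: str
    Z: frozenset  # LHS template predicates
    A: frozenset  # RHS assertion predicates

ball_certain_1 = AbstractRule(
    "Ball-Certain%1",
    Z=frozenset({"GMAP", "GMAP-CERT", "CERTAIN-BALLS", "BALL-CERTAIN"}),
    A=frozenset({"GMAP-CERT-B", "RMCB", "CERTAIN-BALLS"}),
)
ball_certain_2 = AbstractRule(
    "Ball-Certain%2",
    Z=frozenset({"GMAP", "GMAP-CERT", "CERTAIN-BALLS", "BALL-CERTAIN"}),
    A=frozenset({"GMAP-CERT-B", "CERTAIN-BALLS"}),
)
```

The two abstractions share an LHS signature and differ only in whether RMCB (conflict resolution) appears among the asserted predicates, mirroring the conditional on the original rule's RHS.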
The predicate associated with the user defined function used in the condition that was on the RHS of Ball-Certain has been placed on the LHS of the abstract rules.
In order to reduce the computation and memory requirements needed to solve ill structured problems, they are typically decomposed into subproblems [Grossner, 1990]; we denote a subproblem by SP_t. Two of the subproblems for Blackbox are Beam Selection and Beam Trace. Each subproblem SP_t of P′ has its own set of rules within RB. We refer to the collection of rules { r_i | r_i is used to solve SP_t } as task T_t. Each subproblem SP_t of P′ has distinct states that represent acceptable solutions for the subproblem. States which represent an acceptable solution for SP_t are characterized by the presence of facts in WM that use specific predicates called end predicates. We denote the set of end predicates for SP_t by Z_t.

Definition 1 (Logical Completion) A logical completion for SP_t is a conjunction of selected predicates from Z_t denoting a state which is a meaningful solution to a subproblem.

We use the notation SP_t ⇝ U to denote a set of rules U that assert facts using all the predicates of a logical completion for SP_t. A logical completion for Beam Trace would be GMAP-B ∧ BALL.
We now turn our attention to the types of 'dependency' that can exist between two rules. As the original problem P′ is already decomposed into subproblems, we will only consider dependencies between rules in the same task. Intuitively, we say that one rule is dependent upon another if the action taken by one rule facilitates the other rule becoming fireable. The simplest form of dependency exists when one rule asserts a fact that is required by the LHS of another rule. At this point, we will consider only this simple form of dependency.
Definition 2 (Depends Upon) The relation depends upon between two rules r_i and r_j is denoted by r_i ≺ r_j, and it indicates that the RHS of r_i asserts a fact (α_i, l_i) that matches a template in the LHS of r_j, with the constraint that α_i ∉ Z_t. Formally:
r_i ≺ r_j ≡ ((A^ri ∩ Z^rj) ≠ ∅) ∧ (∀α)((α ∈ A^ri ∩ Z^rj) ⇒ α ∉ Z_t)

The condition (∀α)((α ∈ A^ri ∩ Z^rj) ⇒ α ∉ Z_t) placed on the depends upon relation restricts this relationship to rules that are in the same task.
A set of sample rules from the Blackbox Expert's rule base is shown in Table 2. These rules are part of the Beam Trace task. Two examples of the depends upon relation exist between RA-12-Right%1, RA-12-Left%1, and RA-12-Prep%1. More precisely, RA-12-Right%1 ≺ RA-12-Prep%1 and RA-12-Left%1 ≺ RA-12-Prep%1.
When considering the dependencies that exist among the rules in a task, we become concerned with grouping the rules according to the dependency relationship. Thus, for any rule in a task we desire the ability to identify those rules which it depends upon.

Definition 3 (Reachability) A rule r_j is reachable from a set of rules V if V contains all the rules that r_j depends upon. We use the notation V → r_j to indicate that r_j is reachable from the rules in V. Formally, V → r_j iff (∀r_i ∈ T_t)(r_i ≺ r_j ⇒ r_i ∈ V).

For RA-12-Prep%1 in Table 2, V = {RA-12-Right%1, RA-12-Left%1}.
We now consider the set of rules that enable a rule to fire. Informally, we say that a set of rules W enables a rule r_j when the rules in W assert facts causing r_j to fire. This set of rules W must satisfy a number of conditions for it to be an enabling-set for a rule r_j:
o Given r_j, then W ⊆ V; that is, r_j must depend upon every rule in W.
o Every rule in W must assert at least one fact that uses a predicate specified by the LHS of r_j, where a fact using that predicate is not asserted by any other rule in W. Formally, we say W is minimal if (∀r_i)(∀r_k)((r_i ≠ r_k) ⇒ (A^ri ⊈ A^rk)), where r_i, r_k ∈ W.
o For each predicate specified by a template on the LHS of r_j, if that predicate is used in a fact asserted by at least one rule, then some rule that asserts a fact using the predicate must be a member of W. Formally, we say that W is maximal if (∀α_u)(α_u ∈ Z^rj) ⇒ ((∃r_k)(r_k ∈ V ∧ α_u ∈ A^rk) ⇒ (∃r_i)(r_i ∈ W ∧ α_u ∈ A^ri)).

Definition 4 (Enablement) A set of rules W enables a rule r_j iff W is a minimal set of rules that assert facts matching the maximum number of templates in the LHS of r_j; we write W ≺ r_j to denote that W is an enabling-set for r_j. Formally, given r_j ∈ T_t and V → r_j, W ≺ r_j iff W ⊆ V and W is both minimal and maximal.

For RA-12-Prep%1 in Table 2 there are two enabling-sets: W_1 = {RA-12-Right%1} and W_2 = {RA-12-Left%1}. For Remove-Conflict%1 the enabling-set is {Place-Ball%1, Ball-Certain%1}.
A path in the rule base of an expert system must identify a sequence of rule firings that can occur when the expert system is solving a subproblem; thus, each path is composed of a sequence of rules that depend upon each other. The set of rules comprising a path must be defined such that each rule in the path is enabled by a subset of the set of rules comprising that path. It is desirable that each path represent a 'meaningful' thread of execution for the subproblem; thus, the rules in each path must assert facts using all the predicates of a logical completion for the subproblem.

Definition 5 (Path) A path P_k in task T_t is a partially-ordered set of rules (Φ, π) where:
Φ = {r_1, r_2, ..., r_n} with r_i ∈ T_t, (∃U ⊆ Φ)(SP_t ⇝ U), (∀r_i ∈ Φ)(∃W ≺ r_i, W ⊆ Φ), and (∀r_i ∈ Φ)(∃r_j ∈ Φ) such that [(∀a_k)(a_k ∈ A^ri ⇒ (a_k ∈ Z^rj ∨ a_k ∈ Z_t))].
π is a partial order indicating which rules in path P_k depend upon others: (∀r_i)((r_i π r_j) ⇒ r_i ≺ r_j).
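The depends-upon, reachability, and enablement relations of Definitions 2 through 4 can be rendered as a small Python sketch. This is illustrative only, not the Path Hunter implementation: rules are assumed to be dictionaries with predicate sets Z (LHS templates) and A (RHS assertions), and Zt is the task's end-predicate set.

```python
# Illustrative sketch of Definitions 2-4 over abstract rules given as
# {"name": ..., "Z": set_of_LHS_predicates, "A": set_of_RHS_predicates}.
from itertools import combinations

def depends_upon(ri, rj, Zt):
    """ri depends-upon rj: ri asserts a non-end predicate that rj's LHS requires."""
    shared = ri["A"] & rj["Z"]
    return bool(shared) and not (shared & Zt)

def reachable_from(rj, task, Zt):
    """V: every rule in the task that rj depends upon (Definition 3)."""
    return [ri for ri in task if depends_upon(ri, rj, Zt)]

def enabling_sets(rj, task, Zt):
    """Subsets W of V that are minimal and maximal in the sense of Definition 4."""
    V = reachable_from(rj, task, Zt)
    if not V:
        return []
    # LHS predicates of rj that some rule in V can supply
    coverable = rj["Z"] & set().union(*(ri["A"] for ri in V))
    found = []
    for k in range(1, len(V) + 1):
        for W in combinations(V, k):
            covered = rj["Z"] & set().union(*(ri["A"] for ri in W))
            maximal = covered == coverable
            # minimal: every rule in W supplies some LHS predicate of rj
            # that no other rule in W supplies
            minimal = all(
                (ri["A"] & rj["Z"])
                - set().union(*(rk["A"] for rk in W if rk is not ri))
                for ri in W
            )
            if maximal and minimal:
                found.append({r["name"] for r in W})
    return found
```

Run on the RA-12 example from Table 2, this sketch yields the two singleton enabling-sets for RA-12-Prep%1 noted in the text, and rejects the two-rule combination because neither rule contributes a predicate the other does not.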
The condition (∀r_i ∈ Φ)(∃r_j ∈ Φ) such that [(∀a_k)(a_k ∈ A^ri ⇒ (a_k ∈ Z^rj ∨ a_k ∈ Z_t))], which we have placed on the structure of a path, ensures that every fact that is asserted by a rule in a path either uses an end predicate or matches a template of the LHS of another rule in the path.

RA-12-Right%1:     Z = {GMAP, GRIDSIZE, SHOT-RECORD, BALL}; A = {RA-12}. Indicate the occurrence of a specific configuration on the Blackbox grid.
RA-12-Left%1:      Z = {GMAP, GRIDSIZE, SHOT-RECORD, BALL}; A = {RA-12}. Indicate the occurrence of a specific configuration on the Blackbox grid.
RA-12-Prep%1:      Z = {RA-12}; A = {P-BALL, BLANK-GRID, BALL-CERTAIN}. Place a ball, mark a location as empty, and indicate that the ball is a certain ball.
Place-Ball%1:      Z = {GMAP, GMAP-CERT, P-BALL}; A = {CONFLICT-B, GMAP-C}. Update the grid to indicate that placing a ball has created a conflict.
Place-Empty%2:     Z = {GMAP, GMAP-CERT, BLANK-GRID}; A = {DISPROVE-E}. Update the grid to indicate that evidence for an empty grid square is disproven.
Remove-Conflict%1: Z = {CONFLICT-B, RMCB}; A = {GMAP-B, BALL}. A certain ball is placed in a square with a conflict, and the conflict is resolved.

Table 2: Example Rule Set

Figure 2: An Example Path

The path formed by the rules in Table 2, as found by Path Hunter, is shown in Figure 2. This path represents the combined actions of six rules. These rules recognize a particular configuration on the Blackbox grid, indicate that a ball should be placed on the grid, indicate that the ball is certain, indicate that a location is to be marked as empty, resolve a conflict that occurs when the ball is placed, and disprove that the location should be empty. The logical completion that is asserted by this path is DISPROVE-E ∧ GMAP-C ∧ GMAP-B ∧ BALL ∧ CERTAIN-BALLS ∧ GMAP-CERT-B.

Experiences with Path Hunter

Path Hunter has been used to analyse the structure of the Blackbox Expert's rule base. This rule base contains 442 CLIPS rules, which formed 512 abstract rules. The abstract rules formed 72 equivalence classes (explained below) as well as 170 rules not in any equivalence class, from which Path Hunter found 516 paths. The paths produced by Path Hunter have been verified by the rule base designer as being accurate and meaningful; that is, they capture the original intent with which the rules in the path were specified, and the rules that are depicted in the paths combine together as intended. A typical path contains 5-6 rules, 2-3 branches, and has a length of 4 rules.
The problem of combinatorial explosion arises when the cardinality of U for many of the rules in a task is large. When the cardinality of U is large, there will be a large number of potential combinations of the rules in a task to form a valid path. Of course, Path Hunter must check all of these combinations. One method that was used to control combinatorial explosion was to form equivalence classes of rules.
When the rules in a rule base are abstracted, some rules will have the same LHS and RHS. Rules that have the same LHS and RHS are said to form an equivalence class. Rules RA-12-Right%1 and RA-12-Left%1, shown in Table 2, form an equivalence class called RA-12-Class%1. The path shown in Figure 2 contains this equivalence class. Thus, the path shown in Figure 2 represents two paths that can be observed when the Blackbox Expert's rule base is executed: one path starting with RA-12-Right%1, and one path starting with RA-12-Left%1. Therefore, equivalence classes reduce the number of paths that must be produced by Path Hunter, with no loss of generality.
Controlling combinatorial explosion can require modifications to the logical completions. When a logical completion is too general, many different paths will be formed that assert this logical completion. Thus, a combinatorial explosion may result.
In this case, the rule base designer can control the combinatorial explosion by creating several more specific logical completions. This new set of logical completions will lead Path Hunter to create a set of paths for each new logical completion, where the total number of paths for all the new logical completions is less than the number of paths that would have been created for the original logical completion. In effect, the paths to be created using the original logical completion are broken down into smaller paths, where there are fewer potential combinations of rules for creating these smaller paths, resulting in fewer total paths produced. Nevertheless, these smaller paths are still meaningful.
The process of applying Path Hunter to the Blackbox Expert's rule base also served to identify various anomalies: the improper use of predicates, undesired interactions between rules, and rules which were not considered to be part of any path due to programming inconsistencies. In some cases, it was determined that the same predicate had been used within the rule base to reflect slightly different semantics. Thus, while the rule base designer had intended to represent two distinct situations, an undetected ambiguity had occurred. These ambiguities also led to undesired potential interactions between the rules in the rule base. One of the rules in the Blackbox Expert's rule base did not appear in any path because it was dependent upon rules in the Beam Trace task, but asserted a fact that used an end predicate from the Beam Selection task. This situation indicated a poor design for the rule in question. The use of Path Hunter to analyse the Blackbox Expert's rule base also provided a method to validate the design of the rule base by indicating these inconsistencies and ambiguities.
Our experience with Path Hunter points to the need for a well defined approach for the engineering of rule based systems.
As the problem to be solved by a rule based system is analysed and a preliminary design for the rule base is created, various issues must be tackled. The modules that will comprise the rule base should be specified as the subproblems that comprise the original problem are understood. The predicates to be used in the construction of the rule base will play a central role in defining the structure of the rule base. It is very important that the semantics attached to each predicate be clear and unambiguous. In addition, predicates must be chosen to ensure that the states which indicate an acceptable solution to a subproblem are unambiguous. Otherwise, the logical completions will be ambiguous, and paths that do not reflect the intent of the rule base designer will be present in the rule base.

Conclusion

The rule base execution paths defined in this paper meet the requirements for the validation and performance analysis of rule based expert systems. Paths are well-defined because our rule-dependency relations, depends upon and enablement, are unambiguous and accurately capture potential rule firing sequences. Paths are meaningful because each path is associated with a logical completion indicating a significant state in the problem-solving process. Paths are computable because the system designer, using logical completions and equivalence classes, can control the complexity of path enumeration.
Path Hunter has been used to analyse the structure of the Blackbox Expert's rule base (512 abstract rules). The use of logical completions and equivalence classes proved effective for controlling combinatorial explosion.
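The equivalence-class optimization amounts to grouping abstract rules by identical (LHS, RHS) predicate signatures. The following is a hypothetical sketch of that grouping (illustrative only; the function name and rule encoding are assumptions, not part of Path Hunter):

```python
# Illustrative sketch: group abstract rules whose (Z, A) predicate-set
# signatures coincide; each group is an equivalence class.
from collections import defaultdict

def equivalence_classes(rules):
    """rules: iterable of (name, Z, A) with Z, A predicate sets.

    Returns a list of name lists; a list with more than one member is a
    non-trivial equivalence class, a singleton is a rule in no class.
    """
    groups = defaultdict(list)
    for name, Z, A in rules:
        groups[(frozenset(Z), frozenset(A))].append(name)
    return list(groups.values())

rules = [
    ("RA-12-Right%1", {"GMAP", "GRIDSIZE", "SHOT-RECORD", "BALL"}, {"RA-12"}),
    ("RA-12-Left%1",  {"GMAP", "GRIDSIZE", "SHOT-RECORD", "BALL"}, {"RA-12"}),
    ("RA-12-Prep%1",  {"RA-12"}, {"P-BALL", "BLANK-GRID", "BALL-CERTAIN"}),
]
# RA-12-Right%1 and RA-12-Left%1 fall into one class (cf. RA-12-Class%1)
```

A path enumerator can then treat each class as a single node, which is how one abstract path in Figure 2 stands for two concrete execution paths.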
Our experience with Path Hunter points to the need for a well-defined approach for the engineering of rule-based systems, where the subproblems (or modules) required to solve the problem, the appropriate solutions for the subproblems, and the predicates to be used in constructing the rule base are clearly specified as early as possible during its development.

References

Chang, C. L.; Combs, J. B.; and Stachowitz, R. A. 1990. A report on the Expert Systems Validation Associate (EVA). Expert Systems with Applications (US) 1(3):217-230.
Durfee, Edmund H.; Lesser, Victor; and Corkill, Daniel D. 1987. Coherent cooperation among communicating problem solvers. IEEE Transactions on Computers C-36(11):1275-1291.
Durfee, Edmund H.; Lesser, Victor R.; and Corkill, Daniel D. 1989. Trends in cooperative distributed problem solving. IEEE Transactions on Knowledge and Data Engineering 1(1):63-83.
Fox, Mark S. 1981. An organizational view of distributed systems. IEEE Transactions on Systems, Man, and Cybernetics SMC-11(1):70-80.
Giarratano, J. and Riley, G. 1989. Expert Systems: Principles & Programming. PWS-KENT.
Gokulchander, P.; Preece, A.; and Grossner, C. 1992. Path Hunter: A tool for finding the paths in a rule based expert system. DAI Technical Report DAI-0592-0012, Concordia University, Montreal, Quebec.
Grossner, C.; Lyons, J.; and Radhakrishnan, T. 1991. Validation of an expert system intended for research in distributed artificial intelligence. In 2nd CLIPS Conference, Johnson Space Center.
Grossner, C.; Gokulchander, P.; and Preece, A. 1992a. On the structure of rule based expert systems. DAI Technical Report DAI-0592-0013, Concordia University, Montreal, Quebec.
Grossner, C.; Lyons, J.; and Radhakrishnan, T. 1992b. Towards a tool for design of cooperating expert systems. In 4th International Conference on Tools for Artificial Intelligence.
Grossner, C. 1990. Ill structured problems. DAI Technical Report DAI-0690-0004, Concordia University, Montreal, Quebec.
Kiper, James D. 1992. Structural testing of rule-based expert systems. ACM Transactions on Software Engineering and Methodology 1(2):168-187.
Lesser, Victor R. and Corkill, Daniel D. 1981. Functionally accurate, cooperative distributed systems. IEEE Transactions on Systems, Man and Cybernetics SMC-11(1):81-96.
Rushby, John and Crow, Judith 1990. Evaluation of an expert system for fault detection, isolation, and recovery in the manned maneuvering unit. NASA Contractor Report CR-187466, SRI International, Menlo Park, CA. 93 pages.
Shoham, Yoav 1991. Agent0: A simple agent language and its interpreter. In Proc. National Conference on Artificial Intelligence (AAAI-91). 704-709.
Simon, Herbert A. 1973. The structure of ill-structured problems. Artificial Intelligence 4:181-201.
Supporting and Optimizing Full Unification in a Forward Chaining Rule System

Howard E. Shrobe
Massachusetts Institute of Technology
NE43-839, Cambridge, MA 02139
hes@zermatt.lcs.mit.edu

Abstract

The Rete and Treat algorithms are considered the most efficient implementation techniques for Forward Chaining rule systems. These algorithms support a language of limited expressive power. Assertions are not allowed to contain variables, making universal quantification impossible to express except as a rule. In this paper we show how to support full unification in these algorithms. We also show that: Supporting full unification is costly; Full unification is not used frequently; A combination of compile time and run time checks can determine when full unification is not needed. We present data to show that the cost of supporting full unification can be reduced in proportion to the degree that it isn't employed and that for many practical systems this cost is negligible.

1 Introduction

Relatively efficient mechanisms have been developed for the implementation of forward chaining rules [1; 4]. However, these mechanisms have mainly been used in the implementation of the OPS family of production system languages. Languages in this family have limited expressive power: they are pure forward chaining languages in which assertions are restricted to ground terms.¹ In this paper we explore the use of these mechanisms in more expressive languages in the tradition of [7; 2; 3; 6]. Such languages work by pattern directed procedure invocation. They center around a database of assertions accessed by forward and backward chaining rules as well as by normal procedural code using a Tell and Ask interface. The bodies of rules in such languages may be full procedures.
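The Tell-and-Ask, pattern-directed style described above can be sketched minimally as follows. This is an illustrative Python sketch with names of our own choosing (Database, tell, ask); it is not Joshua's actual interface.

```python
# Minimal sketch of a Tell/Ask assertion database with pattern-directed
# forward rules: telling an assertion triggers any rule whose pattern
# matches it, and rule bodies are ordinary procedures.
class Database:
    def __init__(self):
        self.assertions = set()
        self.forward_rules = []   # list of (pattern-test, body) pairs

    def tell(self, assertion):
        """Add an assertion and run the bodies of matching forward rules."""
        if assertion in self.assertions:
            return
        self.assertions.add(assertion)
        for matches, body in self.forward_rules:
            if matches(assertion):
                body(assertion, self)   # bodies may Tell further assertions

    def ask(self, test):
        """Return all assertions satisfying a query predicate."""
        return [a for a in self.assertions if test(a)]

db = Database()
# Forward rule: whenever (parent x y) is told, assert (ancestor x y).
db.forward_rules.append((
    lambda a: a[0] == "parent",
    lambda a, d: d.tell(("ancestor", a[1], a[2])),
))
db.tell(("parent", "tom", "bob"))
print(db.ask(lambda a: a[0] == "ancestor"))  # [('ancestor', 'tom', 'bob')]
```

Assertions here are plain ground tuples; the rest of the paper is about what changes when they may instead contain variables.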
Such languages naturally fall into the full unification case: Assertions containing variables take on the force of universally quantified statements and these may match the patterns of either forward or backward chaining rules. However, the designers of the OPS family of languages did not choose the limitation to the semi-unification case naively. Full unification significantly complicates the Rete mechanisms and leads to two forms of inefficiency:
o The resulting code executes more slowly.
o The resulting code is significantly larger.

¹In the remainder of the paper, we will refer to a language which only allows ground terms in assertions as a "semi-unification" language. If assertions may contain variables we will refer to the language as a "full-unification" language.

710 Shrobe

Moreover, in many domains the semi-unification case closely approximates the needed expressive power; full unification is rarely required. In this paper, we show that one can have both the expressive power of full unification and the efficiency of the techniques developed for the more limited semi-unification case. The mechanisms described here have been implemented and extensively used in the Joshua system [5].

The outline for the remainder of the paper is as follows: In section 2 we begin by reviewing the conventional Rete network, describing it as a mechanism for incrementally computing unifications between a set of patterns and a set of assertions (an unconventional viewpoint). We then show in section 2.1 that conventional Rete networks are an optimization of our viewpoint for the semi-unification case. In section 3, we describe our extensions to support full unification; section 4 presents data to show the degree of inefficiency introduced by our extensions. The next two sections present techniques for addressing the inefficiencies. Section 5 presents a set of run time optimizations that dynamically identify when semi unification alone is adequate; we show that this reduces the first form of inefficiency to negligible levels. Section 6 shows how the rule compiler can statically segregate out portions of the rule system which can be compiled under the semi unification assumption; we show that this reduces the second form of inefficiency to acceptable levels.

2 Rete Networks

We assume that the reader is familiar with Rete networks and related techniques (see, for example, [1]). In describing Rete networks, our terminology will be somewhat non-standard; we will refer to unification rather than matching in an attempt to show how our extensions fit within the pattern of the original algorithm.

Rete networks incrementally maintain the partial triggering states of rules as new assertions are added and deleted. A rule is fully triggered when its set of patterns unify with a set of statements asserted in the database. Partial triggering states contain two kinds of information: 1) the unifications between individual patterns and individual assertions and 2) extended unifications between subsets of a rule's patterns and sets of assertions. Figure 1 shows a pair of rules and the corresponding Rete network. The network contains two sections: Match and Join. Each of these incrementally updates its internal state each time a new token is added to (or deleted from) its input nodes.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Figure 1: A Typical Rete Network

The match section is a discrimination network whose terminal nodes (the match nodes) compute unifications between rule patterns and assertions in the database. State is stored only at the match nodes (these are the alpha memories of [1]). Patterns from different rules which are variants (i.e. identical up to variable renaming) share match nodes; two patterns which share leading terms share a path through the network up to the point of divergence. The tests made by the nodes above the match nodes filter out assertions which cannot possibly unify with the pattern of the match nodes below them.

The Join network begins at the Match nodes. State is stored at all join nodes (these are the beta memories in the terminology of [1]). Each node of the join section merges the partial unifications represented by its two parent nodes, checking that the shared variables are unifiable. Each terminal node of the join network corresponds to a complete set of rule patterns. If two rules share leading patterns, they share join nodes up to the point of divergence. The Rete network compiler emits code for each node in the network to perform the above functions.² The code for match and join nodes performs the indicated unifications; it also packages up the results into state tokens stored at the node.³

²In the Joshua system, the methods for generating this code are customizable by the user; this is part of the Protocol of Inference; see [5].

³Joshua also allows user-supplied procedural condition elements; these require a special node type. Procedural nodes are attached to a single parent node; they contain the original procedure surrounded by supporting code which generates a new token each time the procedure "succeeds". For brevity, we will not further discuss these condition elements.

2.1 Optimizations for the Semi Unification Case

In classic Rete networks all assertions contain only ground terms and therefore no variable in a rule's pattern may ever be bound to another variable. Under these conditions the Rete network can be viewed as computing relational selects (at the match nodes) and relational joins (at the join nodes) (as pointed out in [4]). Operationally, matching reduces to: 1) checking that constants in the assertion are equal to corresponding constants in the pattern and 2) checking that terms of an assertion which match different occurrences of the same variable are equal.

When assertions contain only ground terms the discrimination nodes perform part of the unification by testing for equality between constants in the pattern and constants in the assertion. Therefore, the code at a match node may omit these tests. Similarly, the tests at the join nodes can be reduced to checking that variables shared between the parents of the join are bound to equal values. Hashing (or other forms of indexing) may be used to speed up the join computation. Each parent of a join node maintains a hash-table of tokens; the key for this table is a list of the values of the shared variables. In the semi-unification case, a hash probe will find the precise set of unifiable tokens; therefore, no other code is needed at the join nodes. Finally, in the semi-unification case, the state tokens need not contain a variable binding environment; each variable can be identified with a particular term from one of the matching assertions. These optimizations are not fully available in the full-unification case.

We have described the Rete algorithm in a very general context, that of computing and extending unifications. The semi-unification case allows a variety of optimizations to be made by replacing unification with equality tests. To extend the traditional Rete algorithm to support the full-unification case we must undo these optimizations, replacing equality checks by unifications. The following questions must be addressed:
o How are logic variables represented?
o What code is compiled to conduct the unification matching?
o How are state-tokens represented and computed?
o How do auxiliary indices (e.g. hash-tables at join nodes) handle logic variables?
Having made these choices we will then need to see what impact they have on the components of the Rete network.

3.1 Data Structures and Basic Operations
3.1.1 Representation of Logic Variables

We adopt a representation for logic variables based on the Prolog oriented techniques of the Warren Abstract Machine [8]. Logic variables are represented as pointers to their values; an unbound logic variable points to itself. An unbound logic variable is unified with a value by making it point to the value; this value might be another unbound logic variable which might later be bound to a value, leading to a chain of logic-variable pointers as shown in figure 3. To find the value of a logic variable one must follow the chain of pointers until encountering either a value which is not a logic variable or a logic variable which points to itself. This operation is referred to as dereferencing. A logic variable must be dereferenced before its use.

Rule-Based Reasoning 711

(lambda (assertion)
  (with-unification
    (with-unbound-logic-variables (?x)
      (unify 'P (dereference (pop assertion)))
      (unify ?x (dereference (pop assertion)))
      (unify 'a (dereference (pop assertion)))
      (unify (dereference ?x) (dereference (pop assertion)))
      (unify 'b (dereference (pop assertion)))
      ... code to be executed upon success ...)))

Figure 2: Full Unification Code Corresponding to [P ?x a ?x b]

When a logic variable is bound, an entry consisting of the logic variable is made on a stack called the trail. Before a pattern matching operation is begun, the level of the trail is saved. To return to the binding state which obtained at the beginning of the operation (e.g. when the unification fails) each logic variable above the marked point on the trail is reset to point to itself and the trail level is reset to the marked point. This operation is usually called unwinding the trail, or untrailing.

3.1.2 Implementation of Unification

The match compiler is responsible for emitting the unification code corresponding to a pattern.
When given an assertion to match, the code must fail if the pattern and the assertion are not unifiable; otherwise it must succeed and bind the logic-variables of the pattern to the values implied by the unification. Figure 2 shows the code emitted for the pattern [P ?x a ?x b]. In this code, With-unification establishes a unification context (i.e. it notes the level of the trail on entry and unwinds the trail to that level upon exit; also it establishes a catch tag which is thrown to in the event of failure). With-unbound-logic-variables creates a set of new logic-variables (typically these are stack allocated). Notice that each term of the assertion must be dereferenced before the call to Unify since the term might be a logic variable. Unify is called with atomic elements (including logic variables) as the first argument; when the pattern contains compound terms, the match compiler must recurse into the substructure of these terms. For simplicity of presentation we omit the details; see [8]. The behavior of Unify is as follows:
o If neither argument is a logic-variable, then UNIFY succeeds if the arguments are EQUAL and otherwise fails.
o If exactly one argument is a logic-variable, UNIFY succeeds, the logic-variable is bound to the other argument and a trail entry is made.⁴
o If both arguments are logic-variables then one is bound to the other, a trail entry is made and UNIFY succeeds. (If both logic-variables are stored on the stack, then the one pushed more recently must point to the one more deeply nested.)

⁴The unification is only allowed if the variable does not occur within the structure of the other argument. Prolog implementations typically skip this "occurs" check for efficiency, as do we in our implementation.

Pattern:                 [P ?x ?y ?y 1]
Assertion:               [P (?a . ?b) ?a ?b ?b]
Most Canonical Unifier:  [P (1 . 1) 1 1 1]
Unifying Substitutions:  ?a ← 1, ?b ← 1

Figure 3: A Unification and its Implementation Level View

Failing is accomplished by throwing the value NIL to a catch-tag for FAIL. This is normally established by with-unification; this causes the trail to be unwound.⁵

3.1.3 Saving the Binding State in Tokens

The code emitted by the Rete network compiler for a match node tests whether the triggering assertion can be unified with the rule pattern; if so it produces a state-token containing the bindings of the pattern's logic variables. Consider the unification shown in figure 3. The variable ?x of the rule's pattern is unified with the list (?a . ?b) of the assertion; this list contains variables which are bound to ground terms (e.g. 1). The value of ?x is valid, therefore, only as long as ?a and ?b continue to be bound to 1. However, ?a and ?b are contained in a database assertion whose intent is to state a universal quantification. Therefore, the binding of ?a and ?b must be untrailed and the values of their current bindings must be preserved elsewhere.⁶ Notice that this is quite a bit more expensive than the semi-unification case where the assertion can serve as an adequate representation of the binding state as explained in 2.1.

To preserve the volatile binding state over a longer duration, state-tokens maintain an environment of logic-variable values with a slot for each variable in the pattern. Each slot is filled with the unified value of its corresponding logic-variable. The unified value of a logic-variable is computed as follows:
o The logic-variable is dereferenced.
o If the variable is unbound, its unified value is a new logic-variable. All occurrences of a particular unbound logic-variable have the same new logic-variable as their unified value.⁷

⁵In the implementations of Joshua on Symbolics equipment, Dereference is a microcoded instruction.
In implementations on more conventional machines it would be implemented either as a subroutine or an inline code fragment; either approach is both slower and consumes more instructions. Our measurements are made on Symbolics equipment, yielding more favorable results for the full-unification case than would result on more conventional machines.

⁶I.e. our implementation uses a shallow binding scheme for logic variables but needs to preserve their values beyond the dynamic extent.

⁷Unbound logic-variables are replaced by new variables to prevent sharing of logic-variables held in state tokens with those in assertions (or other state-tokens). Were this not done, the unifications performed at join nodes would unintentionally bind the variables in the assertions. Resolution systems rename variables in the resolvent for the same reason.

o If the variable is bound and the bound value is atomic, the unified value is the bound value.
o If the bound value is a compound data-structure, then the sub-structure is traversed replacing each term by its unified value.

In the example of figure 3 the logic-variable ?x is bound to the pair (?a . ?b). But ?a is bound to ?b which is in turn bound to 1; so the unified value of ?x is (1 . 1), a value which persists even after unwinding the trail.

3.2 Extending the Algorithm

3.2.1 The Discrimination Network

We begin with the discrimination nodes of the Match network. Each discrimination node dispatches on the value of a term in the assertion; see figure 1. For large branching factors, a hash table is an appropriate implementation. If the term being dispatched on is a constant then it serves as the hash-key. The value retrieved is the next discrimination node to visit. If there is a branch for the key *variable* (indicating a rule pattern with a variable at this position), this must also be followed. Notice that the term being discriminated on may itself be a logic-variable. In this case all outgoing branches must be followed, since a variable can match anything.⁸

⁸We do not attempt to carry along the variable bindings while traversing the discrimination network.

3.2.2 The Match Nodes

The discrimination network search discards most match nodes that don't unify with the assertion; however, some non-unifiable match nodes may still be reached. For example:

Rule Pattern: (P a ?c b ?c)
Assertion: (P ?x c ?x c)

The discrimination network treats each occurrence of ?x in the assertion as independent, allowing this assertion to reach the match node although it isn't unifiable with the pattern. Notice that the inconsistency between the assertion and the pattern occurs at constant terms in the pattern. As mentioned in section 2.1, this can never happen in the semi-unification case and the match code need only check the positions corresponding to variables. In contrast, the full unification match code must perform the entire unification as explained in section 3.1.2 (i.e. tests must be generated for both constant and variable positions). It must also save the results in a binding vector by copying out the unified values, as explained in section 3.1.3. The match compiler, therefore, emits code containing two sections: the first conducts the unifications, the second creates the binding vector and fills it with the unified values of the logic variables.

3.2.3 The Join Nodes

Join nodes are extended in a similar manner. The rule compiler generates a map for each join node specifying
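The logic-variable machinery of section 3.1 (pointer-chain variables, dereferencing, binding with a trail, and untrailing) can be sketched as follows. This is a Python sketch with structure and names of our own; it is not Joshua's implementation, and like the paper's implementation it omits the occurs check.

```python
# Logic variables as pointers: an unbound variable points to itself.
class LogicVar:
    def __init__(self):
        self.value = self

def dereference(term):
    # Follow the pointer chain until hitting a non-variable, or an
    # unbound variable (one that points to itself).
    while isinstance(term, LogicVar) and term.value is not term:
        term = term.value
    return term

trail = []                         # stack of variables bound so far

def bind(var, value):
    var.value = value
    trail.append(var)              # record the binding so it can be undone

def unwind_trail(mark):
    # Reset every variable bound since `mark` (e.g. after a failure).
    while len(trail) > mark:
        v = trail.pop()
        v.value = v                # unbound again

def unify(a, b):
    # Occurs check omitted, as in the implementation described above.
    a, b = dereference(a), dereference(b)
    if isinstance(a, LogicVar):
        bind(a, b)
        return True
    if isinstance(b, LogicVar):
        bind(b, a)
        return True
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        return all(unify(s, t) for s, t in zip(a, b))
    return a == b                  # ground terms reduce to an equality test

# Pattern [P ?x a ?x b] against the ground assertion [P c a c b]:
x = LogicVar()
mark = len(trail)
print(unify(("P", x, "a", x, "b"), ("P", "c", "a", "c", "b")))  # True
print(dereference(x))                                           # c
unwind_trail(mark)   # caller restores the binding state afterwards
```

On failure this `unify` leaves partial bindings on the trail; as in the paper, it is the caller's job (With-unification's, in Joshua) to unwind back to the saved mark.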
l’s ?Y ?x ?y ?z) environment ?x - 1 ?y - 2 ?z -L 3 ?z ?w) environment ?y - 1 ?z - 2 ?w - 3 (lambda (token-l token-2) (with-unification ;; unify ?y from 1 with ?y from 2 (unify (token-slot token-l 2) (token-slot token-2 1)) ;; unify ?z from 1 with ?z from 2 (unify (token-slot token-l 3) (token-slot token-2 2)) (let ((new-token (make-new-token :n-variables 3))) ;; copy ?x (setf (token-slot new-token 1) (copy-unified-value (token-slot token-l 1))) ;; COPY ?Y (setf (token-slot new-token 2) (copy-unified-value (token-slot token-l 2))) ;; copy ?z (setf (token-slot new-token 3) (copy-unified-value (token-slot token-l 3))) ;; copy ?w (setf (token-slot new-token 4) (copy-unified-value (token-slot token-2 3))) new-token))) Figure 4: Join Code for The F’ull Unification Case which variables from the two parent nodes are to be uni- fied. The compiler emits code to perform these unifications and to copy the unified values of all the variables into a new state-token. Figure 4 shows a join to be performed and the corresponding code generated by the rule compiler. Many Rete network implementations use hashing (or other indexing) to speed up the join computation, as ex- plained in section 2.1. In the full unification case, any of the shared variables in either token might be an unbound logic-variable. Unlike ground terms, two distinct logic vari- ables might match; a list of the values of the shared vari- ables is, therefore, not an adequate retrieval key. A simple extension which solves this problem is as follows: e When storing a new token in a node: - If any of the shared variables are unbound, then hash the token under a special key: *unbound- variable *. - Otherwise ues as the the list of the shared variable val- When looking for stored tokens to’ join with a new token: - If any of the new token’s shared variables are un- bound then look at every token stored in the other parent node. 
  - Otherwise form a key which is the list of shared variables and attempt to join with every stored token hashed in the other parent node under this key. Also attempt to join with every token hashed under the key *unbound-variable*.

3.2.4 Compiling Rule Bodies

Forward rule bodies may contain normal procedural code which references the logic variables of the patterns. In the full unification case, variables referenced in the body of a rule may be left unbound by the matching process; they therefore must be treated as logic variables and be dereferenced before being used. The values of the logic-variables are stored in the environment of the triggering state token. The rule compiler, therefore, first emits a prologue which fetches the variable values from the environment into local variables. The rest of the rule body code is transformed so that every reference to a logic-variable is wrapped within a call to dereference.

Table 1: Run Times of Full and Semi Unification Code

                     Time in Rete Network     Total Run Time
example              Semi       Full          Semi        Full
Natural Deduction    14,114     84,725        167,816     360,740
Troubleshooting      79,353     346,726       645,634     943,602
Cryptarithmetic      851,288    2,872,276     3,489,086   7,309,309
Planning             209,308    739,605       891,742     1,413,495

4 Full Unification Support is Costly

These extensions lead to semantically correct behavior in the full unification case. However, as can also be seen, each extension removes a constraint of the semi-unification case which was used in optimizing the original algorithm. Table 1 shows the relative performance of the matching and merging portions of a number of demonstration systems. Table 2 shows the relative sizes of the generated code for a variety of systems. The following are among the causes of this difference:
The discrimination network in the match network acts only as a partial filter in the full unification case (due to the possibility of logic-variables in the assertions, see section 3.2.1). The match code generator cannot assume that every constant has already been checked and must generate checks for the constants as well as the variables. Hashing alone is a sufficient join test in the semi- unification case. In the full-unification case this is not true. A rather bulky merge procedure must be still be generated and called, see section 3.2.3. The code generated for rule bodies must replace vari- able references by calls to dereference. This incurs both increased code size and slower performance. In the semi-unification case, state tokens need not con- tain environments. In the full unification case environ- ments must be built and values copied between them. Calls to copy-unified-vahe must be used to copy logic- variable values to new state tokens. If variables are bound to compound data-structures this incurs the cost of traversing and incrementally rebuilding those sub-structures containing logic-variables. In the semi-unification case, match procedures typ- ically check only the terms corresponding to vari- ables in the pattern; constants in the pattern are ig- nored since the discrimination nodes check them. This means that two match nodes which have the same pattern of variable occurrences but distinct constants may nevertheless share match procedures. This is not the case for the full unification case, only variant pat- terns may share match procedures. ynamic Optimizations The programming style of typical knowledge based sys- tems infrequently employs the full unification case. Un- fortunately, the expressiveness of the full unification case leads to code with much worse run-time performance. This section addresses one approach to this problem: optimizations performed at run-time. 
We have extended the Rete network compiler to generate two sets of procedures for each of the match and join nodes. The first of these is the less efficient but more general code capable of handling the full unification case; the second procedure handles only the semi-unification case, but is considerably more efficient. The Rete network interpreter is responsible for dispatching to the semi-unification procedure if allowable; otherwise it must call the full unification procedure. To make this decision, our system checks each newly created assertion for the presence of logic-variables and stores the result in the data-structure representing the assertion. (The check must be made in any event, since all logic-variables in a database assertion must be copied so as to be unique to that assertion.) At a match node, the Rete interpreter uses this information to determine which procedure to call.

When a new state-token is created, we check whether any element of the environment is an unbound logic variable; the token is marked with the result of this check. At a join node the semi-unification code is called if both input tokens are marked as logic-variable free. Notice that only the full unification procedures need to check for the presence of unbound logic-variables in the output token since in the semi-unification case the output will necessarily contain only ground terms.

The crucial question for a dynamic optimization is whether the cost of detecting the opportunity swamps out the resulting benefit. In this case, the detection cost is that incurred in checking assertions for non-ground terms and (if we're running a full-unification procedure) the cost of a similar check on any newly generated token. Metering indicates that this consumes about 1.5% of total run time. To test the efficacy of dynamic optimization, we ran a rule-based natural deduction system on three versions of the same problem.
The first version uses only ground terms, the second has a mix of ground terms and terms with variables and the third version is completely quantified. In the first case, all matches and joins used the semi-unification code and ran the problem 6.9 times faster than would the full-unification code. In the second case, 86% of the matches and 59% of the joins used the semi-unification code with a resulting speedup of 2.4. In the last case, of course, all matches and joins used the full-unification code. These results show that the system is highly effective in dynamically identifying when the semi-unification code can be utilized.

6 Static Optimizations

The dynamic optimizations incur the additional cost of generating two procedures at each node. A traditional production system would generate only the semi-unification code; but our system also generates code to support the rarer case of full unification. As table 2 indicates this code is considerably larger. Furthermore, we generate only a single version of the code for rule bodies, which is required to be the slower and bulkier code capable of handling full unification. In this section, we discuss how we use information available at compile time to help the rule compiler determine which nodes of the network (as well as which rule bodies) will never encounter the full-unification case.

Table 2: Compiled Code Size of Full and Semi Unification Code

                        Semi Unification                        Full Unification
example          Rules  Matchers Size  Mergers Size  Proc.  Rule Body  Matchers Size  Mergers Size  Proc.  Rule Body
Circuits            10      2     37      2     44      0       212        7    364      2    260      0       214
Cryptarithmetic     59      7    188     34    979    307      2251       14    703     34   6816    628      2269
Midsummer            7      3     56      3     68     46       124        6    264      3    372     73       124
Discrete Event       7      4     81      4     94     20       237        7    361      4    544     36       237
Ht Atms              7      4     82      3     66     64       153        8    412      3    390    120       153
Ht Ltms             15      5    120      6    146     80       391       15    754      6    849    169       397
NatDed               7      6    266      4    109      0       425        8    508      4    672      0       425
All                123     36    950     50   1380    587      4009       74   3797     50   9186   1102      4035
This lets us generate only the more efficient semi-unification code at any node or rule body so classified.

In the Joshua system, all operations of the system (including the operations of the rule compiler) are driven by the class of the assertion (or pattern) being processed. The classes are identified with the predicate of the assertion and are, in fact, CLOS classes; see [5] for details. The data structures used to represent rule patterns and database assertions are instances of these CLOS classes. One such predicate class (which can be "mixed in" to any other assertion class) is no-variables-in-data-mixin. If one tries to enter an assertion of this type into the database, an error is signalled. Rule patterns which are instances of this class can therefore reliably assume that any triggering assertion will contain only ground terms. This information is used at rule compilation time to determine that a match node will be "semi unification only". A join node whose parent nodes are both "semi unification only" will also be "semi unification only". If a terminal node of the Rete network is "semi unification only" then all rule bodies connected to that node are also "semi unification only".⁹

Table 2 shows the relative sizes of the full-unification and semi-unification code for 7 systems. In aggregate the full unification matchers are 4 times larger than the semi-unification matchers; the full unification mergers are 6.7 times larger. With static optimizations applied, our system generates no full unification code for most of the systems (this is optimum; these systems never use the full unification capability). The system is forced to generate both versions of the code for the one subsystem which does take advantage of the full unification. In aggregate, this saves 86% of the matching code and 92% of the merging code.

7 Discussion

We have shown that the expressiveness of full unification can be supported with very limited cost in efficiency.
This result depends on the statistics of assertion usage: most assertions contain only ground terms. This allows us to generate optimized procedures for the semi-unification case. If the triggering assertions all contain only ground terms then the more efficient semi-unification code is called; this is very frequently the case. The system's performance gracefully degrades as universally quantified assertions are entered in the database. Also we have shown that when extra information is conveyed to the system at compile time, we can avoid generating the bulkier code for the general case. Furthermore, we have indicated that in many practical cases, this a priori information is obtainable.

⁹Our system also supports user supplied procedural condition elements. If such a node's parent is semi-unification only and it introduces no new logic variables, then the node itself is semi-unification only.

References

[1] C. Forgy. Rete: A fast algorithm for the many pattern/many object pattern match problem. Artificial Intelligence, 19:17-38, 1982.
[2] C.E. Hewitt. Description and theoretical analysis (using schemata) of PLANNER: A language for proving theorems and manipulating models in a robot. Technical Report AI-TR-258, MIT Artificial Intelligence Laboratory, 1972.
[3] D.V. McDermott and G.J. Sussman. The Conniver reference manual. Technical Report Memo 259, MIT Artificial Intelligence Laboratory, 1972.
[4] Daniel P. Miranker. TREAT: A New and Efficient Match Algorithm for AI Production Systems. Morgan Kaufmann, San Mateo, California, 1990.
[5] S. Rowley, H. Shrobe, R. Cassels, and W. Hamscher. Joshua: Uniform access to heterogeneous knowledge structures (or why joshing is better than conniving or planning). In National Conference on Artificial Intelligence, pages 48-52. AAAI, 1987.
[6] G.J. Sussman and D.V. McDermott. Why conniving is better than planning. Technical Report AI Memo 255A, MIT Artificial Intelligence Laboratory, Cambridge, Mass., 1972.
[7] G.J. Sussman, T. Winograd, and E. Charniak. The Micro-Planner reference manual. Technical Report AI Memo 203, MIT Artificial Intelligence Laboratory, Cambridge, MA, 1970.
[8] D.H.D. Warren. An abstract Prolog instruction set. Technical Report SRI Technical Note 309, SRI International, October 1983.

Rule-Based Reasoning 715
Comprehensibility Improvement of Tabular Knowledge Bases

Atsushi Sugiura†, Maximilian Riesenhuber‡, Yoshiyuki Koseki†
†C&C Systems Res. Lab., NEC Corp., 4-1-1 Miyazaki, Miyamae-ku, Kawasaki 216, JAPAN
sugiura@btl.cl.nec.co.jp

Abstract

This paper discusses the important issue of knowledge base comprehensibility and describes a technique for comprehensibility improvement. Comprehensibility is often measured by simplicity of concept description. Even in the simplest form, however, there will be a number of different DNF (Disjunctive Normal Form) descriptions possible to represent the same concept, and each of these will have a different degree of comprehensibility. In other words, simplification does not necessarily guarantee improved comprehensibility. In this paper, the authors introduce three new comprehensibility criteria, similarity, continuity, and conformity, for use with tabular knowledge bases. In addition, they propose an algorithm to convert a decision table with poor comprehensibility to one with high comprehensibility, while preserving logical equivalency. In experiments, the algorithm generated either the same or similar tables to those generated by humans.

Introduction

Two major requirements for a knowledge base are that it contain only correct knowledge and that it be comprehensible. Several techniques have been reported regarding the verification of correctness, including completeness and consistency checking [Cragun 1987; Nguyen et al. 1985]. However, little work has been reported concerning the maintenance or improvement of comprehensibility.

Comprehensibility is critical, because it strongly affects the efficiency of construction and maintenance of knowledge bases. However, the actual work of modifying knowledge descriptions so as to improve comprehensibility can prove to be a serious burden for the knowledge engineers who must manage knowledge bases. The purpose of this research is to automate such tasks.
‡Inst. of Theoretical Physics, Johann Wolfgang Goethe-Univ., Robert-Mayer-Strasse 8-10, Frankfurt am Main 60054, Fed. Rep. of GERMANY

In past work [Michalski, Carbonell, & Mitchell 1983; Coulon & Kayser 1978], the one and only method to improve the comprehensibility of a knowledge base was to simplify the concept descriptions in it. Even in the simplest form, however, there will be a number of different DNF descriptions possible to represent the same concept, and each of these will have a different degree of comprehensibility. In other words, simplification does not necessarily guarantee improved comprehensibility. Let us compare the following two logic functions of attribute-values:

[Sex Male] ∨ [Sex Female] ∧ [Pregnant? No] → CanDrink-Alcohol
[Pregnant? No] ∨ [Pregnant? Yes] ∧ [Sex Male] → CanDrink-Alcohol

While these are logically equivalent and have the same simplicity, their concept function forms, formalized by the combination of attribute values, conjunctions (∧) and disjunctions (∨), are different. The second description is incomprehensible, because it describes a case that never happens: [Pregnant? Yes] ∧ [Sex Male].

In this paper, the authors propose three additional comprehensibility criteria for use with concept function forms on decision tables: similarity among concept functions, continuity in attributes which have ordinal values, and conformity between concept functions and real cases. The first criterion is developed on the basis of an analysis of decision table characteristics. The others are developed on the basis of a consideration of the meaning embodied in expressions of knowledge.

In addition, the authors describe an algorithm to convert a decision table with poor comprehensibility to one with high comprehensibility, while preserving logical equivalency. This conversion is accomplished by using MINI-like logic minimization techniques [Hong, Cain, & Ostapko 1974], and it involves as well the use of a number of different heuristics.
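The logical equivalence of the two CanDrink-Alcohol rules can be confirmed by brute-force enumeration over the attribute domains; the encoding as Python predicates below is our own, with attribute names and values taken from the example.

```python
# Brute-force check that the two CanDrink-Alcohol rules above are logically
# equivalent; the encoding as Python predicates is our own, with attribute
# domains taken from the example.
def rule1(sex, pregnant):
    # [Sex Male] v [Sex Female] ^ [Pregnant? No]
    return sex == "Male" or (sex == "Female" and pregnant == "No")

def rule2(sex, pregnant):
    # [Pregnant? No] v [Pregnant? Yes] ^ [Sex Male]
    return pregnant == "No" or (pregnant == "Yes" and sex == "Male")

for sex in ["Male", "Female"]:
    for pregnant in ["Yes", "No"]:
        assert rule1(sex, pregnant) == rule2(sex, pregnant)
print("equivalent on all four attribute combinations")
```

The check also makes the paper's complaint visible: both rules are satisfied by (Male, Yes), a combination that never occurs in reality, and only the second rule mentions it explicitly.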
Decision Table

With regard to knowledge acquisition, completeness checking, and concept comparison, decision tables offer advantages over other methods of knowledge representation (e.g. production rules and decision trees) [Cragun 1987; Koseki, Nakakuki, & Tanaka 1991].

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Table 1: Comprehensible expression for bond selection. Table 2: Incomprehensible expression for bond selection. [Table contents are not recoverable from the scan; the attributes are Material (Paper, Leather, Plastic), Usage (Normal, Industrial), and Bonding-area (Large, Small).]

A decision table represents a set of concept functions, expressed in DNF. This construct enables the handling of disjunctive concepts which have multiple-value attributes. Each concept function consists of a number of disjuncts, called cubes. Each cube consists of a set of values for each attribute. The union of the vertices in logic space covered by a concept function is called a cover for the concept.

An example knowledge base is shown in Table 1. Here, each row forms a cube, and a set of cubes for the same concept name forms a concept function. For example, the concept function for Bond-B consists of three cubes, and each cube consists of three attributes. In a cube, the values specified by the circle (O) in an attribute are ORed, and all attributes are ANDed to form the cube. Don't-Care attributes are designated as an attribute with all Os. A min-term is a cube which has only one O in every attribute.

The decision table facilitates the comparison of different concepts, for the following reason. Concept comparison means comparing the values of all the attributes which define concepts. In this context, decision tables, in which descriptions of the same attribute are represented in the same columns and concepts are represented by all attributes, can facilitate concept comparison.
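The definitions of cube, Don't-Care, and cover can be made concrete with a small sketch; the domains mirror the bond-selection attributes, while the example cubes themselves are hypothetical, since the original tables did not survive the scan.

```python
# Sketch of cubes and covers as defined above. Each cube maps an attribute
# to the set of values circled (O) in that row; an attribute omitted from a
# cube is a Don't-Care (all values circled). The cover of a concept is the
# union of min-terms (vertices of the logic space) spanned by its cubes.
from itertools import product

DOMAINS = {"Material": ["Paper", "Leather", "Plastic"],
           "Usage": ["Normal", "Industrial"],
           "Bonding-area": ["Large", "Small"]}

def cover(cubes):
    """Union of the vertices covered by a list of cubes."""
    attrs = list(DOMAINS)
    vertices = set()
    for cube in cubes:
        choices = [cube.get(a, DOMAINS[a]) for a in attrs]  # Don't-Care -> all values
        vertices.update(product(*choices))
    return vertices

# Two hypothetical cubes for one concept (not the paper's actual table).
bond = [{"Material": ["Paper"], "Usage": ["Normal", "Industrial"]},
        {"Material": ["Leather"], "Bonding-area": ["Large"]}]
print(len(cover(bond)))   # 6 distinct min-terms
```

Two tables are equivalent in the paper's sense exactly when every concept has the same cover in both, which is what the conversion algorithm later preserves.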
In other knowledge representations, for example in production rules, the same attribute name appears in various positions of each rule, and concepts are represented only by the attributes that are necessary to define them. This disturbs easy comparison of attribute values. This advantage is critical for knowledge base construction, because, in classification problems, it is essential to compare and to classify the concepts which have the same cover.

Comprehensibility Criteria

This section presents four criteria for knowledge base comprehensibility. One is simplicity of concept description, which is the conventional criterion. The other three are concerned with concept function forms; these three new criteria are a reflection of the great influence of the concept function forms on comprehensibility. The second criterion employs general rules to facilitate the comparison of different concepts, whereas the last two require some background knowledge, namely the characteristics of attributes and their values.

Table Size

Preference criteria for human comprehensibility are commonly based on the knowledge description length (for example, [Michalski, Carbonell, & Mitchell 1983; Coulon & Kayser 1978]). Some inductive learning algorithms [Michalski, Carbonell, & Mitchell 1983; Quinlan 1986] reduce the knowledge base size to obtain simple knowledge expressions. As the conventional criterion, the authors define table size as one of the comprehensibility factors. Since the number of attributes used to define concepts is fixed in a decision table, table size can be measured by the number of cubes.

Similarity among Concept Functions

When comparing different concepts, it is desirable to be able to easily ascertain common and different features in each concept. This requires high similarity among concept functions. Table 1 and Table 2 are example knowledge bases for a bond selection problem. They have the same cover and the same table size, but their concept function forms (cube shapes) are different. In this example, Table 1 is better than Table 2, for the following reasons.

First, compare Bond-B with Bond-A. In the first cube (Bond-A) and the second cube (Bond-B) in Table 1, their forms are exactly the same. Using Table 1, the intersection of the two concepts in logic space can be determined by looking at just two cubes, whereas, using Table 2, this would require considering four cubes.

Next, compare Bond-B with Bond-C. In the third cube (Bond-B) and the fifth cube (Bond-C) in Table 1, the descriptions of attributes Usage and Bonding-area are the same; only those of attribute Material are different. This makes it easy to see that these two concepts are discriminated by attribute Material. On the other hand, in Table 2, the descriptions of most of the attributes are different, making the common and different features unclear. Overall, the comprehensibility of Table 1 arises from the high similarity among the concept function forms: the first and the second cubes, and the third and the fifth cubes.

Continuity in Attributes which have Ordinal Values

In many knowledge bases, attributes with ordinal values are expressed by a range of values (for example, material which is harder than aluminum should be shaped by grinder-A). Therefore, high continuity of Os in such attributes leads to high comprehensibility. Two example knowledge bases for scholarship approval are shown in Table 3 and Table 4, which have the same cover and the same table size. The values of the attribute Parent-income are ordinal values.

Table 3: Comprehensible expression for Scholarship Approval. [The table rows are not recoverable from the scan; the attributes are School-record (Good, Poor), Student-earn? (Yes, No), and Parent-income in ten-thousand-dollar ranges (-6, 6-7, 7-8, 8-).]
These examples implicitly embody the meaning that students whose parent income is low are granted scholarships, which can be seen clearly in Table 3. By contrast, the first cube in Table 4 shows that some students are granted scholarships if the parent income is less than $60,000 or more than $80,000. This gives the initial impression that anyone with an income of $60,000-80,000 cannot be approved. By examining other cubes this can be seen to be false, but this is time-consuming; Table 3 is preferable to Table 4.

Table 4: Incomprehensible expression for Scholarship Approval. [The table rows are not recoverable from the scan; the attributes are the same as in Table 3.]

Conformity between Concept Functions and Real Cases

In some knowledge bases, there is a dependency relationship between attributes, which the authors divide into precondition and constrained attributes. Whether the constrained attribute relates to the concept definition or not depends on the values of the precondition attribute. In such a situation, the positions of the Don't-Cares are critical, because cases that never happen in the real world may be described in knowledge bases.

Other knowledge bases, relating to earning a credit at a university, are shown in Table 5 and Table 6; they have the same cover and the same table size. These examples implicitly embody the meaning that only students who failed the first exam are eligible to take the makeup exam. There exists an attribute dependency relationship consisting of the precondition attribute Exam and the constrained attribute Makeup-exam: taking the makeup exam depends on the result of the exam. In this example, Table 5 is more comprehensible than Table 6, because of the conformity between concept functions and the real cases.
In the first cube in Table 6, a case that never happens is described: an examination result that is not less than 60 points and a makeup examination result that is less than 80 points. Table 5, however, represents only the real cases. In general, the precondition attributes should not be Don't-Cares if the constrained attributes are not Don't-Cares.

Table 5: Comprehensible expression for Credit Earning.

           Exam          Makeup-exam
           >=60   <60    >=80   <80
    Pass    O      X      O      O
    Pass    X      O      O      X
    Fail    X      O      X      O

Table 6: Incomprehensible expression for Credit Earning.

           Exam          Makeup-exam
           >=60   <60    >=80   <80
    Pass    O      X      X      O
    Pass    O      O      O      X
    Fail    X      O      X      O

Algorithm

Figure 1 shows an algorithm to improve the comprehensibility of a decision table. It converts a table with poor comprehensibility to one with high comprehensibility, while preserving logical equivalency. Table conversion is accomplished by the techniques used in the logic minimization algorithm MINI: the disjoint sharp operation, Expansion, and Reduction [Hong, Cain, & Ostapko 1974]. In these operations, the attributes are required to be ordered, and this order affects the concept function forms in the resultant table.
The proposed algorithm first determines the attribute order σ, where σ is a list of attributes that specifies the order, by some heuristics (Lines 2-10, Fig. 1), and next modifies the concept function forms by MINI's techniques (Lines 11-14, Fig. 1).

Figure 1: Algorithm for improving comprehensibility.

    Notation:
      a_i : attribute (1 <= i <= n)
      C_j : list of the cubes for the jth concept (1 <= j <= m)
      p   : precondition attribute in an attribute dependency relationship
      q   : constrained attribute in an attribute dependency relationship
      S   : set of attributes which have ordinal values

    begin
        /* Determination of attribute order */
        Calculate n_i (1 <= i <= n) by (U ⊕ (U ⊕ C_j)) with attribute order (a_i, ...)
        List1 <- list of a_i in S, sorted in increasing order of n_i
        List2 <- list of a_i not in S, sorted in increasing order of n_i
        σ <- connect List2 after List1
        for all (p, q) do
            if q is placed after p in σ then
                σ <- list in which q is moved to the position before p
            endif
        endfor
        /* Modification of concept functions */
        for C_j (1 <= j <= m) do
            C_j <- (U ⊕ (U ⊕ C_j)) with σ
        endfor
        Expand and Reduce the cubes with σ
    end

The main difference between MINI and the proposed algorithm is the heuristics used to determine the attribute order. While the heuristics in the MINI algorithm are mainly for reducing the number of cubes, those in the proposed algorithm are for improving comprehensibility.

For the explanation of the heuristics, consider a decision table constructed solely of min-terms, like Table 7. The heuristics for the attribute order σ are due to three of the four comprehensibility criteria:

1. Table size. A small-size table can be obtained by merging as many min-terms as possible. The algorithm pre-scans the table and examines the merging ability of each attribute, measured by n_i, the number of cubes after merging min-terms for all concepts only on attribute a_i. For example, n_Bonding-area = 7, as shown in Table 8. Also, n_Material = 10 and n_Usage = 8.
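The merging-ability measure of heuristic 1 can be sketched directly: min-terms of the same concept that agree on every attribute except a_i collapse into one cube, and n_i counts the surviving cubes. This is an illustration of the measure as described in the text, not the paper's ⊕-based pre-scan, and the sample min-terms are hypothetical.

```python
# Sketch of the merging-ability measure n_i from heuristic 1: min-terms of
# the same concept that agree on every attribute except attr collapse into
# one cube; n_i is the total cube count after this single-attribute merge.
# (An illustration of the measure, not the paper's pre-scan; the sample
# min-terms are hypothetical.)
from collections import defaultdict

ATTRS = ["Material", "Usage", "Bonding-area"]

def n_i(minterms_by_concept, attr):
    """minterms_by_concept: {concept: [{attribute: value, ...}, ...]}"""
    total = 0
    for terms in minterms_by_concept.values():
        groups = defaultdict(set)
        for t in terms:
            key = tuple(t[a] for a in ATTRS if a != attr)  # agree elsewhere?
            groups[key].add(t[attr])
        total += len(groups)                               # one cube per group
    return total

table = {"Bond-A": [
    {"Material": "Paper", "Usage": "Normal", "Bonding-area": "Large"},
    {"Material": "Paper", "Usage": "Normal", "Bonding-area": "Small"}]}
print(n_i(table, "Bonding-area"))   # 1: the two min-terms merge
print(n_i(table, "Material"))       # 2: they differ elsewhere, no merge
```

Attributes are then sorted by increasing n_i, so the attribute with the greatest merging ability (the fewest surviving cubes) is handled first.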
Attributes are ordered in increasing order of n_i (Lines 2-4, Fig. 1). The algorithm merges the cubes, one attribute at a time, in this order. In other words, the cubes are first merged on the attribute with high merging ability and last on the attribute with low ability. Table 1 is generated by merging the cubes in Table 7 with σ = (Bonding-area, Usage, Material).

2. Continuity in attributes which have ordinal values. Attributes with ordinal values are placed at the beginning of σ (Line 5, Fig. 1). Cubes are first merged on those attributes, generating the maximum number of Os in those attributes. As a result, high continuity of Os in those attributes can be achieved.

3. Conformity between concept functions and real cases. The attribute order is changed so that the constrained attribute is placed before the precondition attribute (Lines 6-10, Fig. 1). This change prevents Don't-Cares from being generated in the precondition attribute, because cubes are merged on the constrained attribute before considering the precondition attribute. In the example of the knowledge bases for credit earning, Table 5 and Table 6 are generated by σ = (Makeup-exam, Exam) and σ = (Exam, Makeup-exam), respectively.

After determining the attribute order σ, the concept function forms are modified. In the modification, to achieve high similarity among concept functions, another heuristic is applied.

4. Similarity among concept functions. High similarity can be achieved when cubes for many concepts are merged on the same attributes. The algorithm merges the cubes in the same order of attributes for all concepts. If, in the early stage of merging, cubes are merged on attributes on which only cubes for specific concepts can be merged, then similarity becomes low. In Table 7, if first merged on Material, cubes mainly for concept Bond-B can be merged.
Attribute order σ = (Material, Usage, Bonding-area) leads to Table 2, whose similarity is low. However, it is expected that such merging would be prevented, because the cubes are first merged on the attributes on which many cubes can be merged (see 1.).

If all cubes in the given table were converted to min-terms for the pre-scan and the modification, the algorithm would take exponential time. To reduce this to a modest amount of computational time, it uses the disjoint sharp operation F ⊕ G, where F and G are lists of cubes (details are shown in [Hong, Cain, & Ostapko 1974]). In the modification, the following feature of the ⊕ operation is utilized: U ⊕ G with σ generates more Os in the attributes placed in the earlier positions of σ, where U is the universe. This operation can generate almost the same concept function forms as merging min-terms in the attribute order σ. However, since the order of the cubes in C_j affects the number of cubes generated by U ⊕ C_j, the generated table may be redundant. To reduce the table size, cubes are Expanded and Reduced, using the σ order (Line 14, Fig. 1).

Table 7: Table constructed only by min-terms. Table 8: Table after merging min-terms on attribute Bonding-area. [The rows of both tables are not recoverable from the scan; the attributes are Material (Paper, Leather, Plastic), Usage (Normal, Industrial), and Bonding-area (Large, Small).]

Experimental Results

To evaluate the concept function forms generated by the proposed algorithm, the authors experimented on 11 real knowledge bases. In addition, they evaluated table size and computational time, using 1 real knowledge base and 24 artificial ones, which are quite large.

Concept Function Forms

The proposed algorithm is based on MINI. However, MINI's goal is logic minimization, not knowledge base modification, and the knowledge bases minimized by MINI are incomprehensible.
To confirm the comprehensibility of the generated tables, the authors experimented on 11 real knowledge bases, which have 3-7 attributes, 5-20 cubes, 2-6 concepts, and 6-20 columns. Four examples contain attributes with ordinal values and attribute dependency relationships.

Comprehensibility is evaluated by comparing the concept function forms modified by a human with those modified by the algorithm. In seven examples, the concept functions produced by the algorithm exactly corresponded to those produced by a human. In the other four examples, the results were different, but they were comprehensible to humans.

This difference is mainly due to a limitation in the algorithm: it can only generate cubes which are mutually disjoint. Moreover, the difference might partly be attributed to the heuristics for determining the attribute order. If the calculated n_i values were equal for some attributes, the algorithm would determine the order arbitrarily; it is not guaranteed that the expected concept functions are obtained. This situation was observed in two knowledge bases.

Table Size

Experimental results on 24 artificial knowledge bases showed that the algorithm performs logic minimization well. These knowledge bases have 1 concept, 10 cubes, 30 attributes, and 120 columns. In 21 knowledge bases, the size of the generated tables was exactly the same as that produced by MINI. However, in the other three knowledge bases, MINI was able to generate one or two cubes fewer than the proposed algorithm. This arises from another limitation in the algorithm: Don't-Cares are collected in the specific attributes placed in the early positions of σ.

The algorithm was also evaluated on a real knowledge base for a grinder selection problem, which has 158 concepts, 1023 cubes, 16 attributes, and 222 columns. In this experiment, the proposed algorithm generated the same number of cubes as MINI.

Computational Time

Pre-scan and modification of a table are based on the disjoint sharp operation.
However, it is difficult to estimate the exact cost of the disjoint sharp operation. This is because the cost depends on the concept cover, the attribute order, and the cube order in the right-side cube list of the ⊕.

To confirm the feasibility of the algorithm, the authors experimented on the 24 artificial knowledge bases, described in the previous subsection, with a 33 MIPS workstation. The tables were generated in 210 seconds on average, which is about 50% of MINI. The authors also experimented on the real knowledge base for the grinder selection problem. The resultant table can be obtained in 90 seconds (80 seconds by MINI). This time is not too long and is actually much shorter than the time required for modification by a knowledge engineer.

Related Work

Inductive learning algorithms, like ID3 [Quinlan 1986], can also improve the comprehensibility of concept functions. However, the produced concept functions are often incomprehensible, because of the lack of background knowledge and of comprehensibility criteria; they only use the description length criterion, implicitly. From another viewpoint, most induction algorithms generate decision trees, not decision tables. Generated decision trees may have a minimum number of nodes and leaves. However, this does not mean a minimum number of cubes.

EG2 [Núñez 1991] uses background knowledge, in the form of an IS-A hierarchy of values, to simplify the decision tree and to obtain a more comprehensible knowledge expression. However, EG2 requires much background knowledge for such simplification. In the proposed algorithm, the only background knowledge required concerns the attribute dependency relationships and the orderings of ordinal attribute values.

The ordering of ordinal attribute values is also used in INDUCE [Michalski, Carbonell, & Mitchell 1983] to generalize concepts.
The proposed algorithm does not generalize the concepts, but produces concept functions logically equivalent to those described by knowledge engineers.

Conclusion

This paper presented new comprehensibility criteria regarding concept function forms, and an algorithm for automatically producing comprehensible forms of concept functions. This algorithm is implemented on the expert-system shell DT, which handles classification problems on the decision table [Koseki, Nakakuki, & Tanaka 1991]; its usefulness has been demonstrated in several real problems. Since concept functions in decision table format can be easily converted to production rule format, this algorithm can also be applied to knowledge bases constructed with production rules.

The new criteria are general ones, which can be applied to many knowledge bases. However, comprehensibility criteria differ according to people and domains, and generated tables may not correspond exactly to the tables expected by knowledge engineers. This disagreement, however, can be overcome by a knowledge editor on DT.

Acknowledgement

The authors express their appreciation to Takeshi Yoshimura of NEC Corporation for giving them the opportunity to pursue this research.

References

[Cragun 1987] Cragun, B.J. 1987. A decision-table-based processor for checking completeness and consistency in rule-based expert systems. International Journal of Man-Machine Studies 26(5):3-19.

[Hong, Cain, & Ostapko 1974] Hong, S.J.; Cain, R.G.; and Ostapko, D.L. 1974. MINI: A Heuristic Approach for Logic Minimization. IBM Journal of Research and Development 18:443-458.

[Coulon & Kayser 1978] Coulon, D.; and Kayser, D. 1978. Learning criterion and inductive behavior. Pattern Recognition 10(1):19-25.

[Koseki, Nakakuki, & Tanaka 1991] Koseki, Y.; Nakakuki, Y.; and Tanaka, M. 1991. DT: A Classification Problem Solver with Tabular-Knowledge Acquisition.
Proceedings of the Third International Conference on Tools for Artificial Intelligence, 156-163.

[Michalski, Carbonell, & Mitchell 1983] Michalski, R.S.; Carbonell, J.G.; and Mitchell, T.M. 1983. A Theory and Methodology of Inductive Learning. In Machine Learning: An Artificial Intelligence Approach, Chapter 4, Tioga Press, Palo Alto, 83-134.

[Nguyen et al. 1985] Nguyen, T.A.; Perkins, W.A.; Laffey, T.J.; and Pecora, D. 1985. Checking an Expert System's Knowledge Base for Consistency and Completeness. Proceedings of the Ninth International Joint Conference on Artificial Intelligence, 375-378.

[Núñez 1991] Núñez, M. 1991. The Use of Background Knowledge in Decision Tree Induction. Machine Learning 6(3):231-250.

[Quinlan 1986] Quinlan, J.R. 1986. Induction of Decision Trees. Machine Learning 1(1):81-106.
Time-Saving Tips for Problem Solving with Incomplete Information*

Michael R. Genesereth and Illah R. Nourbakhsh
Computer Science Department, Stanford University, Stanford CA 94305

Abstract

Problem solving with incomplete information is usually very costly, since multiple alternatives must be taken into account in the planning process. In this paper, we present some pruning rules that lead to substantial cost savings. The rules are all based on the simple idea that, if goal achievement is the sole criterion for performance, a planner need not consider one "branch" in its search space when there is another "branch" characterized by equal or greater information. The idea is worked out for the cases of sequential planning, conditional planning, and interleaved planning and execution. The rules are of special value in this last case, as they provide a way for the problem solver to terminate its search without planning all the way to the goal and yet be assured that no important alternatives are overlooked.

*Funding was provided by the Office of Naval Research under contract number N00014-90-J-1533.

Introduction

In much of the early literature on robot problem solving, the problem solver is assumed to have complete information about the initial state of the world. In some cases, the information is provided to the robot by its programmer; in other cases, the information is obtained through a period of exploration and observation.

In fact, complete information is rarely available. In some cases, the models used by our robots are quantitatively inaccurate (leading to errors in position, velocity, etc.). In some cases, the incompleteness of information is more qualitative (e.g. the robot does not know the room in which an essential tool is located). In this paper, we concentrate on problem solving with incomplete information of the latter sort.

There are, of course, multiple ways to deal with qualitatively incomplete information. To illustrate some of the alternatives, consider a robot in a machine shop. The robot's goal is to fabricate a part by boring a hole in a piece of stock, and it decides to do this by using a drill press. The complication is that there might or might not be some debris on the drill press table.

In some cases, it may be possible to formulate a sequential plan that solves the problem. One possibility is a sequential plan that covers many states by using powerful operators with the same effects in those states. In our example, the robot might intend to use a workpiece fixture that fits into position whether or not there is debris on the table. Another possibility is a sequential plan that coerces many states into a single known state. For example, the robot could insert into its plan the action of sweeping the table. Whether or not there is debris, this action will result in a state in which there is no debris.

A second possibility is for the planner to insert a conditional into the plan, so that the robot will examine the table before acting, in one case (debris present) clearing the table, in the other case (table clear) proceeding without delay.

A more interesting possibility is for the planner to interleave planning and execution, deferring some planning effort until more information is available. For example, the robot plans how to get its materials to the drill press but then suspends further planning until after those steps are executed and further information about the state of the table is available.

The difficulty with all of these approaches is that, in the absence of any good pruning rules, the planning cost is extremely high. In the case of deferred planning, the absence of good termination rules means that the problem solver must plan all the way to the goal, thus eliminating the principal value of the approach.

In this paper, we present some powerful pruning rules for planning in the face of incomplete information. The rules are all based on the simple idea that, if goal achievement is the sole criterion for performance, a planner need not consider one "branch" in its search space when there is another "branch" characterized by equal or greater information.

Fikes introduced interleaved planning and execution in the limited instance of plan modification during execution [Fikes 1972]. Rosenschein's work on dynamic logic formalized conditional planning but paid little attention to computational aspects [Rosenschein 1981]. More recent works do provide domain-dependent guidance, but have not uncovered methods that generalize across domains [Hsu 1990], [Olawsky 1990], [Etzioni 1992].

In the next section, we give our definition for problem solving. In section 3, we present a traditional approach to problem solving with complete information. In sections 4-6, we present pruning rules for the three approaches to problem solving mentioned above. Section 7 offers some experimental results on the use of our rules. The final section summarizes the main results of the paper and describes some limitations of this work.

Problem Solving

Our definition of problem solving assumes a division of the world into two interacting parts - an agent and its environment. The outputs of the agent (its actions) are the inputs to the environment, and the outputs of the environment are the inputs to the agent (its percepts).
Formally, we specify the behavior of our agent as a tuple (P, B, A, int, ext, b1), where P is a set of input objects (the agent's percepts), B is a set of internal states, A is a set of output objects (the agent's actions), int is a function from P x B into B (the agent's state transition function), ext is a function from P x B into A (the agent's action function), and b1 is a member of B (the agent's initial internal state).

We characterize the behavior of an agent's environment as a tuple (A, E, P, see, do, e1), where A is a finite set of actions, E is a set of world states, P is a finite set of distinct percepts, see is a function that maps each world state into its corresponding percept, do is a function that maps an action and a state into the state that results from the application of the given action in the given state, and e1 is an initial state of the world.

Note the strong similarity between our characterization of an agent's behavior and that of its environment. There is only one asymmetry - the see function is a function only of the environment's state, whereas the ext function of an agent is a function of both the percept and the internal state. (For automata theorists, our agent is a Mealy machine, whereas our environment is a Moore machine.) This asymmetry is of no real significance and can, with a little care, be eliminated; it just simplifies the analysis.

The behavior of an agent in its environment is cyclical. At the outset, the agent has a particular state b1, and the environment is in a particular state e1. The environment presents the agent with a percept p1 (based on see), and the agent uses this percept and its internal state to select an action a1 to perform (based on the ext function). The agent then updates its internal state to b2 (in accordance with int), and the environment changes to a new state e2 (in accordance with do). The cycle then repeats.
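The cycle just described is easy to render literally; the dictionaries of functions below are stand-ins for the tuples (P, B, A, int, ext, b1) and (A, E, P, see, do, e1), and the toy counter world is our own example, not from the paper.

```python
# A literal rendering of the cycle defined above: the environment is a Moore
# machine (percept from state alone), the agent a Mealy machine (action from
# percept and internal state).
def run(agent, env, steps):
    b, e = agent["b1"], env["e1"]
    trace = []
    for _ in range(steps):
        p = env["see"](e)        # environment emits a percept
        a = agent["ext"](p, b)   # agent selects an action
        b = agent["int"](p, b)   # agent updates its internal state
        e = env["do"](a, e)      # environment moves to its next state
        trace.append(e)
    return trace

# Toy counter world: the percept is the state's parity; the agent always
# emits "inc" and keeps a trivial internal state.
env = {"e1": 0, "see": lambda e: e % 2, "do": lambda a, e: e + 1}
agent = {"b1": None, "ext": lambda p, b: "inc", "int": lambda p, b: b}
print(run(agent, env, 3))   # [1, 2, 3]
```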
In what follows, we define a goal to be a set of states of an environment. We say that an agent achieves a goal G if and only if there is some time step n on which the environment enters a state in the goal set:

∃n . e_n ∈ G

In problem solving with complete information, the agent has the advantage of complete information about the environment and its goal. In problem solving with incomplete information, some of this information is missing or incomplete. The pruning rules presented here are fully general and apply equally well in cases of uncertainty about initial state, percepts, and actions. However, for the sake of presentational simplicity, we restrict our attention to uncertainty about the robot's initial state. In our version, the robot's job is to achieve a goal G, when started in an environment (A, E, P, see, do, e), where e is any member of a set of states I ⊆ E.

Problem Solving with Complete Information

The traditional approach to problem solving with complete information is sequential planning and execution. An agent, given a description of an initial state and a set of goal states, first produces a plan of operation, then executes that plan.

In single state sequential planning, information about the behavior of the agent's environment is represented in the form of a state graph, i.e. a labelled, directed graph in which nodes denote states of the agent's environment, node labels denote percepts, and arc labels denote actions. There is an arc (s1, s2) in the graph if and only if the action denoted by the label on the arc transforms the state denoted by s1 into the state denoted by s2. By convention, all labelled arcs that begin and end at the same state are omitted.

To find a plan, the robot searches the environment's state graph for a path connecting its single initial state to a goal state. If such a path exists, it forms a sequential plan from the labels on the arcs along the path.
Obviously, there are many ways to conduct this search: forward, backward, bidirectional, depth-first, breadth-first, iterative deepening, etc. If the search is done in breadth-first fashion or with iterative deepening, the shortest path will be found first.

As an illustration of this method, consider an application area known as the Square World. The geography of this world consists of a set of 4 cells laid out on a 2-by-2 square. The cells are labelled a, b, c, d in a clockwise fashion, starting at the upper left cell. There is a robot in one of the cells and some gold in another. One state of the Square World is shown on the left in Figure 1. The robot is in cell a and the gold is in cell c. The picture on the right illustrates another state. In this case, the robot is in cell b and the gold is in cell d.

[Figure 1: Square World]

[Figure 2: State Graph of Square World]

If we concentrate on the location of the robot and the gold only, then there are 20 possible states. The robot can be in any one of 4 cells, and the gold can be in any one of 4 cells or in the grasp of the robot (5 possibilities in all). Given our point of view, we can distinguish every one of these states from every other state.

By contrast, consider an agent with a single sensor that determines whether the gold is in the grip of the robot, in the same cell, or elsewhere. This sensory limitation induces a partition of the Square World's 20 states into 3 subsets. The first subset contains the 4 states in which the robot grasps the gold. The second subset consists of the 4 states in which the gold and the robot are in the same cell. The third subset consists of the 12 states in which the gold and the robot are located in different cells.

The Square World has four possible actions. The agent has a single movement action move, which moves the robot around the square in a clockwise direction one cell at a time.
In addition, the agent can grasp the gold if the gold is occupying the same cell, and it can drop the gold if it is holding the gold, leading to 2 more actions grab and drop. Finally, it can do nothing, i.e. execute the noop action.

Our robot's objective in the Square World problem is to get itself and the gold to the upper left cell. In this case, the goal G is a singleton set consisting of just this one state.

Figure 2 presents the state graph for the Square World. The labels inside the nodes denote the states. The first letter of each label denotes the location of the robot. The second letter denotes the location of the gold, using the same notation as for the robot, with the addition of i indicating that the gold is in the grip of the robot.

[Figure 3: State-set Graph of Square World]

The structure of the graph clarifies the robot's three percepts. The four inner states indicate that the robot is holding the gold. The next four states indicate that the gold is in the same cell as the robot (aa, bb, cc, dd). The outermost twelve states indicate that the robot and the gold are in different locations.

Looking at the graph in Figure 2, we see that there are multiple paths connecting the Square World state ac to state aa. Consequently, there are multiple plans for achieving aa from ac. It is a simple matter for sssp to find these paths. If the search is done in breadth-first fashion, the result will be the shortest path: the sequence move, move, grab, move, move, drop.

Sequential Planning with Incomplete Information

In problem solving with incomplete information, our robot knows that its initial state is a member of a set I of possible states. How can this robot reach the goal, given a state graph of the world and this set I? One approach is to derive a single sequential plan that is guaranteed to reach the goal no matter which state in I is the actual initial state. The robot can execute such a plan with confidence.
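The Square World and the sssp search above are small enough to reproduce directly. The following sketch uses our own state encoding (a state is a pair of robot cell and gold location, with 'i' meaning the gold is in the grip); it recovers the 20 states, the 4/4/12 percept partition, and the shortest plan from state ac to state aa:

```python
# Sketch of the Square World (state encoding and function names are ours,
# not the paper's). A state is (robot_cell, gold_loc); gold_loc is a cell
# or 'i' when the gold is in the robot's grip.
from collections import deque
from itertools import product

CELLS = ['a', 'b', 'c', 'd']                      # clockwise from upper left
NEXT = {'a': 'b', 'b': 'c', 'c': 'd', 'd': 'a'}   # clockwise movement

def do(action, state):
    """Transition function for the four Square World actions."""
    r, g = state
    if action == 'move':
        return (NEXT[r], g)
    if action == 'grab' and g == r:
        return (r, 'i')
    if action == 'drop' and g == 'i':
        return (r, r)
    return state                                   # noop / inapplicable action

def percept(state):
    """The single sensor: gold gripped, in the robot's cell, or elsewhere."""
    r, g = state
    return 'gripped' if g == 'i' else ('same-cell' if g == r else 'elsewhere')

STATES = [(r, g) for r, g in product(CELLS, CELLS + ['i'])]
assert len(STATES) == 20                           # 4 robot cells x 5 gold spots

def sssp(start, goal, actions=('move', 'grab', 'drop', 'noop')):
    """Single state sequential planning: breadth-first search over the state
    graph, so the shortest action sequence is found first."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        s, plan = frontier.popleft()
        if s == goal:
            return plan
        for a in actions:
            t = do(a, s)
            if t not in seen:
                seen.add(t)
                frontier.append((t, plan + [a]))
    return None

print(sssp(('a', 'c'), ('a', 'a')))
# ['move', 'move', 'grab', 'move', 'move', 'drop']
```

The recovered plan matches the shortest path quoted in the text, and counting percepts over STATES reproduces the 4 gripped, 4 same-cell, and 12 elsewhere states.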
A multiple state sequential planner finds a sequen- tial plan if a sequential solution exists using a state-set graph instead of the state graph that sssp uses. In the state-set graph, a node is a set of states. An action arc connects node n1 to node n2 if n2 contains exactly the states obtained by performing the corresponding action in the states of nl. Figure 3 illustrates a partial state-set graph for the Square World. In this case, the robot knows at the outset that it is in the upper right cell. However, it does not know the whereabouts of the gold, other than that it is not in its grasp or in its cell. Therefore, the initial state set I consists of exactly three states: bb, bc, and bd. 726 Genesereth Note that actions can change the state-set size, both increasing node size and coercing the world to decrease node size. The mssp architecture begins with node I and expands the state-set graph breadth-first until it encounters a node that is a subset of the goal node. Msspa expands nodes using Results(N), which returns all nodes that result from the application of each a E A to node N. The following is a simple version of such an algorithm. For a more thorough treatment of problem solving search algorithms, see [Genesereth 19921. MSSPA Algorithm 1. graph= I, frontier= (I) 2. S = Pop(frontier) 3. If S C G go to 6. 4. frontier = Append(frontier, Results(S)) 5. Go to 2. 6. Execute the actions of the path from I to S in graph. One nice property of this approach is that it is guar- anteed - the robot will achieve its goal if there is a guaranteed sequential plan. Furthermore, it will find the plan of minimal length. However, the cost of simple mssp is very high. Given i = III, g = ICI, a = IAl, and search depth I%, the cost cost nas3p of finding a plan is proportional to igak. Fortunately, many of the paths in the state-set graph can be ignored; these are useless partial plans. 
Any path reaching a node that is identical to some earlier node in that path is accomplishing nothing. Furthermore, any path that leads to a state from which there is no escape is simply trapping the robot. Finally, if we compare two paths and can show that one path is always as good as the other path, we needn't bother with the inferior path.

We formally define useless in terms of any partial plan that begins at any node in the graph. Therefore, note the distinction between the root node, the current node being expanded, and node I, the node at which our solution plan must begin:

A partial plan q is useless with respect to root (the current node) and result-node(q) (the resultant node of executing plan q from root) if (1) there is a node n on the path from I to root (inclusive) such that n is a subset of result-node(q), (2) there is a state s in result-node(q) that has no outgoing arcs in the state graph, or (3) there is a plan r such that q is not a sub-plan of r and result-node(r) is a proper subset of result-node(q).

Pruning Rule 1: Sequential Planning
Prune any branch of the state-set graph that leads only to useless plans.

Theorem: Pruning Rule 1 preserves completeness.

Furthermore, we can guarantee the minimal solution by modifying condition (3) to (3e): there is some plan r such that length(r) ≤ length(q) and result-node(r) is a proper subset of result-node(q).

Note that once the planner finds a useless partial plan, it can prune all extensions of that plan, since any solution from the result-node of a useless plan must work either from an earlier node (1) or from some other plan's result-node (3). This rule can lead to significant cost savings. Recall that cost_mssp was iga^k. If the pruning rule decreases the branching factor from a to a/b and searches to depth d for case 3, the cost of mssp including the cost of Pruning Rule 1 is proportional to (ki^2 + ai + adi + ig)a^k/b^k.
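The three uselessness conditions of Pruning Rule 1 are pure set tests on state sets. A sketch (helper names are ours; state sets are frozensets):

```python
# Sketch of the three "useless" tests from Pruning Rule 1 (names are ours).

def useless(result_node, path_to_root, dead_states, rivals):
    """result_node: state set reached by partial plan q;
    path_to_root: nodes from I to the current root, inclusive;
    dead_states: states with no outgoing arcs in the state graph;
    rivals: result nodes of the other partial plans under consideration."""
    # (1) some earlier node on the path is a subset of q's result node
    if any(n <= result_node for n in path_to_root):
        return True
    # (2) q can trap the robot in an inescapable state
    if result_node & dead_states:
        return True
    # (3) some rival plan reaches a strictly more informed (proper subset) node
    if any(r < result_node for r in rivals):
        return True
    return False

path = [frozenset({1, 2, 3})]                # just the root node I here
assert useless(frozenset({1, 2, 3, 4}), path, frozenset(), [])          # (1)
assert useless(frozenset({5}), path, frozenset({5}), [])                # (2)
assert useless(frozenset({1, 4}), path, frozenset(), [frozenset({1})])  # (3)
assert not useless(frozenset({1}), path, frozenset(), [frozenset({1})])
```

Condition (3) implements the "equal or greater information" idea from the introduction: a smaller state set is a strictly more informed search node.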
We would have savings when:

new cost / old cost = (ki + a + ad + g) / (gb^k) < 1

As a result of the b^k term in the denominator, cost_mssp+heuristics will grow significantly more slowly than cost_mssp as the solution length increases.

Conditional Planning

Sequential planning has a serious flaw: some problems require perceptual input for success. In these cases, a sequential planner would fail to find a solution although the system can reach the goal if it consults its sensory input. We need a planner that will find such solutions.

A multiple state conditional planner finds the minimal conditional solution using a conditional state-set graph. This graph alternates perceptory and effectory nodes. An effectory node has action arcs emanating from it and percept arcs leading to it. A perceptory node has percept arcs emanating from it and action arcs leading to it. Action arcs connect nodes exactly as in state-set graphs. Percept arcs are labelled with percept names and lead to nodes representing the subset of the originating states that is consistent with the corresponding percept. Figure 4 illustrates part of a conditional state-set graph.

Mscp begins with just the state set I and expands the conditional state-set graph in a breadth-first (or iterative deepening) manner until it finds a solution. The planner uses both Results and Sees, which expands a perceptory node into a set of nodes, to accomplish the construction. Searching this graph is much less trivial and often more costly than the state-set graph search that mssp conducts. This is basically an and-or graph search problem.

Mscp returns a conditional plan. This plan specifies a sequence of actions for every possible sequence of inputs. It is effectively a series of nested case statements that branch based upon perceptual inputs. Mscpa then executes the conditional plan by checking the robot's percepts against case statements and executing the corresponding sub-plans.
Below is a greatly simplified version of the mscp algorithm.

MSCPA Algorithm
1. graph = I (a perceptory node)
2. Expand every unexpanded perceptory node n using Sees(n).
3. If there is a sub-graph of graph that specifies all action arcs and reaches a subset of G for every possible series of percept arcs, then go to 6.
4. Expand every unexpanded effectory node m using Results(m).
5. Go to 2.
6. Execute that sub-graph as a conditional plan.

Mscpa will reach the goal with a minimal action sequence provided that there is a conditional solution. Unfortunately, greater power has a price. At its worst, cost_mscp is even greater than cost_mssp because the space contains perceptual branches: igp^k a^k.

To extend Pruning Rule 1 to mscp, remember that a sequential plan has a single "result node" while a conditional plan has many possible "result nodes." We define result-nodes(q) to be the set of possible resultant nodes (depending upon perceptory inputs) of conditional plan q. Pruning Rule parts (1) and (2) require trivial changes to take this into account. But part (3) now intends to compare two plans, which amounts to comparing two sets of result-nodes.

We define domination such that if plan r dominates plan q, then if there is a solution from result-nodes(q), there must be a solution from result-nodes(r). Each node in result-nodes(r) is dominating if it is a proper subset of some node in result-nodes(q). But "result-nodes" that maintain goal-reachability and do not introduce infinite loops are also acceptable. Therefore, we also state that n is dominating if it has reached the goal or even if it is a proper subset of root. Below, we define domination and revisit useless in terms of conditional plans:

Formally, conditional plan r dominates a conditional plan q if and only if
(A) ∀ nq ∈ result-nodes(q), ∃ nr ∈ result-nodes(r) such that nr ⊆ nq, and
(B) ∀ nr ∈ result-nodes(r), either
  1. ∃ nq ∈ result-nodes(q) such that nr ⊂ nq, or
  2. nr ⊆ G, or
  3. nr ⊂ root.
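The domination test is again a pair of set conditions. A sketch of our reading of (A) and (B), with result-node sets represented as sets of frozensets (all names and the toy sets are ours):

```python
# Sketch of conditional-plan domination, conditions (A)/(B) above (names ours).

def dominates(r_nodes, q_nodes, root, G):
    """True if the plan with result nodes r_nodes dominates the one with
    result nodes q_nodes, relative to the current root and the goal set G."""
    # (A) every result node of q is refined by some result node of r
    if not all(any(nr <= nq for nr in r_nodes) for nq in q_nodes):
        return False
    # (B) every result node of r is a proper subset of some q node (B1),
    #     has reached the goal (B2), or is a proper subset of root (B3)
    return all(any(nr < nq for nq in q_nodes) or nr <= G or nr < root
               for nr in r_nodes)

root = frozenset({1, 2, 3})
G = frozenset({9})
q = {frozenset({1, 2, 3})}
assert dominates({frozenset({1, 2})}, q, root, G)                  # B1 holds
assert dominates({frozenset({1, 2}), frozenset({9})}, q, root, G)  # B1 and B2
assert not dominates({frozenset({1, 2, 3})}, q, root, G)           # (A) ok, B fails
```

The last case shows why (B) is needed: a plan that merely matches q's information without progress toward the goal, the root, or a refinement of q does not dominate it.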
A partial conditional plan q is useless with respect to root and result-nodes(q) if:
(1) There is a node n on the path from I to root (inclusive) and there is a node nq in result-nodes(q) such that n is a subset of nq, or
(2) There is a node nq in result-nodes(q) such that there is a state in nq with no outgoing action arcs in the state graph, or
(3) There is a partial conditional plan r such that r dominates q.

Pruning Rule 2: Conditional Planning
Prune any branch of the conditional state-set graph that leads only to useless plans.

Theorem: Pruning Rule 2 preserves completeness.

Once again, we can reimpose the minimality guarantee by modifying condition (3): (3e) There is a plan r such that length(r) ≤ length(q) and r dominates q without use of case B3.

Cost analysis of mscp with Pruning Rule 2 yields results identical to cost_mssp with the exception that all a^k terms become a^k p^k. The pruning rule again provides search space savings as solution length increases.

Interleaved Planning and Execution

Conditional planning is an excellent choice when the planner can extract a solution in reasonable time. But this is not an easy condition to meet. As the branching factor and solution length increase mildly, conditional planning becomes prohibitively expensive in short order. These are cases in which conditional planning wastes much planning energy by examining simply too much of the search space.

What if the system could cut its search short and execute effective partial conditional plans? The system could track its perceptual inputs during execution and pinpoint its resultant fringe node at the end of execution. The planner could continue planning from this particular fringe node instead of planning for every possible fringe node. This two-phase cycle would continue until the system found itself at the goal.

DPA Algorithm
1. states = I.
2. If states ⊆ G exit (success!!!)
3. Invoke terminating mscp from states and return the resultant conditional plan.
4.
Execute the conditional plan, updating states during execution.
5. Go to 2.

Dpa will reach the goal provided that there is a conditional solution and the search termination rules preserve completeness.

[Figure 5: Search Space Savings of DPA]

Assume for the moment that our search termination rules return sub-plans of the minimal conditional plan from I to G. We can quantify the dramatic search space savings of delayed planning in this case. Recall that the cost of conditional planning is igp^k a^k. The cost of delayed planning is the sum of the costs of each conditional planning episode. If there are j such episodes, then the total cost of delayed planning is jigp^(k/j) a^(k/j). Figure 5 demonstrates the savings, representing mscpa with a large triangle and dpa by successive small triangles. Note that if the system could terminate search at every step, the search cost would simplify to a linear one: kigpa.

Let us return to the search termination problem: how can the planner tell that a particular plan is worth executing although it does not take the system to the goal? The intuition is clear in situations where all our actions are clearly inferior to one action: we might as well execute that one action before planning further. For example, suppose Sally the robot is trying to deliver a package. She is facing the staircase and has two available actions: move forward and turn right 90 degrees. The pruning rules would realize that flying down the stairs is useless (deadly) and the planner should immediately return the turn right action. We can generalize this rule from single actions to partial plans.

Termination Rule 1 (Forced Plan): If there exists a plan r such that for all plans q either q is useless or r is a sub-plan of q, then return r as a forced plan.

Theorem: Termination Rule 1 preserves completeness and provides a minimal solution.

The forced plan rule has trivial cost when its conditional planner is using Pruning Rule 2.
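Over explicit candidate plans, the forced plan test amounts to a common-prefix check on the non-useless candidates. A sketch of this reading (plans as action tuples; the `useless_set` judgments and Sally's staircase plans are our own toy data):

```python
# Sketch of the Forced Plan test: if every non-useless candidate plan starts
# with the same sub-plan r, return r and execute it before planning further.

def common_prefix(plans):
    """Longest common prefix of a list of action sequences."""
    if not plans:
        return ()
    prefix = plans[0]
    for p in plans[1:]:
        n = 0
        while n < min(len(prefix), len(p)) and prefix[n] == p[n]:
            n += 1
        prefix = prefix[:n]
    return prefix

def forced_plan(candidates, useless_set):
    """Return a shared sub-plan of all non-useless candidates, if any."""
    viable = [p for p in candidates if p not in useless_set]
    return common_prefix(viable) or None

# Sally at the staircase: every non-useless plan begins by turning right.
plans = [('forward',), ('right', 'forward'), ('right', 'right', 'forward')]
print(forced_plan(plans, useless_set={('forward',)}))  # ('right',)
```

As the text notes, the rule fails when two disparate solutions exist: if the viable plans share no first action, `forced_plan` returns None and the planner must keep searching.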
Unfortunately, the forced plan criterion can be difficult to satisfy. This rule requires that every non-useless solution from root share at least a common first action. This fails when there are two disparate solutions to the same problem. Still, complete conditional planning to the goal may be prohibitively expensive. We need a termination rule with weaker criteria.

The viable plan rule will select a plan based upon its own merit, never comparing two plans. The foremost feature of any viable plan is reversibility. We want to ensure that the plan does not destroy the ability of the system to reach the goal. This justifies the requirement that each fringe node of a viable plan be a subset of root. A viable plan must also guarantee some sort of progress toward the goal. We guarantee such progress by requiring every fringe node to be a proper subset of root. Each viable plan will decrease uncertainty by decreasing the root state set size. This can occur at most |I| - 1 times.

Termination Rule 2 (Viable Plan): If there exists a plan r such that for all nodes nr in result-nodes(r), nr is a proper subset of root, then return r as a viable plan.

Theorem: Termination Rule 2 preserves completeness.

The fact that the viable plan rule does not preserve minimality introduces a new issue: how much of the viable plan should the system execute before returning to planning? Reasonable choices range from the first action to the entire plan. Experimental and qualitative analysis indicates that this variable allows a very mild tradeoff between planning time and execution time.

Average-case cost analysis of dpa using the Viable Plan Rule yields hopeful results. Recall that pure conditional planning would cost igp^k a^k. Suppose a dpa system executes n partial plans of depth j, resulting in node I_n with size h. From I_n, there are no search termination opportunities and the planner must plan straight to the goal. Assume that there is some c such that i = c·h.
The cost per node of the Viable Plan Rule is i^2.

For case 1, assume g > h. The cost from I to I_n is n(i^2 + ig)p^j a^j. The worst-case cost from I_n to the goal is (h^2 + hg)p^k a^k when I_n is no closer to the goal than I. This can occur precisely when g > h and coercion is not necessary. When we divide the cost of dpa by the cost of mscp we are left with savings when:

n(i + g)p^j a^j / (g p^k a^k) + h/i + h^2/(ig) < 1

For case 2, assume g ≤ h. Then a number of coercive actions occur along the way from I to G. If we assume that these coercives are distributed evenly, then there are (h - g)/2 coercives from I_n to the goal and k - (i - h)/2 total steps from I_n to the goal. The total cost changes to n(i^2 + ig)p^j a^j + (h^2 + hg)p^(k-(i-h)/2) a^(k-(i-h)/2). The third term, h^2/(ig), changes to h/(g p^((i-h)/2) a^((i-h)/2)), which is now less than one since we assumed that g ≤ h.

Experimental Results

We implemented these planners in four domains using property space representations, in which sets of properties correspond to sets of states satisfying those properties. For DPA, we implemented both termination criteria and executed the first step of viable plans. MJH World is a realistic indoor navigation problem. Wumpus World is a traditional hero, gold, and monster game. The Bay Area Transit Problem [Hsu 1990] models an attempt to travel from Berkeley to Stanford despite traffic jams. The Tool Box Problem [Olawsky 1990] describes two tool boxes that our robot must bolt.

[Table: values of p, a, i, and g for the MJH1, MJH2, MJH3, WUM1, WUM2, BAT, and TBOX domains]

Below are running times (in seconds) and plan lengths, including average length in brackets, for all architectures with and without pruning rules. The DPA statistics were derived by running DPA on every initial state and averaging the running times. The dash (-) signifies no solution and the asterisk (*) indicates no solution after 24 hours running time.
[Table: running times in seconds and plan length ranges (Len_dpa, Len_ideal, with averages in brackets) for SPA, SPAh, CPA, CPAh, and DPA on MJH1, MJH2, MJH3, WUM1, WUM2, BAT, and TBOX]

BAT introduces a huge initial state set and a high branching factor. DPA time results for BAT are based upon a random sampling of thirty actual initial states. TBOX is the hardest problem because the action branching factor is so high that even sequential planning with complete information is impossible without pruning. The TBOX running times are based upon running DPA on every I possible in the Tool Box World. Our DPA planner never issued an unbolt command in any TBOX solution. Olawsky regards the use of unbolt as a failure and, using that definition, our termination rules produced zero failures in TBOX. A surprising result concerning both of these large domains is that the execution lengths were extremely similar to the ideal execution lengths.

Conclusion

This paper presents some powerful pruning rules for problem solving with incomplete information. These rules are all domain-independent and lead to substantial savings in planning cost, both in theoretical analysis and on practical problems. The rules are of special importance in the case of interleaved planning and execution in that they allow the planner to terminate search without planning to the goal.

Although our analysis concentrates exclusively on uncertainty about initial states, the rules are equally relevant to uncertainty about percepts and actions. Our analysis also assumes that state sets are represented explicitly, but the pruning rules apply equally well to planners based on explicit enumerations of property sets (e.g. Strips) and logic-based methods (e.g. Green's method).

One substantial limitation of this work is our emphasis on state goals.
We have not considered the value of these methods or rules on problems involving conditional goals or process goals. We have also not considered the interactions of our rules with methods for coping with numerical uncertainty. Further work is needed in both areas.

Acknowledgements

David Smith introduced the machine shop robot example. Sarah Morse provided a helpful early critique of this paper. Tomas Uribe provided useful late-night suggestions.

References

Etzioni, O., Hanks, S., and Weld, D. 1992. An Approach to Planning with Incomplete Information. In Proceedings of the Third International Conference on Knowledge Representation and Reasoning.

Fikes, R. E., Hart, P. E., and Nilsson, N. J. 1972. Learning and Executing Generalized Robot Plans. Artificial Intelligence 3(4): 251-288.

Genesereth, M. R. 1992. Discrete Systems. Course notes for CS 222. Stanford, CA: Stanford University.

Hsu, J. 1990. Partial Planning with Incomplete Information. In Proceedings of the AAAI Spring Symposium on Planning in Uncertain, Unpredictable, or Changing Environments. Menlo Park, Calif.: AAAI Press.

Olawsky, D., and Gini, M. 1990. Deferred Planning and Sensor Use. In Proceedings of the DARPA Workshop on Innovative Approaches to Planning, Scheduling, and Control. Los Altos, Calif.: Morgan Kaufmann.

Rosenschein, S. J. 1981. Plan Synthesis: A Logical Perspective. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence. Vancouver, British Columbia, Canada.
Decomposition of Domains Based on the Micro-Structure of Finite Constraint-Satisfaction Problems

Philippe Jegou*
L.I.U.P. - Université de Provence
3, place Victor Hugo
F-13331 Marseille cedex 3, France
jegou@gyptis.univ-mrs.fr

Abstract

In this paper, we present a method for improving search efficiency in the area of Constraint-Satisfaction-Problems in finite domains. This method is based on the analysis of the "micro-structure" of a CSP. We call micro-structure of a CSP the graph defined by the compatibility relations between variable-value pairs: vertices are these pairs, and edges are defined by pairs of compatible vertices. Given the micro-structure of a CSP, we can realize a preprocessing to simplify the problem with a decomposition of the domains of variables. So, we propose a new approach to problem decomposition in the field of CSPs, well adjusted to cases in which classical decomposition methods are without interest (i.e. when the constraint graph is complete). The method is described in the paper and a complexity analysis is presented, giving theoretical justifications of the approach. Furthermore, two polynomial classes of CSPs are induced by this approach, the recognition of them being linear in the size of the instance of CSP considered.

Introduction

Constraint-satisfaction problems (CSPs) involve the assignment of values to variables which are subject to a set of constraints. Examples of CSPs are map coloring, conjunctive queries in relational databases, line drawing understanding, pattern matching in production rule systems, combinatorial puzzles, etc. In the general case, finding a solution or testing if a CSP admits a solution is an NP-complete problem. A well known method for solving CSPs is the Backtrack procedure. If n is the number of variables, d the size of the domains of variables, and m the number of constraints, the complexity of this procedure is O(m·d^n).
A better bound is given using decomposition methods such as tree-clustering (Dechter & Pearl 1989) or the cycle-cutset method (Dechter 1990). The complexity is then of the order of d^K, K being a parameter that is a function of the structure of the CSP (the constraint graph). If the constraint network is a complete graph, then K = n.

*This work is supported by the BAHIA project of the PRC-GDR IA of CNRS.

The decomposition methods are based on the structure of the CSP, i.e. the structure of the constraint graph. In this paper, we present a decomposition method based on the "micro-structure" of the CSP. We call micro-structure of a CSP the graph defined by the compatibility relations between variable-value pairs: vertices are these pairs, and edges are defined by pairs of compatible vertices (compatible values).

Given the graph associated to the micro-structure of a CSP, the problem of finding a solution to the CSP is equivalent to the problem of finding an n-clique (a set of n vertices that induces a complete subgraph) in the micro-structure. Considering this property, we use triangulation of graphs (Kjærulff 1990) and clustering of values driven by maximal cliques in the micro-structure to decompose the micro-structure associated to the CSP P to solve. This approach is motivated by the good algorithmic properties of triangulated graphs, particularly for finding maximal cliques. Every maximal clique induces a domain decomposition, and so generates a collection of problems P1, P2, ..., Pp equivalent to the initial problem P. Each problem Pi corresponds to a sub-problem of P with a size of domains equal to δi, with the inequality δi ≤ d. So the complexity of solving P is now the sum of the complexities O(m·δi^n), for i = 1, 2, ..., p. The complexity of the decomposition is linear in the size of the problem P, and the number of new sub-problems is at most linear in the size of P.
The quality of the decomposition is related to the value of each δi: the smaller the value δi, the better the decomposition. For example, if δi = 1 or 2, the complexity of the problem Pi is polynomial.

The second section introduces some preliminaries about CSPs while the third section defines formally the micro-structure. The method of domain decomposition is presented in the next section. This is followed by a theoretical analysis of the method, concerning a complexity analysis, and showing some polynomial classes of problems associated to the method.

Preliminaries

A finite CSP (Constraint Satisfaction Problem) is defined as a set X of n variables X1, X2, ..., Xn, a set D of finite domains D1, D2, ..., Dn, and a set C of m constraints C1, C2, ..., Cm. A constraint Ci is defined on a set of variables {Xi1, ..., Xiq} by a subset of the Cartesian product Di1 × ... × Diq; we note this subset Ri (Ri specifies which values of the variables are compatible with each other). R is the set of all Ri, for i = 1 ... m. So, we denote a CSP P = (X, D, C, R). A solution is an assignment of values to all variables which satisfies all the constraints. For a CSP P, the hypergraph (X, C) is called the constraint hypergraph. A binary CSP is one in which all the constraints are binary, i.e. they involve only pairs of variables, so (X, C) is then a graph (called the constraint graph) associated to (X, D, C, R). This paper deals only with binary CSPs. To simplify notations for binary CSPs, a constraint between variables Xi and Xj is denoted Cij, and the associated relation Rij. For a given CSP, the problem is either to find all solutions or one solution, or to know if there exists any solution; the last problem is known to be NP-complete.

[Figure 1. Binary CSP with complete constraint graph.]

CSPs are normally solved by different versions of backtrack search.
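The definitions above translate directly into a small data structure and backtrack solver. The following sketch uses a made-up 4-variable binary CSP with 2-value domains (the variables, values, and relations are our own illustration, not the instance of Figure 1):

```python
# Sketch of a binary CSP P = (X, D, C, R) and a plain O(m.d^n) backtrack
# search. The concrete instance below is invented for illustration.

X = ['X1', 'X2', 'X3', 'X4']
D = {'X1': ['a', 'b'], 'X2': ['f', 'g'], 'X3': ['c', 'd'], 'X4': ['e', 'h']}
# R[(Xi, Xj)] lists the compatible value pairs for constraint Cij.
R = {('X1', 'X2'): {('a', 'f'), ('b', 'g')},
     ('X1', 'X3'): {('a', 'c'), ('b', 'd')},
     ('X1', 'X4'): {('a', 'e')},
     ('X2', 'X3'): {('f', 'c'), ('g', 'd')},
     ('X2', 'X4'): {('f', 'e')},
     ('X3', 'X4'): {('c', 'e')}}

def backtrack(assignment, remaining):
    """Extend `assignment` over `remaining` variables, checking every
    binary constraint against the already-assigned variables."""
    if not remaining:
        return assignment
    Xi = remaining[0]
    for v in D[Xi]:
        if all((assignment[Xj], v) in R[(Xj, Xi)] for Xj in assignment):
            sol = backtrack({**assignment, Xi: v}, remaining[1:])
            if sol:
                return sol
    return None

print(backtrack({}, X))
# {'X1': 'a', 'X2': 'f', 'X3': 'c', 'X4': 'e'}
```

With d = 2 values per variable and n = 4 variables, the worst case visits d^n = 16 assignments, each checked against at most m = 6 constraints, matching the O(m·d^n) bound stated in the text.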
In this case, if d is the size of domains (the maximum number of values in the domains Di), the theoretical time complexity of search is then O(m·d^n). Consequently, many works try to improve the search efficiency. They mainly deal with binary CSPs. In (Freuder 1982), Freuder, considering the problem of finding one solution, gives a preprocessing procedure for selecting a good variable ordering prior to running the search. One of his main results is a sufficient condition for backtrack-free search. This condition concerns on one hand a structural property of the constraint graph, and on the other hand a local consistency. After (Freuder 1982), Dechter and Pearl (Dechter and Pearl 1988) give two classes of polynomially solvable CSPs. For example, they define a property: if a binary CSP is arc-consistent, and if its constraint graph is acyclic, then the CSP admits a solution and there is a backtrack-free search order. This property holds for n-ary CSPs with hypergraphs (Janssen et al. 1989).

Some methods use decomposition techniques based on structural properties of the CSP. These methods exploit the fact that the tractability of CSPs is intimately connected to the topological structure of their underlying constraint graphs. Moreover, these methods give an upper bound to the complexity of the problem, and therefore an upper bound to the search. The above property gives the goal of the transformation: given a CSP, the result must be another CSP, equivalent to the first one, whose structure is a tree. Two methods are based on this principle: the cycle-cutset method (Dechter 1990) and the tree-clustering scheme (Dechter & Pearl 1989).

The cycle-cutset method (CCM) is based on the notion of cycle-cutset. The cycle-cutset of a graph is a set of vertices such that the deletion of these vertices induces an acyclic graph. CCM is based on the fact that variable assignments change the effective connectivity of the constraint graph.
So, as soon as all the variables of the cycle-cutset are assigned, all the cycles of the constraint graph are cut. Therefore, the resulting problem is tree-structured and Freuder's theorem (Freuder 1982) can be applied to solve it. A property summarizes the method: if all the variables belonging to the cycle-cutset are instantiated, and if the resulting CSP is arc-consistent, then the problem admits solutions and a backtrack-free order. So, when searching for a solution, the size of the cycle-cutset corresponds to the height of the backtracking. More precisely, if K is the size of the cycle-cutset, the complexity of CCM is O(m·d^(K+2)). Tree-clustering (TC) consists in forming clusters of variables such that the interaction between the clusters is tree-structured. The hyper-edges of the induced constraint hypergraph are defined by the clusters of variables. The new CSP is equivalent to the first one, but the associated constraint hypergraph is acyclic, so the property concerning acyclic n-ary CSPs holds for this CSP. If E is the size of the maximal cluster, the complexity of TC is then O(n·E·d^E). If the constraint network is a complete graph, we have the equality E = K+2 = n. So, the complexity of these decomposition methods is the same as for classical backtracking, of the order of d^n. Consequently, complete constraint graphs (n-cliques) can be considered as hard instances of CSPs for decomposition methods. The decomposition method described in this paper offers a way to handle these hard CSPs, but can also be used on incomplete constraint graphs. It is based on a decomposition of the micro-structure of a CSP.

Micro-structure of CSPs

We call the micro-structure of a CSP the graph defined by the compatibility relations between variable-value pairs: the vertices are these pairs, and the edges are defined by pairs of compatible vertices.

Definition 1.
Given a binary CSP P = (X, D, C, R) such that (X, C) is a complete graph, the micro-structure of P, denoted μ(P), is the n-partite graph defined by:
- XD = {(Xi, a) / Xi ∈ X and a ∈ Di}
- CR = {{(Xi, a), (Xj, b)} / Cij ∈ C and (a, b) ∈ Rij}
- μ(P) = (XD, CR)

Figure 2. Micro-structure of the CSP given in Figure 1. [Figure not reproduced.]

Necessarily, μ(P) is an n-partite graph, because no edges can exist between vertices of the same domain. In the example of Figure 2, the sets {(X1,a),(X1,b)}, {(X2,c),(X2,d)}, {(X3,e),(X3,f)} and {(X4,g),(X4,h)} contain no edge between vertices associated with the same variable, i.e., within a set {(Xi,α), (Xi,β), ...}. If (X, C) is not a complete graph, i.e., there are two variables Xi and Xj such that no constraint Cij exists between them, μ(P) can be completed by adding the universal relation between these variables. The universal relation is the relation Rij = Di × Dj (all pairs of values are compatible). In this paper we always consider CSPs with a complete constraint graph. Given a CSP P = (X, D, C, R) and its micro-structure μ(P), we can derive a basic property.

Property 2. Given a CSP P = (X, D, C, R) and its micro-structure μ(P), we have: (a1, a2, ..., an) is a solution of P ⟺ {(X1,a1), (X2,a2), ..., (Xn,an)} is an n-clique of μ(P).

Proof: (a1, a2, ..., an) is a solution of P ⟺ ∀i, j, 1 ≤ i < j ≤ n, (ai, aj) ∈ Rij ⟺ ∀i, j, 1 ≤ i < j ≤ n, {(Xi,ai),(Xj,aj)} ∈ CR ⟺ {(X1,a1), ..., (Xn,an)} is an n-clique of μ(P). ∎

We remark that a solution of P corresponds to a covering of the n vertices of the constraint graph (X, C): there is exactly one vertex (Xi, a) for each domain Di, for i = 1, 2, ..., n. So, solving a CSP can be considered as the problem of finding an n-clique in its micro-structure. The method we present for the decomposition of domains is based on the topological analysis of the micro-structure, related to the existence of n-cliques.
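Definition 1 and Property 2 can be checked mechanically on a small instance. The sketch below builds the micro-structure as explicit vertex and edge sets and verifies that a tuple of values is a solution exactly when its variable-value pairs are pairwise connected. The three-variable CSP used here is hypothetical, invented for illustration.

```python
from itertools import combinations

# Hypothetical 3-variable binary CSP (the paper's Figure 1 is not reproduced).
domains = {"X1": {"a", "b"}, "X2": {"c", "d"}, "X3": {"e", "f"}}
relations = {
    ("X1", "X2"): {("a", "c"), ("b", "c")},
    ("X1", "X3"): {("a", "e"), ("b", "f")},
    ("X2", "X3"): {("c", "e"), ("c", "f")},
}

def micro_structure(domains, relations):
    # Definition 1: vertices are (variable, value) pairs; edges join
    # compatible pairs from distinct variables.
    vertices = {(x, a) for x, dom in domains.items() for a in dom}
    edges = {frozenset({(i, a), (j, b)})
             for (i, j), r in relations.items() for (a, b) in r}
    return vertices, edges

def is_n_clique(pairs, edges):
    # Property 2: an assignment is a solution iff its n vertices are
    # pairwise adjacent, i.e. form an n-clique of the micro-structure.
    return all(frozenset({u, v}) in edges for u, v in combinations(pairs, 2))
```

For this instance, {(X1,a),(X2,c),(X3,e)} is a 3-clique (hence a solution), while {(X1,a),(X2,c),(X3,f)} is not, because (a,f) is missing from the relation on (X1,X3).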
Solving CSPs by domain decomposition

We have seen that solving a CSP (finding one solution) can be considered as the problem of finding an n-clique in its micro-structure. This problem is known to be NP-hard (Karp 1972), but there are classes of graphs for which polynomial (indeed linear) algorithms have been defined. The method we present is based on one of these classes: triangulated graphs. So, some definitions and properties must be recalled.

Definition 3. A graph is triangulated iff every cycle of length at least four has a chord, i.e., an edge joining two non-consecutive vertices along the cycle.

Property 4. (Fulkerson & Gross 1965) A triangulated graph on n vertices has at most n maximal cliques (a clique is maximal iff it is not included in another clique).

Property 5. (Gavril 1972) The problem of finding all maximal cliques of a triangulated graph (X, C) is in O(n+m) if n = |X| and m = |C|.

Given the micro-structure of an arbitrary CSP, we cannot use these properties immediately, because a micro-structure is not necessarily a triangulated graph (e.g., the micro-structure in Figure 2). Nevertheless, it is possible to use these results: given any graph G = (X, C), it is possible to add new edges to C to obtain C', such that the graph T(G) = (X, C') is a triangulated graph. This addition of edges is called triangulation, and can be realized in linear time in the size of the graph (Kjærulff 1990).

Figure 3. Triangulation of the micro-structure of Figure 2; added edges are given by the dotted lines. [Figure not reproduced.]

After a triangulation, it is possible to apply Property 5. We show how this approach can be used here. Suppose we have a CSP P = (X, D, C, R) and its micro-structure μ(P) = (XD, CR). Consider a triangulated graph defined by a triangulation of (XD, CR). There are three classes of edges in T(XD, CR):
- edges {(Xi,a),(Xj,b)} already in μ(P), i.e.,
(a, b) is in Rij;
- edges {(Xi,a),(Xj,b)} with i ≠ j: adding such an edge corresponds to adding the tuple (a,b) to Rij;
- edges {(Xi,a),(Xi,b)}: adding such an edge has no semantics.

Since T(XD, CR) is a triangulated graph, we know that there are at most n·d maximal cliques in this graph (by Property 4) and that it is possible to find them with a linear algorithm (by Property 5). Furthermore, we know that if solutions exist, each one lies in a maximal clique of this triangulated graph; consequently, the search for solutions of P can be limited to the search for solutions of separate sub-problems, each one associated with a maximal clique.

Consider Y, a maximal clique in the triangulated graph T(XD, CR); two possibilities must be considered:
- Y is not a covering of all domains: there is at least one variable Xi of X that does not appear in the vertices (Xi, a) of Y. Consequently, the clique Y does not contain an n-clique that covers all domains, and so there is no solution in Y.
- Y is a covering of all domains. Given Y, we can induce a new CSP by the projection of the vertices of Y on each domain. So, we obtain a collection of domains DY,i such that DY,i ⊆ Di, each new domain DY,i being induced by the vertices (Xi, a) in Y. The constraints of the new CSP associated with Y are the old constraints, restricted to the values in the new domains. Searching for a solution can then be carried out on this new CSP. Nevertheless, the fact that Y is a covering of all domains does not guarantee that there is a solution, because the triangulation adds some new edges that connect vertices corresponding to incompatible values.

Theoretical foundations of the method are given below.

Definition 6. Given a binary CSP P = (X, D, C, R), its micro-structure μ(P) = (XD, CR), and Y a subset of XD.
The CSP induced by Y on P, denoted P(Y), is defined by:
- DY = {DY,1, ..., DY,n} such that DY,i = {a ∈ Di / (Xi,a) ∈ Y}
- RY,ij = {(a,b) ∈ Rij / (Xi,a), (Xj,b) ∈ Y}
- P(Y) = (X, DY, C, RY)

The theorem below defines the principle driving the domain decomposition:

Theorem 7. Given a binary CSP P = (X, D, C, R), its micro-structure μ(P), and Ψ = {Y1, ..., Yk}, the set of maximal cliques of T(μ(P)), we have:
Solutions(P) = ∪ (1 ≤ i ≤ k) Solutions(P(Yi))

Proof:
- By Property 2, we know that any solution of the problem P is associated with an n-clique. This n-clique is necessarily included in one set Yi, because in a graph each clique necessarily belongs to some maximal clique. Consequently, the considered solution of P is necessarily a solution of P(Yi).
- Every solution of a problem P(Yi) is a clique in μ(P), because all the edges of this clique are edges induced by compatible values in P. Consequently, every solution of P(Yi) is a solution of P. ∎

We remark that a solution of P(Yi) can also appear as a solution of another P(Yj). In Figure 4, we present the application of Theorem 7 to the example.

Figure 4. Applying Theorem 7 to the CSP of Figure 1.
Maximal cliques:
Y1 = {(X1,a),(X2,c),(X4,h)}
Y2 = {(X1,b),(X2,c),(X3,e),(X4,g)}
Y3 = {(X1,b),(X2,c),(X3,e),(X3,f)}
Y4 = {(X2,c),(X3,e),(X4,h)}
Y5 = {(X1,b),(X3,e),(X3,f),(X4,g)}
Decomposed domains:
DY1,1 = {a}, DY1,2 = {c}, DY1,3 = ∅, DY1,4 = {h}
DY2,1 = {b}, DY2,2 = {c}, DY2,3 = {e}, DY2,4 = {g}
DY3,1 = {b}, DY3,2 = {c}, DY3,3 = {e,f}, DY3,4 = ∅
DY4,1 = ∅, DY4,2 = {c}, DY4,3 = {e}, DY4,4 = {h}
DY5,1 = {b}, DY5,2 = ∅, DY5,3 = {e,f}, DY5,4 = {g}
The cliques Y1, Y3, Y4 and Y5 do not cover all the domains, so the induced sub-problems are not consistent. On the other hand, the clique Y2 induces a consistent sub-problem.

Algorithm:
1 - generation of μ(P)
2 - triangulation of μ(P); we obtain T(μ(P))
3 - search for all maximal cliques in T(μ(P)); the result of this step is Ψ = {Y1, ...,
Yk}
4 - for each Yi in Ψ: if Yi is a covering of all the domains in D, then solve P(Yi); else P(Yi) has no solution.

The first step is realized first by an enumeration of the values of all the domains, to obtain the vertices of μ(P), and second by an enumeration of all the compatible tuples of the relations, to obtain the edges of μ(P). If the problem P does not have a complete constraint graph, it can be transformed by adding the universal constraint between non-connected variables. The second step can be realized using triangulation algorithms; see (Kjærulff 1990). The maximal cliques can be obtained by the algorithm of Gavril (Gavril 1972) (Golumbic 1980). The last step first generates the problem P(Yi): it is sufficient to define the new domains based on the vertices in Yi. Finally, solving P(Yi) is possible with any classical method, such as standard backtracking.

Theoretical analysis

Complexity analysis

We first give some notation. Given P = (X, D, C, R) and its micro-structure μ(P) = (XD, CR):
- n is the number of variables
- d is the maximal number of values in the domains, i.e., ∀i, 1 ≤ i ≤ n, |Di| ≤ d
- N is the number of vertices in μ(P): N = Σ (1 ≤ i ≤ n) |Di| ≤ n·d
- m is the number of constraints
- M is the number of edges in μ(P): M ≤ N·(N−1)/2 < n²·d²
- p is the number of maximal cliques in T(μ(P))

The cost of step 1 of the algorithm is O(N+M). Nevertheless, if (X, C) is not a complete graph, the cost is O(n²·d²). The triangulation step (step 2) is linear in the size of the resulting graph: O(N+M'), if M' is the number of edges after triangulation. Necessarily, M ≤ M' < n²·d². The cost of finding all maximal cliques in T(μ(P)) is also linear: O(N+M'). By Property 4, we know that the number of maximal cliques p satisfies the inequality p ≤ N.
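The four algorithm steps can be put end to end on a toy instance. The sketch below is illustrative only: it triangulates with a naive fill-in along an arbitrary elimination order and enumerates maximal cliques with Bron-Kerbosch, where the paper assumes a good triangulation heuristic and Gavril's linear-time algorithm for chordal graphs; the CSP data are hypothetical.

```python
from itertools import combinations, product

# Hypothetical binary CSP used to exercise the four algorithm steps.
domains = {"X1": {"a", "b"}, "X2": {"c", "d"}, "X3": {"e", "f"}}
relations = {
    ("X1", "X2"): {("a", "c"), ("b", "c")},
    ("X1", "X3"): {("a", "e"), ("b", "f")},
    ("X2", "X3"): {("c", "e"), ("c", "f")},
}

def micro_structure(domains, relations):
    # Step 1: vertices are (variable, value) pairs, edges compatible pairs.
    V = {(x, a) for x, dom in domains.items() for a in dom}
    E = {frozenset({(i, a), (j, b)})
         for (i, j), r in relations.items() for (a, b) in r}
    return V, E

def adjacency(V, E):
    adj = {v: set() for v in V}
    for e in E:
        u, v = tuple(e)
        adj[u].add(v); adj[v].add(u)
    return adj

def triangulate(V, E):
    # Step 2 (naive fill-in): eliminate vertices in an arbitrary order,
    # connecting the remaining neighbours of each eliminated vertex.
    adj = adjacency(V, E)
    done = set()
    for v in sorted(V):
        nb = [u for u in adj[v] if u not in done]
        for a, b in combinations(nb, 2):
            adj[a].add(b); adj[b].add(a)
        done.add(v)
    return {frozenset({u, w}) for u in V for w in adj[u]}

def maximal_cliques(V, E):
    # Step 3: Bron-Kerbosch enumeration of all maximal cliques
    # (a stand-in for Gavril's linear-time chordal-graph algorithm).
    adj = adjacency(V, E)
    out = []
    def bk(r, p, x):
        if not p and not x:
            out.append(frozenset(r))
            return
        for v in list(p):
            bk(r | {v}, p & adj[v], x & adj[v])
            p.discard(v); x.add(v)
    bk(set(), set(V), set())
    return out

def solve(domains, relations):
    # Step 4: solve the sub-problem induced by each domain-covering maximal
    # clique and take the union of the solutions (Theorem 7).
    V, E = micro_structure(domains, relations)
    names = sorted(domains)
    solutions = set()
    for Y in maximal_cliques(V, triangulate(V, E)):
        dy = {x: [a for a in sorted(domains[x]) if (x, a) in Y] for x in names}
        if any(not vals for vals in dy.values()):
            continue                     # Y does not cover every domain
        for choice in product(*(dy[x] for x in names)):
            cand = dict(zip(names, choice))
            if all((cand[i], cand[j]) in r for (i, j), r in relations.items()):
                solutions.add(tuple(sorted(cand.items())))
    return solutions
```

Note that the final membership test is against the original relations, so fill-in edges added by the triangulation can never introduce spurious solutions, exactly as in the proof of Theorem 7.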
For the last step, we first evaluate the cost of solving one sub-problem P(Yj); it can be bounded by:
O(m · Π (1 ≤ i ≤ n) |DYj,i|)
So, the cost of the last step, i.e., the cost of solving all the sub-problems P(Yj), for j = 1, 2, ..., p, is:
O(m · Σ (1 ≤ j ≤ p) Π (1 ≤ i ≤ n) |DYj,i|)
This cost must be compared with the cost of standard backtracking on the initial problem, which is O(m · Π (1 ≤ i ≤ n) |Di|) = O(m·d^n). If we consider d = |Di| and δ = |DYj,i|, for i = 1, 2, ..., n and j = 1, 2, ..., p, the comparison between standard backtracking and domain decomposition is then
m·d^n vs. m·p·δ^n, or d^n vs. p·δ^n.
We know that p is bounded by n·d (cf. Property 4). So we compare the exponential terms d^n and δ^n. Suppose that the decomposition induces a simplification of the domains such that, for example, d = 2δ. The comparison is now d^n vs. [n·d·(1/2)^n]·d^n, because p·δ^n = p·(d/2)^n = p·(1/2)^n·d^n ≤ [n·d·(1/2)^n]·d^n. Consequently, the decomposition can be very interesting on instances for which this kind of hypothesis on d and δ holds, i.e., for problems such that n·d·(1/2)^n << 1.

Two trivial polynomial classes

CSPs with triangulated micro-structures. A first polynomial class induced by the domain decomposition is naturally the class of CSPs whose micro-structure is already triangulated:

Property 8. Let P be a CSP with micro-structure μ(P). If μ(P) is a triangulated graph, then the number of solutions of P is linear in the size of P, and there is a linear algorithm to find all solutions.

Proof. Applying the algorithm, the first step is linear in the size of P. The second step does not add any new edge to the micro-structure μ(P), because μ(P) is already a triangulated graph. The number of maximal cliques is linear in the size of μ(P), and thus in the size of P. Finding all these maximal cliques Y1, Y2, ..., Yp is linear in the size of P. Finally, all the induced sub-problems P(Yj) have at most one value in each of the domains DYj,i, for j = 1, 2, ..., p and i = 1, 2, ..., n.
Consequently, a search for any solution will be linear in the number of constraints, that is, in O(m). ∎

The interest of this polynomial class is principally that checking membership in it is linear in the size of any checked instance.

CSPs such that the triangulation of their micro-structure induces domains of size 1 or 2. We now consider the class of CSPs P such that the triangulation of their micro-structure μ(P) connects at most two values belonging to the same domain in every obtained maximal clique.

Property 9. Let P be a CSP with micro-structure μ(P). If in T(μ(P)) there is at most one new edge {(Xi,a),(Xi,b)} per domain Di in every new maximal clique, then there is a polynomial algorithm to solve P (searching for one solution).

Proof. After applying the triangulation algorithm to the micro-structure μ(P), the size of the domains in all the induced sub-problems P(Yj) is at most two. Consequently, all induced sub-problems can be solved by applying the result given in (Dechter 1992): one corollary of that theorem deals with binary CSPs with bivalent domains, and provides a polynomial method to solve this class of CSPs. ∎

For the same reason as for the first polynomial class, the interest of this class is also that checking membership is linear in the size of any checked instance. Moreover, one can observe that the first class is a subclass of the second: already triangulated graphs are graphs whose triangulation does not add any edge, and consequently the size of the domains induced by the maximal cliques is necessarily equal to 1.

Conclusion

We have proposed a new method to reduce domains in constraint satisfaction problems. This method is based on the analysis of the micro-structure of a CSP, i.e., the structure of the relations between compatible values of the domains.
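The bivalent-domain sub-problems of Property 9 can indeed be solved in polynomial time. One standard way to see this (an illustration, not the paper's own construction, nor necessarily Dechter's) is to reduce a binary CSP with two-valued domains to 2-SAT and solve it via the implication graph and its strongly connected components:

```python
from itertools import product

def solve_bivalent(domains, relations):
    """Solve a binary CSP in which every domain has exactly two values, in
    polynomial time, by reduction to 2-SAT (implication graph + SCCs).
    Illustrative sketch; returns an assignment dict, or None if unsatisfiable."""
    names = sorted(domains)
    vals = {x: sorted(domains[x]) for x in names}   # assumed size 2
    k = {x: i for i, x in enumerate(names)}
    n = len(names)
    # Boolean P_x means "x takes vals[x][1]". Literal encoding:
    # 2*k[x] is P_x, 2*k[x]+1 is not-P_x; l ^ 1 negates a literal.
    adj = {l: [] for l in range(2 * n)}
    radj = {l: [] for l in range(2 * n)}

    def add_clause(l1, l2):            # clause (l1 or l2)
        adj[l1 ^ 1].append(l2); radj[l2].append(l1 ^ 1)
        adj[l2 ^ 1].append(l1); radj[l1].append(l2 ^ 1)

    def neg_of(x, a):                  # literal asserting "x does NOT take a"
        return 2 * k[x] + (1 if vals[x].index(a) == 1 else 0)

    for (i, j), r in relations.items():
        for a, b in product(vals[i], vals[j]):
            if (a, b) not in r:        # each forbidden pair yields a 2-clause
                add_clause(neg_of(i, a), neg_of(j, b))

    # Kosaraju: first DFS pass records the finish order...
    order, seen = [], set()
    for s in range(2 * n):
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(adj[s]))]
        while stack:
            v, it = stack[-1]
            u = next(it, None)
            if u is None:
                order.append(v); stack.pop()
            elif u not in seen:
                seen.add(u); stack.append((u, iter(adj[u])))

    # ...second pass on the reverse graph labels SCCs in topological order.
    comp, cid = {}, 0
    for s in reversed(order):
        if s in comp:
            continue
        comp[s] = cid
        stack = [s]
        while stack:
            v = stack.pop()
            for u in radj[v]:
                if u not in comp:
                    comp[u] = cid; stack.append(u)
        cid += 1

    assignment = {}
    for x in names:
        p = 2 * k[x]
        if comp[p] == comp[p ^ 1]:
            return None                # P_x is equivalent to its negation
        # A literal is true iff its SCC comes later in topological order.
        assignment[x] = vals[x][1] if comp[p] > comp[p ^ 1] else vals[x][0]
    return assignment
```

The reduction and the SCC computation are both linear in the number of forbidden pairs, which gives the claimed polynomial bound for this special case.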
Given the micro-structure of a CSP, we have presented a scheme to decompose the domains of the variables, forming a set of sub-problems that necessarily have fewer values in their domains than the initial problem. This decomposition is driven by combinatorial properties of triangulated graphs. The complexity analysis of the method shows the theoretical advantages of the approach. Indeed, given a CSP P, if d is the size of the domains of the n variables, and if the problem is defined on m constraints, the complexity of any search like standard backtracking is O(m·d^n). We have shown that the method induces the complexity O(m·p·δ^n), with p the number of induced sub-problems (p is necessarily linear in the size of the problem P) and δ the size of the new domains, always satisfying δ ≤ d. Furthermore, two polynomial classes of CSPs have been defined, the recognition of their elements being linear in the size of the instances. Nevertheless, an experimental analysis must now be carried out to assess the practical interest of the approach. The decomposition method is at present defined only on binary CSPs. Nevertheless, an extension to n-ary CSPs is possible. One way to realize this extension consists in using the primal constraint graph. Suppose we have an n-ary CSP with a constraint C1 over three variables, i.e., C1 = {Xi, Xj, Xk}. To generate the micro-structure, we consider three binary constraints: Cij, Cik and Cjk. The associated relations are Rij = R1[(Xi,Xj)], Rik = R1[(Xi,Xk)] and Rjk = R1[(Xj,Xk)]. This primal representation is not equivalent to the initial n-ary CSP, because the new problem is less constrained. But it is sufficient to realize the domain decomposition, since the constraints finally considered to solve the initial CSP will be the initial n-ary constraints, with possibly smaller domains.

Acknowledgements

I would like to thank Philippe Janssen for the helpful pastaga discussion we had on the method described in this paper.
References

Dechter, R., Enhancement Schemes for Constraint-Satisfaction Problems: Backjumping, Learning and Cutset Decomposition, Artificial Intelligence, 41 (1990) 273-312.
Dechter, R. & Pearl, J., Network-Based Heuristics for Constraint-Satisfaction Problems, Artificial Intelligence, 34 (1988) 1-38.
Dechter, R. & Pearl, J., Tree Clustering for Constraint Networks, Artificial Intelligence, 38 (1989) 353-366.
Freuder, E.C., A Sufficient Condition for Backtrack-Free Search, JACM, 29-1 (1982) 24-32.
Fulkerson, D.R. & Gross, O., Incidence Matrices and Interval Graphs, Pacific J. Math., 15 (1965) 835-855.
Gavril, F., Algorithms for Minimum Coloring, Maximum Clique, Minimum Covering by Cliques, and Maximum Independent Set of a Chordal Graph, SIAM J. Comput., 1-2 (1972) 180-187.
Golumbic, M.C., Algorithmic Graph Theory and Perfect Graphs, Academic Press, New York (1980).
Janssen, P., Jégou, P., Nouguier, B. & Vilarem, M.C., A Filtering Process for General Constraint Satisfaction Problems: Achieving Pairwise-Consistency Using an Associated Binary Representation, Proc. IEEE Workshop on Tools for Artificial Intelligence, Fairfax, USA (1989) 420-427.
Karp, R.M., Reducibility Among Combinatorial Problems, in Complexity of Computer Computations, Miller & Thatcher, Eds., Plenum Press, New York (1972) 85-103.
Kjærulff, U., Triangulation of Graphs: Algorithms Giving Small Total State Space, Research Report, Aalborg, Denmark (1990).
Case-Based
Edwina L. Rissland, Jody J. Daniels, Zachary Rubinstein, and
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
rissland@cs.umass.edu

Abstract

In this project we study the effect of a user's high-level expository goals upon the details of how case-based reasoning (CBR) is performed and, vice versa, the effect of feedback from CBR on them. Our thesis is that case retrieval should reflect the user's ultimate goals in appealing to cases, and that these goals can be affected by the cases actually available in a case base. To examine this thesis, we have designed and built FRANK (Flexible Report and Analysis System), which is a hybrid blackboard system that integrates case-based, rule-based, and planning components to generate a medical diagnostic report that reflects a user's viewpoint and specifications. FRANK's control module relies on a set of generic hierarchies that provide taxonomies of standard report types and problem-solving strategies in a mixed-paradigm environment. Our second focus in FRANK is on its response to a failure to retrieve an adequate set of supporting cases. We describe FRANK's planning mechanisms that dynamically re-specify the memory probe or the parameters for case retrieval when an inadequate set of cases is retrieved, and give an extended example of how the system responds to retrieval failures.

Introduction

This project places case-based reasoning (CBR) in a workaday context, as one utility for gathering, analyzing, and presenting information in service of a user's high-level task and viewpoint. A user's ultimate task might be to prepare a medical consultation as a specialist to an attending physician, to write a legal memorandum as a lawyer to a client, or to create a policy brief as an advisor to a decision-maker.
For each task, what the writer (and his or her audience) plans to do with the information gathered affects the kind of information desired, the way it is found and analyzed, and the style in which it is presented. For instance, to generate a balanced, "pro-con" analysis of a situation, one would present in an even-handed manner the cases, simulations, and/or other analyses that support the various points of view. On the other hand, to create a "pro-position" report that advocates one course of action over all others, one would present information deliberately biased toward that point of view. Furthermore, if, in either situation, the retrieved cases only partially or meagerly support the intended presentation form, the user may have to temper his or her high-level goal by the information actually found, perhaps to the extent of radically revising a presentation stance or even abandoning it. Such revision may be required, for instance, if the cases destined for a balanced report are heavily skewed toward one side of an argument, or if compelling cases for an opposing viewpoint subvert a proposed one-sided presentation. To accommodate a variety of user task orientations, strategies, and viewpoints, we have designed and implemented a blackboard architecture that incorporates case-based and other reasoning mechanisms, a hierarchy of "reports" appropriate to different tasks, and a flexible control mechanism to allow the user's top-level considerations to filter flexibly throughout the system's processing. Our system, which is called FRANK (Flexible Report and Analysis System), is implemented using the Generic Blackboard toolkit (GBB) [Blackboard Technology Group, Inc., 1992] in the application domain of back-injury diagnosis.

*This work was supported in part by the National Science Foundation, contract IRI-890841, and the Air Force Office of Sponsored Research under contract 90-0359.
Specifically, our goals in pursuing this project focus on two kinds of evaluation and feedback:
1. To investigate the effect of a failure to find useful cases upon the current plan or the user's task orientation, and, vice versa, the effects of the context provided by the user's task and viewpoint on case retrieval and analysis.
2. To build a CBR subsystem that can dynamically change its case retrieval mechanisms in order to satisfy a failed query to case memory.

We first give a broad sense of FRANK's overall architecture in the System Description and Implementation section, where we describe its control and planning mechanisms, particularly the two kinds of evaluation and feedback within the system. That section also describes the task hierarchies that are used by the control modules of the system: a reports hierarchy, a problem-solving strategies hierarchy, and a presentation strategies hierarchy. We follow this with an extended example where we present a scenario of FRANK's
Control is provided by a planner that selects an appropriate plan from its library and then performs the hierarchical planning needed to instantiate it. Plan selection is based on the report type. Domain reasoning capabilities currently implemented include a CBR module with several processing options (e.g., similarity metrics) and an 0PS5 production system, as well as knowledge sources that incorporate procedural reasoning. The domain rea- soning capabilities are flexibly invoked by the planner to execute the plan. In particular, different types of case re- trieval probes are created as necessary to complete query tasks set up by the plan. Finally, a report generator uses rhetorical knowledge to generate a report for the user. To support the various components, we have developed several hierarchies, which we describe next. control [*I Figure 1: Overview of FRANK Architecture ierarchies To support different expository goals, we have devised three hierarchies. The first hierarchy - the Report hierarchy - dif- ferentiates among reports based on expository goals. The second - the Problem-Solving Strategies hierarchy - rep- resents the different problem-solving strategies inherent in finding, analyzing, and justifying the data that go into a report. A third hierarchy -the Presentation Strategies hier- archy -contains the methodologies and policies for present- ing the material in its final form. The first two hierarchies support the planner, while the third helps guide report gen- eration. Our first consideration in classifying reports is their overall goals. This is reflected in the first level in our hierarchy (see Figure 2). Reports are categorized based on whether they are argumentative or summarizing in nature, although in this paper we discuss the argumen- tative reports only. Argumentative reports are further sub- divided into those that take a neutral stance and those that are pro-position, that is, endorse particular positions. 
Further subdivisions within the argumentative reports that take a neutral stance differentiate between reports that provide conclusions and those that do not.

Figure 2: Report Hierarchy (partial). [The figure's leaf nodes include report types such as Consult with Conclusion, Diagnosis Memo, Judicial Opinion, Referral, Legal Analysis, Diagnosis Own Merit, Private Letter Ruling, Legal Brief, Diagnosis with Alternatives, Private Letter Ruling with Alternatives, and Legal Brief with Alternatives.]

Within the portion of the hierarchy that supports pro-position argumentative reports, there is a subdivision between reports that justify a position based solely on similar resulting conclusions (the on-own-merit category) and those that justify by additionally examining and discounting possible alternatives (the elevate-above-alternatives category). Examples of reports in these two subcategories are medical reports written from a pro-position viewpoint where there is a predisposition toward a particular conclusion: the Diagnosis-Own-Merit and the Diagnosis-with-Alternatives reports. A Diagnosis-Own-Merit report justifies a diagnosis, in part, by drawing analogies between the current situation and like cases. A Diagnosis-with-Alternatives report not only draws analogies to like cases but also discounts or distinguishes alternative diagnoses. Besides these reports from the medical domain, our report hierarchy contains similarly categorized reports for law [Statsky and Wernet, 1984] and policy analysis. Associated with each report on a leaf node of this hierarchy is a list of groups of strategies. Each group serves as a retrieval pattern for accessing plans to carry out the processing needed to create the report. Currently, the plan that matches the greatest number of strategies is selected first.

Problem-Solving Strategies Hierarchy.
Problem-solving strategies encode knowledge about how to perform the analysis necessary to generate a report. These strategies provide guidance on such matters as how to deal with contraindicative data and anomalies, the domain indices (e.g., factors) to use, how extensively to pursue the search for relevant data, what methodologies to use when correlating the data (e.g., pure CBR or CBR with rule-based support), and whether to include or exclude arguments that support alternative conclusions (see Figure 3).

Figure 3: Problem-Solving Strategies Hierarchy (partial). [The figure's nodes include justification techniques such as Emphasize Strengths, Ignore Alternatives, Ignore Weaknesses, Mitigate Weaknesses, Explain Weaknesses, No Competitors, Strength vs. Strength, and Best Combination.]

Presentation Strategies Hierarchy. Presentation strategies guide the system in which aspects of a case to discuss and how to do so. They cover how to handle contraindicative information and anomalies in the output presentation, as well as how to report weaknesses in a position. Presentation strategies also suggest alternative orders for the presentation of material within a report. Example presentation strategies are: (1) give the strongest argument first while ignoring alternatives; (2) state alternatives' weaknesses, then expound on the strengths of the desired position; or (3) concede weaknesses if unavoidable and do not bother discussing anomalies.

Control Flow

Top-Level Control. The top-level control flow in FRANK is straightforward. Each processing step in FRANK corresponds to the manipulation by knowledge sources (KSs) of data (units) in its short-term memory, which is implemented as a global blackboard (see Figure 4). The following knowledge sources represent the steps in the top-level control:

1. Create-Input-KS: Initially, FRANK is provided with a problem case, the domain, and user preferences for problem-solving and report presentation.
The user may also specify the report type and a position to take as part of the preferences. This information is stored on an Input unit.

2. Process-Input-KS: This KS analyzes the problem case for quick, credible inferences that are then also stored on the Input unit. In addition, it creates a Report-Envelope unit that contains pointers to the Input unit, the problem case, and the domain, and has additional slots to maintain information about the current state of the problem solving. The Report-Envelope unit represents the context for the current problem-solving session.

3. Select-Report-KS: Using information from the Input unit and the Report-Envelope unit, this KS selects a report type and stores it on the Report-Envelope. Currently, the preferred report type must be input by the user.

4. Extract-Strategies-KS: Associated with each report type is a list of groups of strategies; these groups act as indices into a library of plans. This KS extracts a group from the list of groups and adds it to a Report-Envelope unit.

5. Select-Plan-KS: Using the extracted group of strategies, this KS selects and then stores a specific plan on the Report-Envelope unit. Initially, the plan having the greatest overlap of strategies with the Report-Envelope strategies is selected. (The selection process is described in the Plan Re-Selection section.)

6. Execute-Plan-KS: This KS instantiates the selected plan into a set of Goal units. The plan is executed by activating the top-level Goal unit. Leaf Goal units can use a variety of reasoning mechanisms, such as model-based reasoning, OPS5, procedural reasoning, or CBR, to achieve their respective tasks.

Figure 4: Top-Level Control Flow. [The figure shows the KSs (create-input-ks, process-input-ks, ..., evaluate-plan-integrity-ks, generate-report-ks) operating on blackboard spaces: Input, Completed Input, Report Envelope, Plans, Tasks, Reports, and Evaluation.]
The first step in all plans is to create a Report unit that specifies the presentation template to be used and contains slots for the necessary information to complete that template.

7. Evaluate-Plan-Integrity-KS: Upon plan completion, the results are evaluated by this KS to determine whether the overall cohesiveness of the report is acceptable. Various alternatives, such as switching the plan or the report type, are available should the results be unacceptable.

8. Generate-Report-KS: The report is generated by filling out the template with the information stored on the Report unit.

The planning mechanism directs the system's overall efforts to achieve the top-level expository goal. Plans can have dependent, independent, ordered, and unordered goals. Goal parameters may be inherited from supergoals and updated by subgoals. Ultimately, leaf goals invoke tasks through specified KS triggerings. Like plan goals, KSs can be dependent, independent, ordered, or unordered. The KSs triggered by the leaf goals may use any of a variety of reasoning paradigms suitable for solving the corresponding leaf goal. A goal may be solved by procedural, rule-based, model-based, or case-based reasoning. Procedural and model-based reasoning are Lisp and CLOS modules. The rule-based element is OPS5. The CBR component is the CBR-task mechanism, described below.

Evaluation and Feedback

The analysis and report generation process has several layers. From the highest-level processing abstraction down to the actual executables, these are: reports, plans, goals/subgoals, and tasks/queries (see Figure 5). Currently, a user selects the report type, which indexes an initial plan based on a group of problem-solving strategies. The plan consists of a hierarchy of goals, with leaf goals submitting tasks or queries for execution. The tasks/queries are the methodology-specific plan steps, such as making inferences using rule-based reasoning (RBR) or finding the best cases using CBR.
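The selection criterion used by Select-Plan-KS (greatest strategy overlap with the Report-Envelope, with untried plans preferred on re-selection) can be sketched as follows. All plan and strategy names here are hypothetical stand-ins, not FRANK's actual vocabulary.

```python
# Hypothetical plan library: each plan is indexed by the problem-solving
# strategies it implements (names invented for illustration).
PLAN_LIBRARY = {
    "plan-own-merit": {"ignore-alternatives", "emphasize-strengths"},
    "plan-with-alternatives": {"with-alternatives", "strength-vs-strength"},
    "plan-balanced": {"explain-weaknesses", "best-combination"},
}

def select_plan(envelope_strategies, library, failed=()):
    """Return the untried plan sharing the most strategies with the
    Report-Envelope, or None when every plan has already failed."""
    candidates = {p: s for p, s in library.items() if p not in failed}
    if not candidates:
        return None
    return max(candidates,
               key=lambda p: len(candidates[p] & set(envelope_strategies)))

wanted = {"ignore-alternatives", "emphasize-strengths"}
first = select_plan(wanted, PLAN_LIBRARY)
retry = select_plan(wanted, PLAN_LIBRARY, failed={first})
```

Excluding failed plans from the candidate set is one simple way to model the priority-based re-selection loop described in the Plan Re-Selection discussion.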
Replanning may be done at each level to achieve the report's expository goals. There is a need to provide evaluation and feedback throughout the entire process of gathering, analyzing, and presenting information, rather than waiting until a report is finished to review the final product. Lower-level tasks attempt to rectify any problems they observe in order to lessen the impact on higher-level ones. However, if a process at a lower level has exhausted all its possible remedies, then it returns that feedback to its superior, which can then try a different approach.

This type of feedback occurs at all levels of the system. For example, if a report relies on a particular type of information, like good supporting cases to make a pro-position argument, and the necessary information is not found through an (initial) query, then the report (initially) fails at the lowest level. Immediate reparations (e.g., changing particular CBR retrieval parameters) could be made at this level based on evaluation feedback to allow processing to resume without having to abort the whole reporting process.

The mechanism supporting this evaluation-feedback cycle is modeled on operating system vectored interrupts. In our case, the interrupts are unmet expectations detected by goals, and the interrupt service routines (ISRs) are the remedies for the various interrupts. Instead of maintaining a single, global table of ISRs, FRANK supports multiple tables, found at the various levels, to permit specialization. When an interrupt occurs, the most local ISR is found by looking first at the table associated with the current goal, then the table of the next supergoal, and so on. If no ISR is found, then a global table is used. While the global table is created at system initialization, the goal ISR tables are specified as part of the goal definition and are created at goal instantiation time.

[Figure 5: Levels of FRANK. The report hierarchy runs from the Report Level through the Plan Level and Goal Level down to the Task Level.]

Report Failures. A report fails only after all the possible plans for it have failed. If the user has not requested a specific report type, FRANK will automatically switch to a potentially more suitable type based on feedback from the failed plans. Otherwise, the user is prompted concerning the deficiencies in the report and he or she can request a new report type.

Plan Re-Selection. There are two general ways to select a new plan when the current plan fails. In the first, a priority-based (or local search of plans) approach, if there are more plans available under the current group of strategies stored on the Report-Envelope unit, the system can use one of these. Failing that, the system checks if there are any other groupings of strategies available under the report type to use as indices in selecting a new plan. Finally, if no other plans are available, then failure occurs, the problem(s) are noted, and the system attempts to change the report type.

The second method of selecting a new plan is to use information about the failure to select a better alternative. For example, the Diagnosis-with-Alternatives report type requires cases supporting the advocated position to compare favorably to cases supporting alternative diagnoses. If no cases supporting the advocated position can be found, then no plan associated with the Diagnosis-with-Alternatives report type will be successful. If this failure occurs, the system switches the report type to Diagnosis-Own-Merit, which does not require supporting cases for the advocated position.

Case-Based Reasoning 69

Leaf Goals and CBR Methodologies. FRANK supports several CBR methodologies. Currently, two basic methodologies have been implemented: nearest-neighbor and HYPO-style CBR [Ashley, 1990]. Each CBR methodology brings with it different means of retrieving cases, measuring similarity, and selecting best cases.
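The vectored-interrupt lookup described in this section, searching the current goal's ISR table, then its supergoals' tables, then the global table, can be sketched as follows. This is a hypothetical Python illustration; the goal names, interrupt names, and remedies are invented to mirror the extended example.

```python
# Sketch of FRANK's most-local-first ISR lookup. Each goal carries its own
# ISR table and a parent link; the global table is the fallback.

GLOBAL_ISRS = {"too-strong-alternatives": "switch-report-type"}

def find_isr(goal, interrupt, goals):
    # goals: name -> {"isrs": {interrupt: remedy}, "parent": name or None}
    g = goal
    while g is not None:                      # walk up through supergoals
        isrs = goals[g]["isrs"]
        if interrupt in isrs:
            return isrs[interrupt]            # most local handler wins
        g = goals[g]["parent"]
    return GLOBAL_ISRS.get(interrupt)         # fall back to the global table

goals = {
    "report": {"isrs": {}, "parent": None},
    "find-best-cases": {"isrs": {"no-cases": "weaken-best-case-definition"},
                        "parent": "report"},
}
```

A goal-local interrupt like `no-cases` is handled at the subgoal, while an interrupt with no goal-level handler falls through to the global table.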
Having multiple types of CBR available allows the system to invoke each type to support the others and to lend credibility to solutions. The flexibility of having different CBR approaches to draw upon also allows the system to apply the best type in a particular context. For example, if no importance rankings can be attached to a set of input-level case features, but a collection of important, derived factors can be identified, a HYPO claim lattice can identify a set of best cases, whereas nearest-neighbor retrieval based on input-level features may be less successful if many features are irrelevant. A HYPO claim lattice is a data structure used to rank cases in a partial ordering by similarity to a problem situation according to higher-level domain factors [Ashley, 1990].

FRANK tries to satisfy the current leaf goal's CBR requirement with the most suitable methodology. Should feedback indicate that another methodology may be better suited to the problem, FRANK automatically makes the transition while retaining the feedback about each method. Should no method be able to satisfy the requirement, or only partially satisfy it, then the higher-level goals receive that feedback and decide how to proceed.

CBR-Task Mechanism. The CBR-task mechanism is one of the KSs operating at the lowest level of the planning hierarchy. It controls queries to the case base. Depending on the plan being used to generate a report, a CBR query may be more or less specific or complete. By grouping queries into classes according to what they are asking and what they need as input, this mechanism is able to fill out partial queries. It completes them with viable defaults and then submits them. If a query is unsuccessful, then the CBR-task mechanism alters the values it initially set and resubmits the query, unless prohibited by the user. Again, this level of processing provides feedback concerning the success, partial success, or failure to the next higher process.
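The CBR-task mechanism's default-filling and resubmission behavior can be sketched as a small retry loop. This is a hypothetical Python illustration, not FRANK's code; the slot names, default values, and the toy retrieval function are invented.

```python
# Sketch of the CBR-task mechanism: fill unassigned query slots with viable
# defaults, submit, and on failure alter only the values the mechanism
# itself set, then resubmit.

DEFAULTS = {"near-misses": "no", "mopc-definition": "maximal-overlap"}
ALTERNATIVES = {"near-misses": ["yes"], "mopc-definition": ["claim-lattice"]}

def run_query(partial_query, retrieve):
    query = {**DEFAULTS, **partial_query}      # complete the partial query
    cases = retrieve(query)
    if cases:
        return cases, query
    for slot, alternatives in ALTERNATIVES.items():
        if slot in partial_query:
            continue                           # user-specified: leave alone
        for value in alternatives:
            retry = {**query, slot: value}     # alter one default and retry
            cases = retrieve(retry)
            if cases:
                return cases, retry
    return [], query                           # report the failure upward

# A toy case base that only answers claim-lattice queries:
cases, used = run_query(
    {}, lambda q: ["c1"] if q["mopc-definition"] == "claim-lattice" else [])
```

In the real system the sequence of alterations is richer (and the user can prohibit resubmission); the loop above only shows the completion-and-retry shape.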
Extended Example

The following example demonstrates some of the flexibility of FRANK. In particular, it shows evaluation and reparation at the various levels of the system. The overall motivation for this example is to illustrate how top-level goals influence the type of CBR analysis done and how results during CBR analysis can affect the top-level goals.

Suppose the user wants a pro-position report justifying the diagnosis of spinal stenosis for a problem case. The first step is to select a report type and problem-solving strategies. Given the user input, FRANK selects a pro-position report type called Diagnosis-with-Alternatives. Since the user does not specify a problem-solving strategy, a default one is used. FRANK selects stronger problem-solving strategies when given a choice. In this case, the default strategy is to make an equitable comparison of an advocated position and the viable alternatives. In particular, the advocated position is considered justified if it is supported by "Best Cases."

There are a variety of definitions for "Best Case" and, as for the problem-solving strategies, FRANK is predisposed to selecting stronger (less inclusive) definitions over weaker (more inclusive) ones. Initially, a Best Case must satisfy three criteria: (1) be a Most On-Point Case (MOPC), (2) support the advocated position, and (3) not share its set of dimensions with other equally on-point cases that support an alternative position (i.e., there can be no equally on-point "competing" cases). In turn, FRANK currently has two definitions for a MOPC: (1) a case that shares the maximal number of overlapping symptoms with the problem situation ("maximal overlap"), or (2) a case in the first tier of a HYPO claim lattice. (These definitions are distinct because cases in the first tier of a claim lattice can share different subsets of dimensions with the problem and these subsets may have different cardinalities.)
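The three Best Case criteria can be sketched as a filter over retrieved cases. This is a hypothetical Python illustration using the "maximal overlap" MOPC definition and treating "equally on-point" simply as equal symptom overlap; the case data are invented.

```python
# Sketch of the initial Best Case test: (1) MOPC by maximal symptom overlap,
# (2) supports the advocated position, (3) no equally on-point case supports
# an alternative position.

def overlap(case, problem):
    return len(set(case["symptoms"]) & set(problem))

def best_cases(cases, problem, position):
    most = max(overlap(c, problem) for c in cases)
    mopcs = [c for c in cases if overlap(c, problem) == most]     # criterion 1
    return [c for c in mopcs
            if c["diagnosis"] == position                          # criterion 2
            and not any(o["diagnosis"] != position for o in mopcs)]  # criterion 3

problem = ["leg-pain", "numbness", "age>60"]
cases = [
    {"name": "A", "diagnosis": "spinal-stenosis",
     "symptoms": ["leg-pain", "numbness"]},
    {"name": "B", "diagnosis": "herniated-disc",
     "symptoms": ["leg-pain", "numbness"]},
]
```

With both cases present, the equally on-point competing case B blocks A from being a Best Case; with A alone, A qualifies.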
FRANK uses the default problem-solving strategy and the above Best Case definition to select a plan associated with the Diagnosis-with-Alternatives report type. The selected plan generates goals for finding the Best Cases, collecting the diagnoses associated with them, and then comparing and contrasting these cases.

First Query. The subgoal for finding the Best Cases creates a Best-Cases query, specifying the above Best Case definition. Two other case retrieval parameters, (1) whether nearly applicable ("near miss") dimensions are considered during case retrieval and (2) the MOPC definition, are unassigned and are set by the CBR-task mechanism to the defaults of "no near misses" and the maximal overlap definition. The first attempt to satisfy the query results in no Best Cases because all the MOPCs found support diagnoses other than spinal stenosis, thereby violating criterion (3) of the Best Case definition.

Local Modifications. The CBR-task mechanism then alters one of the parameters it set, in this case allowing near misses, and resubmits the query. Again, no Best Cases are found because of competing cases, so the CBR-task mechanism changes the MOPC definition from "maximal overlap" to "HYPO claim lattice" and resubmits the query. The CBR-task mechanism continues to alter the parameters it has control over and resubmit the query until either some Best Cases are found or it has exhausted the reasonable combinations it can try. In this example, the query fails to find any Best Cases and returns "no cases" back to the subgoal.

CBR-Task Mechanism Interrupt. When the query returns "no cases" back to the subgoal to find Best Cases, an interrupt is generated to handle the failure. The interrupt is caught by the ISR table related to the subgoal and a remedy to weaken the definition of Best Cases is tried.

New Best Case Definition.
The Best-Cases query is modified to remove Best Case criterion (3) above, to include as Best Cases those for which equally on-point competing cases may exist. The revised query is submitted and, this time, it returns a set of Best Cases. The subgoal is marked as satisfied and the next goal of the plan is activated.

The next subgoal analyzes each diagnosis used in the set of MOPCs to determine which symptoms of the problem case are and are not covered by the diagnosis. That is, a table is created in which each row contains a viable diagnosis and the columns are the problem case's symptoms. If there is a MOPC for a diagnosis and the MOPC shares the problem case's symptom, then the entry is marked.

Since the current plan compares the strengths of the diagnoses, the symptoms covered by spinal stenosis are compared to the symptoms covered by the alternative diagnoses. The strategy employed here is that if a symptom is only found in the MOPCs supporting spinal stenosis, then the importance of that symptom is elevated. Unfortunately, in this example, all of the symptoms covered by the spinal stenosis diagnosis are also covered by other diagnoses.

Global Interrupt. Because there are no distinguishing symptoms, at this point an interrupt is generated signifying that the alternatives are too strong. The "too strong alternatives" interrupt is caught by the global ISR table and the corresponding remedy is tried: switch to a report type based on the position's own merits. Since the user initially requested a comparison of her position against alternatives, FRANK asks the user if it can make a switch from Diagnosis-with-Alternatives. The user agrees and FRANK selects the Diagnosis-Own-Merit report type. FRANK now selects and instantiates a plan associated with the Diagnosis-Own-Merit report type. Suppose that this time the plan does complete satisfactorily.
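The symptom-coverage table built by this subgoal, and the search for distinguishing symptoms, can be sketched as follows. This is a hypothetical Python illustration (one MOPC row per diagnosis, invented data mirroring the example).

```python
# Sketch of the diagnosis-by-symptom coverage table. A symptom covered only
# by the advocated diagnosis is "distinguishing" and would have its
# importance elevated.

def coverage_table(mopcs, symptoms):
    # rows: diagnosis -> which problem symptoms its MOPC shares
    return {m["diagnosis"]: {s: s in m["symptoms"] for s in symptoms}
            for m in mopcs}

def distinguishing(table, advocated):
    return [s for s, covered in table[advocated].items()
            if covered and not any(row[s] for d, row in table.items()
                                   if d != advocated)]

symptoms = ["leg-pain", "numbness"]

# As in the example: every symptom covered by spinal stenosis is also
# covered by an alternative, so nothing distinguishes the advocated position.
mopcs = [{"diagnosis": "spinal-stenosis", "symptoms": ["leg-pain", "numbness"]},
         {"diagnosis": "herniated-disc", "symptoms": ["leg-pain", "numbness"]}]
table = coverage_table(mopcs, symptoms)

# A variant where "numbness" is covered only by the advocated diagnosis:
mopcs2 = [{"diagnosis": "spinal-stenosis", "symptoms": ["leg-pain", "numbness"]},
          {"diagnosis": "herniated-disc", "symptoms": ["leg-pain"]}]
table2 = coverage_table(mopcs2, symptoms)
```

An empty result from `distinguishing` is exactly the condition that raises the "too strong alternatives" interrupt.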
The resulting data representation of the justification is used by the text generation module to create the actual report and present it to the user.

Related Research

This work extends our previous work on case-based reasoning, mixed-paradigm reasoning, and argumentation, particularly our work on hybrid-reasoning systems that use a blackboard to incorporate a CBR component, including ABISS [Rissland et al., 1991] and STICKBOY [Rubinstein, 1992]. FRANK uses opportunistic control analogous to HEARSAY-II [Erman et al., 1980] to better incorporate both top-down and bottom-up aspects of justification than in our previous, rule-based approach to control in CABARET [Rissland and Skalak, 1991]. FRANK also extends our task orientation from mostly argumentative tasks, as in HYPO and CABARET, to more general forms of explanation, justification, and analysis. Other mixed-paradigm systems using blackboard architectures to incorporate cases and heterogeneous domain knowledge representations are the structural redesign program FIRST [Daube and Hayes-Roth, 1988], and the Dutch landlord-tenant law knowledge-based architectures PROLEXS [Walker et al., 1991] and EXPANDER [Walker, 1992].

ANON [Owens, 1989] uses an integrated top-down and bottom-up process to retrieve similar cases. Abstract features are extracted from a current problem and each feature is used to progressively refine the set of similar cases. As the set of similar cases changes, it is used to suggest the abstract features that may be in the current problem and used for further refinement.

TEXPLAN [Maybury, 1991], a planner for explanatory text, provides a taxonomy of generic text types, distinguished by purpose and their particular effect on the reader. This system also applies communicative plan strategies to generate an appropriately formed response corresponding to a selected type of text.
TEXPLAN is designed as an addition to existing applications, rather than as an independent domain problem solver.

While FRANK explains failures as part of the evaluation and reparation it performs at various levels, the explanation is not used to determine the appropriateness of a case as in CASEY [Koton, 1988] and GREBE [Branting, 1988], nor is it used to explain anomalies as in TWEAKER [Kass and Leake, 1988] and ACCEPTER [Kass and Leake, 1988]. FRANK's use of explanation in plan failure is similar to CHEF's [Hammond, 1989] in that it uses the explanation of a failure as an index into the possible remedies. However, CHEF's explanation is provided by a domain-dependent causal simulation, whereas FRANK's failure analysis is based on the generic performance of its own reasoning modules, such as the failure of the CBR module to retrieve an adequate collection of supporting cases.

Our general focus in this paper has been the interaction between a user's high-level expository goal and its supporting subgoal tasks, such as to retrieve relevant cases. Having set ourselves two research goals in the introduction, we have shown first how the FRANK system, a hybrid blackboard architecture, can create diagnostic reports by tailoring case-based reasoning tasks to the user's ultimate goals and viewpoint. In particular, we have given an example of how FRANK uses feedback from tasks such as CBR to re-select a plan. Finally, in pursuit of our second research goal, we have demonstrated how FRANK can re-specify the way case retrieval is performed to satisfy a plan's failed request for case support.

References

Ashley, Kevin D. 1990. Modelling Legal Argument: Reasoning with Cases and Hypotheticals. M.I.T. Press, Cambridge, MA.

Blackboard Technology Group, Inc., 1992. GBB Reference: Version 2.10. Amherst, MA.

Branting, L. Karl 1988. The Role of Explanation in Reasoning from Legal Precedents. In Proceedings, Case-Based Reasoning Workshop, Clearwater Beach, FL.
Defense Advanced Research Projects Agency, Information Science and Technology Office. 94-103.

Daube, Francois and Hayes-Roth, Barbara 1988. FIRST: A Case-Based Redesign System in the BB1 Blackboard Architecture. In Rissland, Edwina and King, James A., editors 1988, Case-Based Reasoning Workshop, St. Paul, MN. AAAI. 30-35.

Erman, Lee D.; Hayes-Roth, Frederick; Lesser, Victor R.; and Reddy, D. Raj 1980. The Hearsay-II Speech-Understanding System: Integrating Knowledge to Resolve Uncertainty. Computing Surveys 12(2):213-253.

Hammond, Kristian J. 1989. Case-Based Planning: Viewing Planning as a Memory Task. Academic Press, Boston, MA.

Kass, Alex M. and Leake, David B. 1988. Case-Based Reasoning Applied to Constructing Explanations. In Proceedings, Case-Based Reasoning Workshop, Clearwater Beach, FL. Defense Advanced Research Projects Agency, Information Science and Technology Office. 190-208.

Koton, Phyllis A. 1988. Using Experience in Learning and Problem Solving. Ph.D. Dissertation, Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA.

Maybury, Mark Thomas 1991. Planning Multisentential English Text using Communicative Acts. Ph.D. Dissertation, University of Cambridge, Cambridge, England.

Owens, Christopher 1989. Integrating Feature Extraction and Memory Search. In Proceedings of the 11th Annual Conference of the Cognitive Science Society, Ann Arbor, MI. 163-170.

Rissland, Edwina L. and Skalak, David B. 1991. CABARET: Rule Interpretation in a Hybrid Architecture. International Journal of Man-Machine Studies 34:839-887.

Rissland, E. L.; Basu, C.; Daniels, J. J.; McCarthy, J.; Rubinstein, Z. B.; and Skalak, D. B. 1991. A Blackboard-Based Architecture for CBR: An Initial Report. In Proceedings: Case-Based Reasoning Workshop, Washington, D.C. Morgan Kaufmann, San Mateo, CA. 77-92.

Rubinstein, Zachary B. 1992. STICKBOY: A Blackboard-Based Mixed Paradigm System to Diagnose and Explain Back Injuries.
Master's thesis, Department of Computer and Information Science, University of Massachusetts, Amherst, MA.

Statsky, William P. and Wernet, R. John 1984. Case Analysis and Fundamentals of Legal Writing. West Publishing, St. Paul, MN, third edition.

Walker, R. F.; Oskamp, A.; Schrickx, J. A.; Van Opdorp, G. J.; and van den Berg, P. H. 1991. PROLEXS: Creating Law and Order in a Heterogeneous Domain. International Journal of Man-Machine Studies 35:35-47.

Walker, Rob 1992. An Expert System Architecture for Heterogeneous Domains: A Case-Study in the Legal Field. Ph.D. Dissertation, Vrije Universiteit te Amsterdam, Amsterdam, Netherlands.
Innovative Design as Systematic Search

Dorothy Neville & Daniel S. Weld*
Department of Computer Science and Engineering
University of Washington
Seattle, WA 98195
{neville, weld}@cs.washington.edu

Abstract

We present a new algorithm, SIE, for designing lumped parameter models from first principles. Like the IBIS system of Williams [1989, 1990], SIE uses a qualitative representation of parameter interactions to guide its search and speed the test for working designs. But SIE's interaction set representation is considerably simpler than IBIS's space of potential and existing interactions. Furthermore, SIE is both complete and systematic - it explores the space of possible designs in a nonredundant manner.

Introduction

A long-standing concern of Artificial Intelligence has been the automation of synthesis tasks such as planning [Allen et al., 1990] and design. Of the many approaches to design (e.g., library design, parameterized design, etc.), innovative (or first principles) design has seemed to present the greatest combinatorial challenge. In this paper, we extend the work of Williams [1989, 1990] on the IBIS innovative design system. Like IBIS, we assume the lumped parameter model of components and connections that is common in system dynamics [Shearer et al., 1971]. We take the problem of innovative design to be the following:

• Input:
1. A set of possible components (described in terms of terminals, variables, and equations relating the variables).
2. Constraints on the number and type of legal connections between terminals.
3. A description of an existing, incomplete device (specified as a component-connection graph).

*Thanks to Franz Amador and Tony Barrett for helpful discussions. We gratefully acknowledge Oren Etzioni's emergency faxing service. This research was funded in part by National Science Foundation Grant IRI-8957302, Office of Naval Research Grant 90-J-1904, and a grant from the Xerox corporation.

4.
A set of equations that denote the desired behavior of the complete design.

• Output: A component-connection graph which subsumes the existing device and whose equations are consistent and imply the desired behavior.

In this paper, we present the Systematic Interaction Explorer (SIE), an algorithm which performs this task of design from first principles. While our algorithm is based on IBIS, it has a number of advantages over that algorithm:

• SIE is complete.
• SIE is systematic - it explores the space without repetition [McAllester and Rosenblitt, 1991].
• SIE shares IBIS's interaction-focused search, yet SIE is small, simple, and easy to understand.

In particular, this paper presents a way to perform interaction-based invention without the complexity of IBIS's space of existing interactions, space of potential interactions, and the complex links and mappings between spaces. As explained fully below, our interaction set representation is considerably simpler than IBIS's space of potential and existing interactions, allowing us to greatly simplify the whole design algorithm. In addition, our approach results in complete and systematic exploration of the space of possible designs; we believe these properties yield greatly increased search efficiency. Although we remain unsure of the scaling potential for both IBIS and SIE, preliminary empirical results suggest that interaction-based focusing can reduce the search space by up to 95%.

In the next section, we summarize recent work on design from first principles, concentrating on Williams' IBIS algorithm. Then we describe the SIE algorithm and demonstrate it on the simple punchbowl example. Following that we discuss implementation status and give preliminary empirical results. We conclude with a discussion of limitations and plans for future work.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.
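The problem statement's inputs can be pictured as plain data structures: components with terminals and equations, and a (possibly partial) component-connection graph. The following Python sketch is hypothetical (the paper's implementation was Common Lisp); all field names are invented, and the direct vat-bowl connection is for illustration only.

```python
# Sketch of the innovative-design inputs as data structures: components,
# a device graph of terminal connections, and the open (unconnected)
# terminals that drive the search.
from dataclasses import dataclass, field

@dataclass
class Component:
    ctype: str
    terminals: tuple           # e.g. ("top", "bot") for a container
    equations: tuple = ()      # equations relating the component's variables

@dataclass
class Device:                  # a (possibly partial) component-connection graph
    components: list = field(default_factory=list)
    nodes: list = field(default_factory=list)  # pairs of connected terminals

    def open_terminals(self):
        used = {t for pair in self.nodes for t in pair}
        return [(i, t) for i, c in enumerate(self.components)
                for t in c.terminals if (i, t) not in used]

vat = Component("container", ("top", "bot"))
bowl = Component("container", ("top", "bot"))
dev = Device([vat, bowl])
before = len(dev.open_terminals())            # four open terminals initially
dev.nodes.append(((0, "bot"), (1, "bot")))    # connect two terminals
after = len(dev.open_terminals())
```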
Previous Work

While there is a vast literature on design compilation, library approaches, case-based design, and other approaches with restricted aims, there has been little work on design from first principles, presumably due to the combinatorics involved. Roylance [1980] backward chains from the specification equations using abstractions of primitive components, but assumes the purpose of each device and so loses completeness. Ulrich's [1988] schematic synthesis algorithm uses heuristic modifications to generate bond graphs from a specification consisting of the parameters to be related, an abstract characterization of the derivative or integral relation between the parameters, and a specification of the lumped parameter model of the input and output.

Rather than searching through the space of possible components, Williams' [1989] IBIS system searches through abstractions of this space. Specifically, IBIS constructs two graphs: the space of existing interactions and the space of potential interactions. The former is a graph whose nodes denote the value of parameters of the existing components (e.g., the pressure at the bottom of the particular vat V1); hyperedges in the graph signify a set of parameters that are related by an equation in a component description or by a connection law such as the generalized Kirchhoff's Current Law. The space of potential interactions is similar except nodes represent classes of parameters (e.g., one class might represent parameters denoting flow through pipes) and hyperedges represent relations that could be added. The two graphs are linked with edges that connect existing parameters with their respective classes. The most elegant aspect of this data structure is the way that the finite space of potential interactions represents an unbounded number of possible additions to the existing structure, yet we argue below that this very feature is also a weakness.
Williams uses the interaction topology representation to aid search in three ways:

• Search control - search first for interactions that are more likely to relate the parameters of the desired behavior.
• Hierarchical testing - only consider a device worth testing when there is a path connecting all the parameters of the desired behavior.
• Verification - use information about the path connecting the parameters as a guide for verifying the desired behavior.

The key assumption made by IBIS is that a finite representation (the space of potential interactions) of the unbounded set of addable components leads to efficient search, since "path tracing in a small graph is fast" [Williams, 1990, p. 354]. However, this ignores the effect of the resulting redundancy in search. The use of the interaction abstraction space causes IBIS to lose the property of systematic search¹ in two ways:

1. There is no coordination between the debugging process of refining an inconsistent candidate and the process of generating and testing a new hyperpath from the original interaction spaces. This is crucial since "several refinements are normally required for complex structures" [Williams, 1990, p. 355].

2. No systematic way is presented for adding multiple components of a single type in service of a single objective. This can only be accomplished by repeated cycles of search and refinement [Williams, 1990, p. 354].

Since SIE searches through the concrete space of possible design topologies rather than through the abstract space of interactions, there is no need for IBIS's debugging-style refinements. This leads to a search we argue is both complete and systematic. Yet like IBIS, SIE uses the interactions of the various parameters both for search control and as a cheap method of partial design verification; thus SIE gets the same computational focus from its simple interaction set representation as does IBIS from its space of existing and potential interactions.
The SIE Design Algorithm

Our technique includes two factors that simplify the design task: an interaction set representation to guide search and test potential designs, and a systematic search algorithm. We discuss the details of these below, demonstrating the technique on Williams' punchbowl example.

Let V = ⟨C, N, Z⟩ be a device, where C is a set of components, N is a set of nodes (where each node is a pair² of component terminals signifying connections between them), and Z is the set of interaction sets (explained below). The device can be partial if not all terminals are connected, or complete if all terminals are connected to a node and the connection graph is connected. For the punchbowl problem, the initial device consists of an unconnected bowl and vat: C = {vat, bowl} and N = {}.

¹Completeness may be sacrificed also, but this is unclear.
²The restriction to two terminals per node is relaxed in the discussion on implementation.

The key to our algorithm is Z, a set of parameter sets; two parameters share an interaction set if and only if a change in the value of one can affect the other, i.e., if there is an interaction path (series of equations) between them. Thus the sets in Z partition the device parameters into equivalence classes that interact causally through one or more equations. Interaction sets maintain information on which parameters can influence each other without the overhead of representing the details of how they interact. Given the primitive device equations of a container:

V_c(t) = H_c(t) × area_c
Pd_c(t) = fluid-density_c × g × H_c(t)
d/dt (V_c(t)) = Q_top(c)(t) + Q_bot(c)(t)
d/dt (V_c(t)) = d/dt (H_c(t)) × area_c
[area_c] = [+]
[fluid-density_c] = [+]
[g] = [+]

SIE determines that the variables describing a container form two interaction sets. The derivatives of the fluid height and volume are related to the flow, while the fluid height is related to the pressure difference between the top and bottom of the container. The interaction sets corresponding to each unconnected component are easily generated from the primitive equations defining each component type.

Since the punchbowl initially consists of two containers, the interaction set initially consists of four parameter sets, two each for the vat and bowl:

Z = { {d/dt (H_v), d/dt (V_v), Q_top(v), Q_bot(v)},
      {Pd(vat), H_v},
      {d/dt (H_b), d/dt (V_b), Q_top(b), Q_bot(b)},
      {Pd(bowl), H_b} }

We use a union-find algorithm to maintain consistency of the interaction sets when joining components. When a node connects two terminals, the effort parameters (e.g., voltage or pressure) associated with the terminals get equated and the flow parameters (e.g., current) get closed with Kirchhoff's Current Law (KCL). As far as the interaction sets are concerned, the only significant change has been a possible causal connection between these parameters, so their respective interaction sets are unioned together.

Algorithm: SIE(⟨C, N, Z⟩, O, S, Max)

1. Termination: If |C| ≥ Max then signal failure and backtrack. Else, if O is empty and Test(⟨C, N, Z⟩, S) = true then signal success and return the design. Else, signal failure and backtrack.
2. Select Open Terminal: Let t be an open terminal in O.
3. Select Connecting Terminal: Either connect t to another terminal t′ in O, or instantiate a new component c with terminal set O_c and choose t′ from O_c. BACKTRACK POINT: each existing compatible open terminal and each possible new component and compatible terminal must be considered for completeness.
4. Update Device: If both terminals were chosen from the existing O, let C′ = C. Else, let C′ = C ∪ {c}. In either case, let N′ = N ∪ {(t, t′)}.
5. Update Interaction Sets: If two terminals from existing components were connected, the interaction sets corresponding to the relevant parameters of the terminals are replaced with their union. If a new component was added, all of its interaction sets are added to Z, then the relevant ones are joined to reflect the connection.
6. Update Open Terminal Set: If both terminals were chosen from O, let O′ = O − {t, t′}. Else, let O′ = O ∪ O_c − {t, t′}.
7. Recursive call: SIE(⟨C′, N′, Z′⟩, O′, S, Max)

Figure 1: The SIE Algorithm

Specifying & Testing Behavior

The desired behavior can also be considered as a set of parameters that interact in the completed device. Thus, interaction sets form a quick test of a new device's utility: do all the desirable interactions (i.e., all the parameters in the desired behavior equations) actually interact (i.e., are they all in the same interaction set)?³

³There is a potential problem with this technique. Suppose that the desired equations are A + B = C + D; then this could be solved by two parallel interactions A = C and B = D without all parameters joining a single interaction set. We can compensate for this of course with a weaker test on the interaction sets, but the focusing power is reduced. More research is necessary to formally prove necessary and sufficient interaction conditions for design validity. The IBIS algorithm has a corresponding problem - the number of hyperpaths that pairwise connect a set of parameters is vastly greater than the number of connected paths.

The desired behavior for the punchbowl is "[change] the height difference in the direction opposite to the difference" [Williams, 1989, p. 59], which can be written as the following SR1 equation (in which square brackets denote the sign-of function):

[d/dt (H_v − H_b)] = [H_b − H_v]

This equation relates the four parameters H_v, H_b, d/dt (H_v), and d/dt (H_b). The first test of a potential design is to check the interaction sets of the device, ruling it out if the four parameters are not all in the same set. The quick test can definitively rule out some devices, but this is only a necessary condition. It is insufficient for complete verification.
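The union-find maintenance of interaction sets and the quick test can be sketched concretely. This is a hypothetical Python illustration (the paper's system was Common Lisp): parameters are strings, the initial groups are the two containers' sets, and the three unions standing in for the vat-pipe-bowl connections are a simplification of the merging done in step 5 of the algorithm.

```python
# Union-find over device parameters: each interaction set is an equivalence
# class; connecting terminals unions the relevant classes. The quick test
# asks whether all desired-behavior parameters share one class.

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

# Initial container interaction sets for the vat and bowl:
for group in (["dHv", "dVv", "Qtop(v)", "Qbot(v)"], ["Pd(vat)", "Hv"],
              ["dHb", "dVb", "Qtop(b)", "Qbot(b)"], ["Pd(bowl)", "Hb"]):
    for p in group[1:]:
        union(group[0], p)

def quick_test(params):
    return len({find(p) for p in params}) == 1

# Connecting vat -> pipe -> bowl merges the flow and pressure classes
# (simplified: the pipe's own interaction set bridges the containers).
for a, b in [("Qbot(v)", "Pd(vat)"), ("Pd(vat)", "Pd(bowl)"),
             ("Qbot(b)", "Pd(bowl)")]:
    union(a, b)
```

After the merges, the four punchbowl behavior parameters fall into a single class, so the quick test passes; an unrelated parameter still fails it.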
If a device passes the interaction set test, the detailed equations are generated and evaluated with respect to the desired behavior. Generating Designs The search algorithm takes a partial design 4, N, 2+, a list of open terminals 0, and the de- sired behavior specification S. It systematically generates new devices by considering an open ter- minal and considering all the things to which it can attach: all the compatible4 open terminals from the 4 Representing and reasoning about compatibility is Search 739 existing components, all compatible terminals from the set of possible components and the possibility of not attaching the terminal to anything. For reg- ularity, we consider this case as connecting the ter- minal to a special virtual terminal called an endcap, with exactly one terminal compatible with all ter- minal types. Figure 1 shows a non-deterministic, tail-recursive version of the algorithm. In practice, depth-first iterative deepening search can be used to implement nondeterminism. SIE can generate Williams’ solution for the punchbowl problem with four recursive calls, given the initial structure and desired behavior de- scribed previously and the open terminal set 0 = {top(wat), bot(vat), top(bowl), bot(bozol)}. First, SIE decides to connect terminal bot(vat) to a new in- stance of a pipe. A pipe is defined as having a pres- sure difference between the ends to be proportional to the flow through the pipe. Therefore the interac- tion set for a pipe is one set containing the variables pressure difference an the flow at each end. 
The resulting device is:

    C = {vat, bowl, pipe}
    N = {(bot(vat), e1(pipe))}
    Z = {{d/dt(Hv), d/dt(Vv), Qtop(v), Qbot(v), Pdv, Hv, Pdp, Qe1(p), Qe2(p)},
         {d/dt(Hb), d/dt(Vb), Qtop(b), Qbot(b), Pdb, Hb}}

The open terminal list is O = {top(vat), top(bowl), bot(bowl), e2(pipe)}, and the second call to SIE chooses two open terminals from this set to connect, bot(bowl) and e2(pipe), giving:

    C = {vat, bowl, pipe}
    N = {(bot(vat), e1(pipe)), (bot(bowl), e2(pipe))}
    Z = {{Hv, Vv, Qtop(v), Qbot(v), Pdv, Pdp, Qe1(p), Qe2(p), Hb, Vb, Qtop(b), Qbot(b), Pdb}}

The open terminal list is now {top(vat), top(bowl)}. The last two calls to SIE connect these in turn to a virtual endcap. With the open terminal list empty, the device is "complete" and ready to test. The interaction set test returns true for this device - all four parameters in the desired behavior are in the same interaction set. Further mathematical testing determines that indeed this device has the desired behavior.

Implementation Status & Potential

The basic SIE algorithm has been completely implemented in Common Lisp on a Sun SPARC-IPX. However, since we do not have access to an implementation of MINIMA, the final mathematical verification of potential solutions is done by hand.⁵ We have tested SIE on several design problems in a domain which consists of a dozen fluid, mechanical and electrical components, including a turbine (with fluid and mechanical-rotation terminals) and a generator (with mechanical-rotation and electrical terminals). Our preliminary results are shown in figure 2.

⁵In the future, we intend to connect SIE to a design verification system built on top of Mathematica [Wolfram, 1988] and our PIKA simulator [Amador et al., 1993].

The problems in figure 2 are summarized as follows:

Punchbowl. This is the classical punchbowl example from [Williams, 1990], including the restriction that containers may not be connected directly together, and the partial device requirements that do not allow connections to be made to the tops of the existing containers.

Dynamo 1. The initial device consists of an unconnected vat and a light bulb; the desired behavior relates the flow of liquid through the bottom of the vat with the light output of the bulb. SIE's solution connects the bottom of the vat to a turbine to a generator to the light bulb. The two correct solutions have the bulb's electrical terminals reversed with respect to the polarity of the generator.

Dynamo 2. The same example as the dynamo, but allowing up to 5 components in the device, to illustrate the combinatorics involved with increasing search depth. The solutions include the two previous ones and many five-component solutions that have an "extra" component, such as another light bulb in series with the original one.

Dynamo 3. This dynamo example has the desired behavior that the flow of fluid through the vat influences the light output of two light bulbs. The correct variations have the bulbs in series with the generator.

Dynamo 4. Similar to the above example with two light bulbs, the implementation is augmented to allow for three terminals to connect to a node. Thus there are two topologically distinct solutions: the bulbs can be in series or in parallel with the generator.
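Step 4 of the SIE algorithm (Figure 1), which merges interaction sets whenever terminals are connected, together with the quick necessary-condition test, can be sketched with a union-find structure. The parameter names below are illustrative abbreviations, not the paper's full variable sets.

```python
# Union-find sketch of interaction-set maintenance: each new component
# contributes one set over its parameters; connecting two terminals merges
# the sets containing the related parameters. The quick test checks the
# necessary (but insufficient) condition that all goal parameters share
# a single set.

class InteractionSets:
    def __init__(self):
        self.parent = {}

    def add_component(self, params):
        for p in params:
            self.parent[p] = params[0]        # one set per new component
        self.parent[params[0]] = params[0]

    def find(self, p):
        if self.parent[p] != p:
            self.parent[p] = self.find(self.parent[p])  # path compression
        return self.parent[p]

    def connect(self, p, q):
        self.parent[self.find(p)] = self.find(q)        # merge two sets

    def quick_test(self, goal_params):
        return len({self.find(p) for p in goal_params}) == 1

sets = InteractionSets()
sets.add_component(["Hv", "Vv", "Qbot(v)"])      # vat (abbreviated)
sets.add_component(["Hb", "Vb", "Qbot(b)"])      # bowl
sets.add_component(["Pdp", "Qe1(p)", "Qe2(p)"])  # pipe
sets.connect("Qbot(v)", "Qe1(p)")                # bot(vat) -- e1(pipe)
print(sets.quick_test(["Hv", "Hb"]))             # False: bowl still isolated
sets.connect("Qbot(b)", "Qe2(p)")                # bot(bowl) -- e2(pipe)
print(sets.quick_test(["Hv", "Hb"]))             # True: one merged set
```

With near-constant-time union and find, the quick test stays cheap relative to full mathematical verification, which is what makes it useful as a filter.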
Our experiments suggest that interaction sets provide the greatest performance advantage when used to evaluate devices across technologies, with hydraulic and electrical components, for example. We predict that devices whose components are contained within one technology will not benefit as much, since all parameters will quickly collapse to the same interaction set. We plan to investigate this hypothesis with further tests.

    Problem     DEVICES  SATISFY Z  SOLUTIONS  MAX     CPU
    Punchbowl        18          3          1    3   0.167
    Dynamo 1        150          4          2    4   1.550
    Dynamo 2       1640        124         42    5  23.850
    Dynamo 3        891         16          8    5  13.617
    Dynamo 4       2786         72         14    5  39.600

Figure 2: The number of possible devices created, those that pass the interaction set test, those that pass complete mathematical verification, maximum number of components (search depth), and SPARCstation CPU time in seconds.

More Elaborate Physical Models

To evaluate this line of research, we need a clear understanding of the coverage of physical devices SIE can handle. So far we have limited ourselves to relatively simple devices with simple behavioral specifications and with one operating region. Space precludes a discussion of our algorithmic extensions to multiple behavioral regions, but see [Neville and Weld, 1992].

For simplicity, we started with the requirement that at most two terminals could connect at a node. We have extended this to allow for an arbitrary number of common connections. This increases the types of devices SIE can handle, but induces a correspondingly high combinatorial cost. Note that allowing for three-terminal nodes in the dynamo example triples the amount of time needed and the number of designs tested, while it adds only one interesting solution, the six configurations with the bulbs attached in parallel to the generator. Heuristics and search control are expected to reduce the cost. We are currently investigating the addition of search control.

Another important extension would be to incorporate geometry. While the lumped parameter model is useful and expressive for many physical processes, it fails to capture the geometric reasoning needed to design mechanical devices such as linkages and transmissions. Our design algorithm, however, is well suited for generating devices consisting of kinematic pairs or possibly unity machines [Subramanian et al., 1992].
Capturing and testing geometric constraints and behavior would not be a straightforward application for interaction sets, though; for analysis we would hope to draw on the ideas of Subramanian [1992] and Neville & Joskowicz [1992].

Combinatorics and SIE Scaling Potential

The most crucial question to ask of any first principles design algorithm is combinatorial: how does the approach scale? Suppose that there are C types of components, each with two terminals, and there are no restrictions on terminal connectivity except a limit of two terminals per node. Then there are O(C^n) connected device topologies with n symmetric parts. If more than two terminals can be connected at a node, the number of designs increases - with no limit on the number of terminals per node, there are about m^n C^n possible topologies (where m denotes the number of nodes). Considering an electric component set of identical batteries, resistors, capacitors and inductors, this suggests that there are about 17 million device topologies with 6 components and 4 nodes. While this is clearly a large number, and would take 78 hours to search with our current implementation, it is reassuring to note that existing chess machines can search this many board positions in under 10 seconds.

Note that this analysis ignores the effect of interaction representations on search. There are two ways that interaction sets increase the speed of SIE. Since the presence of all goal parameters in the same interaction set is a necessary (yet insufficient) condition for design success, interaction sets provide a fast, preliminary verification technique. Of course, by itself this results in no search space reduction. The other way that interaction sets can be used is as a heuristic to guide the selection and connection of components in steps 2 and 3 of SIE (figure 1). Various heuristics are possible (maximize the size of resulting interaction sets, etc.),
and they correspond to search strategies in IBIS's interaction spaces.⁶ The question remains: how effective are heuristics based on interaction sets? We believe that this question can only be answered empirically. Our hope is that the benefit will equal or surpass the speedup we have achieved in design verification.

⁶To see this, note that the combinatorial analysis of the previous section applies to IBIS as it does to SIE. For a moment assume that IBIS used a completely instantiated (infinitely large) interaction graph instead of its finite space of potential interactions. Since each component is described by one or more equations, the number of hyperedges is no less than the number of possible components. This implies that the fundamental idea of an interaction space results in no savings over search in component space - the only possible advantage comes from the use of a finite description. Yet (as we argued in section ), this requires multiple refinements and the loss of systematicity. Hence we believe that IBIS searches a space that is strictly larger than SIE's space of components.

Conclusion

In this paper we have described SIE, a new algorithm for innovative design of lumped parameter models from first principles. Our approach is based on Williams' IBIS system and represents an incremental advance in the search aspects of that system. We have argued that (unlike IBIS) SIE is complete and systematic. Both algorithms are sound if the subsidiary verification algorithm is sound. We have implemented SIE and demonstrated that it runs fast enough to use it as a testbed for further research in automated design. We have demonstrated that hierarchical testing using interaction sets can eliminate up to 95 percent of the candidate devices from further expensive testing; thus it is beneficial for some types of design problems.
Our suspicion is that both IBIS's interaction spaces and SIE's interaction sets are only a partial solution to the combinatoric problems of design from first principles. We plan to continue with this research, using SIE as a testbed for search control heuristics in order to gain a better grasp of their power and the corresponding scalability of these innovative design algorithms. We suspect that in truly large design problems a first principles approach must be coupled to a library of past experience. One way to perform this is with a case-based approach that uses a modified first principles design algorithm to adapt past solutions to new problems. In [Hanks and Weld, 1992] we show how this can be done for the synthesis of partial order plans, retaining soundness, completeness and systematicity. Since we expect that it would be easy to perform the same modification on SIE, the construction of an extensive design library and a good indexing system might result in a practical design system.

References

J. Allen, J. Hendler, and A. Tate, editors. Readings in Planning. Morgan Kaufmann, San Mateo, CA, August 1990.

F. Amador, A. Finklestein, and D. Weld. Real-Time Self-Explanatory Simulation. Submitted to AAAI-93, 1993.

Steven Hanks and Daniel Weld. Systematic adaptation for case-based planning. In Proceedings of the First International Conference on AI Planning Systems, June 1992.

D. McAllester and D. Rosenblitt. Systematic Nonlinear Planning. In Proceedings of AAAI-91, pages 634-639, July 1991.

D. Neville and L. Joskowicz. A Representation Language for Conceptual Mechanism Design. In Proceedings of the 6th International Workshop on Qualitative Reasoning, August 1992.

D. Neville and D. Weld. Innovative Design as Systematic Search. In Working Notes of the AAAI Fall Symposium on Design from Physical Principles, October 1992.

G. Roylance. A Simple Model of Circuit Design. AI-TR-703, MIT AI Lab, May 1980.

J. Shearer, A. Murphy, and H. Richardson.
Introduction to System Dynamics. Addison-Wesley Publishing Company, Reading, MA, 1971.

D. Subramanian, C. Wang, S. Stoller, and A. Kapur. Conceptual Synthesis of Mechanisms from Qualitative Specifications of Behavior. In S. Kim, editor, Creativity: Methods, Models, Tools. 1992.

K. Ulrich. Computation and Pre-Parametric Design. AI-TR-1043, MIT AI Lab, September 1988.

B. Williams. Invention from First Principles via Topologies of Interaction. PhD thesis, MIT Artificial Intelligence Lab, June 1989.

B. Williams. Interaction-Based Invention: Designing Novel Devices from First Principles. In Proceedings of AAAI-90, pages 349-356, July 1990.

S. Wolfram. Mathematica: A System for Doing Mathematics by Computer. Addison-Wesley, Redwood City, CA, 1988.
Armand Prieditis
Department of Computer Science
University of California
Davis, CA 95616
prieditis@cs.ucdavis.edu

Bhaskar Janakiraman
Silicon Graphics Inc.
Mountain View, CA 94039
bhaskar@mti.sgi.com

Abstract

Admissible heuristics are worth discovering because they have desirable properties in various search algorithms. Unfortunately, effective ones - ones that are accurate and efficiently computable - are difficult for humans to discover. One source of admissible heuristics is abstractions of a problem: the length of a shortest path solution to an abstracted problem is an admissible heuristic for the original problem because the abstraction has certain details removed. However, often too many details have to be abstracted to yield an efficiently computable heuristic, resulting in inaccurate heuristics. This paper describes a method to reconstitute the abstracted details back into the solution to the abstracted problem, thereby boosting accuracy while maintaining admissibility. Our empirical results of applying this paradigm to project scheduling suggest that reconstitution can make a good admissible heuristic even better.

1 Introduction

Admissible (lower-bound) heuristics are worth discovering because they have desirable properties in various search algorithms. For example, they guarantee shortest path solutions in the A* [24] and IDA* [19] algorithms, and less expensively produced, but boundedly longer, solutions in the dynamic weighting [26] and A*ε [25] algorithms. Moreover, multiples of them can reduce an exponential average time complexity to a polynomial one with A* [4]. Unfortunately, effective (accurate and efficiently computable) admissible heuristics are difficult for people to discover. Several researchers have shown that admissible heuristics can be generated from abstractions (transformations that drop certain details) of a problem [12, 10, 25, 16, 23, 27, 28, 29]. As Figure 1 shows, the length of a shortest path solution in the abstracted problem is the admissible heuristic.
For such heuristics to be effective, the abstracted problem that generates them should be efficiently solvable and yet close to the original problem [32, 23, 13, 29]. This technique typically yields efficiently computable, but inaccurate, heuristics because efficiently solvable abstracted problems often ignore precisely those details that are central to solving the original problem.¹

¹This work is supported by the National Science Foundation grant number IRI-9109796.

Figure 1 The Length of a Shortest Path Solution to the Abstracted Problem = Admissible Heuristic for the Original Problem; Reconstitution Increases that Length, Thereby Boosting Accuracy of the Heuristic

This paper describes a method called reconstitution that adds back such ignored details to the abstracted problem's solution, thereby boosting the accuracy of the heuristic while maintaining admissibility. As Figure 1 shows, the length of a shortest path solution in the abstracted problem is increased with reconstitution to yield a more accurate admissible heuristic. The ultimate goal of this research is to develop an automatic reconstitution system to shift some of the burden of discovery from humans to machines.

2 Project Scheduling

As a vehicle for exploring reconstitution, we investigated project scheduling problems because they are of practical importance and are difficult to solve without effective heuristics. A project scheduling problem consists of a finite set of jobs, each with fixed integer duration, requiring one or more resources such as personnel or equipment, and each subject to a set of precedence relations, which specify allowable job orderings, and a set of mutual exclusion constraints, which specify jobs that cannot overlap. No job can be interrupted once started. The objective is to minimize project duration.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Since this objective arises in nearly every large construction project - from software to hardware to buildings - efficient algorithms that obtain that objective are desirable. Integer linear programming methods have been used to solve project scheduling problems for years [1, 3, 21, 11]. However, these methods are computationally expensive, unreliable, and applicable only to problems of small size. The underlying reason for the computational expense and limited problem size is that such project scheduling problems are NP-hard [9]. As a result, scheduling problems are typically solved by branch-and-bound algorithms with lower-bound duration estimates (admissible heuristics) to improve efficiency [31, 7, 2].

The only published attempt at discovering admissible heuristics in a scheduling domain yielded poor heuristics when abstraction alone was applied [22, 28]. Moreover, the particular scheduling problem (uniprocessor scheduling) to which it was applied did not allow concurrency, which is the essence of scheduling.

3 Key Definitions

As shown in Figure 2, a scheduling problem can be represented as a graph with jobs as vertices, precedences as single-arrowed edges, and mutual exclusions as double-arrowed edges. For example, the figure shows that job I must be completed before job J can start and that jobs J and K cannot overlap. The single number above each job represents the job's duration. For example, job J takes 10 units of time to complete. The letter to the left of each job represents the resource that the job requires; one job's use of a resource cannot overlap with another job's use of that same resource. For example, jobs I and E, which both require resource S, cannot overlap with each other. A precedence graph is a directed acyclic graph consisting only of the precedence relations and no resource constraints.
An early schedule graph is derived from the precedence graph, where each job is scheduled as early as possible. The numbers within the square brackets near each job in the figure represent the earliest start time and the earliest completion time of each job. The critical path is the longest path in the early schedule graph; it shows the earliest time by which all jobs can be completed. No job on the critical path can be delayed, although other jobs on the same early schedule can be delayed as long as they do not increase the critical path length. For example, if job J, which is on the critical path, starts later than 33 units of time, the entire project will be delayed. These jobs may have to be delayed in order to satisfy mutual exclusion constraints. The total completion time of an early schedule is therefore equal to the critical path length, which in our case is 43. An optimal schedule is an early schedule which takes the least total time among all possible schedules.

Given only precedence constraints, finding an early schedule reduces to a topological sort of the precedence graph, which can be done in time linear in the number of precedence constraints [15]. Finding the critical path in an early schedule takes time linear in the number of jobs. Therefore, if all other constraints, such as mutual exclusion constraints and resource constraints, can be recast as precedence constraints, the problem is easily solvable. For example, the mutual exclusion constraint between jobs J and K can be recast in two ways: either J is completed before K, or vice versa. Similarly, for resource constraints, each pair of jobs sharing the same resource can be recast as a mutual exclusion constraint between the two jobs. Each mutual exclusion constraint can then be recast as one of two precedence constraints as previously described. Henceforth, we assume that all resource constraints have been recast as mutual exclusion constraints.
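The early-schedule computation just described (a topological sort, with each job's earliest start equal to the maximum earliest completion over its predecessors, and the critical path length equal to the latest completion) can be sketched as follows. The three-job instance is made up for illustration, not the figure's exact problem.

```python
# Early schedule via topological order: earliest start of a job is the max
# earliest completion among its predecessors; the critical path length is
# the largest earliest completion overall.

from collections import defaultdict, deque

def early_schedule(durations, precedences):
    succs, indeg = defaultdict(list), defaultdict(int)
    for a, b in precedences:          # a must finish before b starts
        succs[a].append(b)
        indeg[b] += 1
    start = {j: 0 for j in durations}
    queue = deque(j for j in durations if indeg[j] == 0)
    while queue:                      # Kahn's topological sort
        j = queue.popleft()
        for k in succs[j]:
            start[k] = max(start[k], start[j] + durations[j])
            indeg[k] -= 1
            if indeg[k] == 0:
                queue.append(k)
    finish = {j: start[j] + durations[j] for j in durations}
    return start, finish, max(finish.values())   # CP length = project duration

durations = {"I": 10, "J": 10, "E": 9}           # toy instance
precedences = [("I", "J"), ("E", "J")]
start, finish, cp = early_schedule(durations, precedences)
print(start["J"], cp)                            # 10 20
```

Both passes are linear in the number of jobs plus precedence constraints, matching the complexity claims above.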
4 Branch and Bound Project Scheduling

The idea of recasting mutual exclusion and resource constraints as precedence constraints suggests the following simple combinatorial algorithm. Explore all recastings, one at a time, that do not create a cycle and find early schedules for all of these recastings; the early schedule with the minimum critical path length is the optimal one. Unfortunately, this brute-force algorithm is combinatorially explosive: n mutual exclusion constraints result in 2^n possible recastings, which is clearly too large a space to explore exhaustively for large n. One way to reduce this combinatorial explosion is to use a branch-and-bound algorithm with lower-bound estimates to prune certain recastings earlier. If the current duration + the lower-bound estimate exceeds a user-supplied upper-bound, then that schedule can be pruned.

Figure 2 A Project Scheduling Problem. (Legend: a single arrow denotes a precedence relation; a double arrow denotes a mutual exclusion constraint; each job is labeled with its duration above it and its required resource to its left; a bracketed pair such as [12, 21] gives the earliest start time 12 and earliest completion time 21; marked jobs lie on the critical path.)

1. Calculate the critical path (CP).
2. Traverse the CP backwards - from late to early jobs. Assume the jobs encountered are j_1, j_2, ....
3. As each job j_k on the CP is encountered, look for unsatisfied mutual exclusion constraints between j_k and some job j_l, where j_l is not on the CP. Notice that traversing the graph backwards as in step 2 is more efficient than forwards because we have to reschedule only one job.
4. If, in the given schedule, execution of j_k overlaps with j_l, then push j_l to a later time so that there is no overlap.
5. Push other jobs later in time if necessary. This is done by listing all jobs j_m such that j_l must come before j_m according to the precedence relation, and rescheduling j_m so that j_l completes before j_m. Repeat this step until all the precedence relations are satisfied.
6. If the length of the CP in the new schedule is greater than the original CP, then terminate the algorithm and return the new bound as the original CP length + the amount of overlap between j_k and j_l.
7. Else, repeat until a mutual exclusion constraint is found which increases the CP length.
8. If no such constraint is found, return the CP length as the new bound.

Figure 3 An Algorithm to Compute the RCP Heuristic

The critical path estimate of an early schedule, which is efficiently computable, is clearly a lower bound, since any early schedule that satisfies part of the constraints is a lower bound on the completion time for any optimal schedule satisfying all constraints. Moreover, any additional constraint will not result in a decrease in the critical path length. Notice that the critical path (CP) heuristic results from an abstraction of the original problem: all mutual exclusion and resource constraints are ignored.

Although the CP heuristic is admissible and easily computable and has proved to be valuable in evaluating overall project performance and identifying bottlenecks, it can be far from the actual project duration. In the worst case, it can underestimate the actual project duration by a factor of n, where n is the total number of jobs to be scheduled. This case arises when the only possible schedule is a serial schedule. For example, if a scheduling problem has no precedence constraints and has mutual exclusion constraints between every pair of jobs, then the only possible schedule will be a serial one. For this case, the CP heuristic will return the length of the longest job, which underestimates the optimal duration by a factor of n. Also, since the critical path estimate ignores the resource constraints, certain sequencing decisions may be required in the actual schedule that increase the project duration well beyond the critical path estimate.
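The worst case just described can be checked numerically: with no precedence constraints and pairwise mutual exclusion, the CP estimate is the longest single job, while the true optimum is the serial sum of all durations. The durations below are made up for illustration.

```python
# Worst case for the CP heuristic: no precedence constraints and mutual
# exclusion between every pair of jobs. The only feasible schedules are
# serial, so the optimum is the sum of durations, while the CP estimate
# (which ignores mutual exclusions) is just the longest single job.

durations = [7, 5, 3, 4]            # made-up job durations
cp_estimate = max(durations)        # critical path with no precedences
serial_optimum = sum(durations)     # every pair mutually exclusive
print(cp_estimate, serial_optimum)  # 7 19
```

With n equal-length jobs the ratio between the two approaches n, which is the factor-of-n underestimate cited above.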
5 Reconstitution-based Heuristics in Project Scheduling

What we would like is an admissible heuristic that is as easily computable as the critical path estimate, but that takes into account the resource and mutual exclusion constraints, which the critical path estimate ignores. We would like to reconstitute these ignored constraints back into the critical path somehow. The RCP (Reconstituted Critical Path) heuristic described below does exactly that.

The basic idea behind the RCP heuristic is to extend the critical path by analyzing all unsatisfied mutual exclusion constraints between jobs on the critical path and jobs not on the critical path. When possible, all jobs with such unsatisfied constraints are rescheduled at a later time while still preserving the critical path length. If that is not possible, then the critical path length is increased by a time-overlap underestimate between the jobs of each type. For example, consider the project scheduling problem in Figure 2, which has a critical path of J, F, C, B, A. First, we examine job J and check for any mutual exclusion constraints involving it. The only such constraint is the one with job K. Next, we check if J overlaps with K, which in fact it does. The object now is to try to delay job K beyond the completion time of job J, which is at 43 time units. Delaying job K will necessarily increase the length of the critical path by 1 time unit. If the rest of the jobs were ignored, the RCP heuristic would return 44, which is the length of the critical path (43) plus the overlap of the earliest start time of job J and the earliest completion time of K (34 - 33 = 1). The general algorithm is shown in Figure 3 and a pictorial definition of overlap is shown in Figure 4.

Figure 4 Overlap is the Minimum of Both Overlaps; Jobs A and B are Mutually Exclusive

To see that the RCP heuristic is admissible, consider a job j_l on the critical path which has a mutual exclusion constraint with job j_m.
In the final schedule, either j_m will be scheduled before j_l, or vice versa. Note that neither of the two jobs can be scheduled any earlier, since the schedule is already an early schedule. If job j_m cannot be scheduled after j_l without increasing the critical path length in the current schedule by pushing ahead jobs which depend on j_m, then neither can it be scheduled after j_l in the final schedule. The reason is that precedence constraints are always added and never removed at each iteration of the search algorithm, and adding more precedence constraints cannot invert an existing scheduling order. If j_l is scheduled after j_m, then the critical path length will be increased by at least the minimum of the overlap between the earliest start time of j_l and the earliest completion time of j_m, or the earliest start time of j_m and the earliest completion time of j_l. The reason is that job j_l is on the critical path: starting it later affects the entire project duration.

Although the RCP heuristic takes slightly longer to compute than the CP heuristic, it prunes more of the space than the CP heuristic. As we will see in the next section, the extra time taken in computing the heuristic is more than compensated by the time saved from pruning the search space. If the current critical path length is optimal, then computation of the RCP heuristic takes longer than that of the CP heuristic, since the algorithm has to examine all jobs on the critical path. The worst case complexity of computing the RCP heuristic is O(n^2) for n jobs, since at most O(n) jobs will be on the critical path and O(n) work will be required to process a mutual exclusion constraint involving a job on the critical path. An analysis of the average computational complexity is, however, difficult, since the heuristic depends on specific mutual exclusion constraints. The degree of complexity can be controlled by reconstituting fewer mutual exclusion constraints if desired.
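The overlap of Figure 4, the minimum of the two directional overlaps between the early-schedule intervals of two mutually exclusive jobs, can be computed directly. A minimal sketch, using J's interval from the paper's example; K's earliest start is not given in the text, so the value 24 below is an assumed placeholder (only the 34 - 33 term matters here).

```python
# Overlap of two mutually exclusive jobs (Figure 4): the smaller of the two
# directional overlaps of their early-schedule intervals. For a job on the
# critical path, a positive overlap is a lower bound on how much the
# critical path must grow when the two jobs are forced apart.

def overlap(start_a, finish_a, start_b, finish_b):
    return min(finish_a - start_b, finish_b - start_a)

# J runs [33, 43]; K completes at 34 (K's start, 24, is an assumed value).
print(overlap(33, 43, 24, 34))  # 1, matching the paper's 34 - 33 = 1
```

A non-positive result means the intervals do not overlap and no extension of the critical path is forced.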
The complexity of the RCP heuristic can be further reduced by computing it incrementally. That is, the RCP on a successor state (one with a new precedence constraint added) can be computed more efficiently by reusing a portion of the RCP on the original state. Since new precedence constraints are added and never removed at each iteration of the search algorithm, the critical path up to the point in the graph where the new precedence constraint is added remains the same, and the critical path need only be recomputed from that point on.

6 Empirical Results

To get some idea of the effectiveness of the RCP and CP heuristics, we implemented the IDA* algorithm [19], a standard branch-and-bound algorithm in which to evaluate admissible heuristics, in Quintus Prolog on a Sun SPARCstation 1+ and ran it on a set of random solvable (i.e., no cycles) problem instances with various numbers of jobs, mutual exclusion constraints, and precedence constraints. The algorithm works as follows. All partial schedules whose duration exceeds a certain threshold are pruned. Initially, the threshold is set to the value of the admissible heuristic on the initial state. If no solution is found within that threshold, then the algorithm repeats. On the next iteration, the threshold is set to the minimum of duration plus heuristic estimate over all the previously generated partial schedules whose duration exceeds the threshold. One important property of IDA* is that it guarantees minimal duration solutions with admissible heuristics [19].

A state consists of three items:

1. A precedence graph which includes the original precedence constraints and a set of precedence constraints originating from mutual exclusion constraints which have so far been recast as one of two precedence constraints.
2. An early schedule satisfying the precedence constraints.
3. A set of unsatisfied mutual exclusion constraints.
The goal state is characterized by an empty mutual exclusion constraint set. A state transition is a recasting of a mutual exclusion constraint into one of two precedence constraints, followed by the generation of a new early schedule. Search proceeds from an initial schedule satisfying only the original precedence constraints.

We ran two sets of experiments, each with a fixed number of jobs and precedence constraints and a variable number of mutual exclusion constraints, since problem complexity grows as the number of mutual exclusion constraints increases: one with 30 jobs and 112 precedence constraints, and the other with 40 jobs and 128 precedence constraints. For the first set, we varied the number of mutual exclusion constraints between 0 and 25; for the second, between 10 and 40. We chose these problems because they were the largest ones we could generate that still could be solved in a reasonable amount of time on our machine.

Table 1 summarizes the results of running IDA* on these two problem sets. For each problem set, the table lists the number of mutual exclusion constraints, the number of states expanded, the CPU time, and the amount of run-time memory used. As the table shows, for problems with few mutual exclusion constraints, the number of states expanded in both cases remains the same and CP consistently takes less time than RCP, since RCP does more work each time. However, for all problems where RCP resulted in a saving in terms of states expanded, RCP always takes less CPU time. RCP also uses slightly more run-time memory in all examples, but always within a factor of 4 when compared to CP. The breakdown between 15 and 20 mutual exclusions in the first data set and between 20 and 30 in the second data set may be sudden because a particular "hard problem" region threshold is crossed in those ranges.
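The iterative-deepening threshold scheme used in these experiments (prune any partial solution whose cost plus admissible estimate exceeds the threshold, then restart with the smallest f-value that exceeded it) can be sketched generically. The scheduling state space is not reproduced here; the toy successor and heuristic functions at the bottom are stand-ins, not the paper's Prolog implementation.

```python
# Generic sketch of the IDA*-style threshold scheme described above: prune
# when g + h exceeds the threshold; seed the next threshold with the
# smallest f-value that was pruned. With an admissible h, the first goal
# reached within the final threshold has minimal cost.

import math

def ida_star(start, is_goal, successors, h):
    threshold = h(start)
    while True:
        next_threshold = math.inf
        stack = [(start, 0)]
        while stack:
            state, g = stack.pop()
            f = g + h(state)
            if f > threshold:
                next_threshold = min(next_threshold, f)  # seed next iteration
                continue
            if is_goal(state):
                return g
            stack.extend((s, g + c) for s, c in successors(state))
        if next_threshold == math.inf:
            return None          # no solution exists
        threshold = next_threshold

# Toy check: shortest path from 0 to 3 with unit steps, h = remaining distance.
sol = ida_star(0, lambda s: s == 3,
               lambda s: [(s + 1, 1)] if s < 3 else [],
               lambda s: 3 - s)
print(sol)  # 3
```

In the scheduling instantiation, a state would be the (precedence graph, early schedule, remaining mutual exclusions) triple, g the current critical path length, and h the CP or RCP estimate.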
We have yet to understand what aspect of a scheduling problem determines actual running time, but we suspect that there is some threshold of the average number of mutual exclusions per job that separates the hard from the easy problems.

In summary, RCP works better than CP in all cases where the critical path length is not optimal, which is typically the case in real-world (non-artificial) problems, where it is highly probable that constraints other than precedence constraints play a major role in dictating the total project duration. Therefore, RCP will result in better performance in most real-world cases. We are currently running more extensive experiments. One major problem in this domain is that there is a lack of good benchmark hard problems, and that "easy" problems might skew randomly generated data sets unless the easy ones are filtered out. We hope to produce a set of such good benchmark problems.

7 General Reconstitution

This paper has described one instance of reconstitution: use an efficient algorithm to generate an optimal (shortest duration) solution to an abstract problem and then calculate how adding back certain constraints increases the duration of this solution. We have identified one other type of reconstitution, this one involving abstracted problems that are decomposable into independent subproblems. This type of reconstitution involves calculating how the sum of shortest path lengths for each of the independent subproblems increases when abstracted dependencies are added back. For example, the Manhattan Distance heuristic for sliding block puzzles is derivable by ignoring (abstracting) the location of the blank - a shortest path solution to the abstracted problem is the Manhattan Distance.
Since the rectilinear distance of each tile to its final destination can be independently computed, the abstracted problem that generates the Manhattan distance can be decomposed into a set of independently solvable subproblems, one for each tile. If the blank is added back to each subproblem, then dependencies such as linear conflicts (i.e., two tiles that must pass through each other to reach their goal destinations) can be efficiently detected and the solution path length can be boosted by two for each such conflict. We are currently implementing a general-purpose reconstitution algorithm for decomposable abstracted problems.

8 Related Work

The idea that abstraction-derived heuristics can sometimes be made more effective by taking into account certain details ignored by the abstracted problem was first expressed by Hansson, Mayer, and Yung [14]. In particular, they hand-derived a new effective admissible sliding block puzzle heuristic (the LC heuristic) by taking into account those linear tile conflicts (same row or column) ignored by the Manhattan distance heuristic. We have extended this idea to a problem involving time (project scheduling) rather than solution path length.

Instead of using an abstract solution as a heuristic measure, others (such as [17]) use it as a skeleton for producing a solution; the remaining details are then filled in by refinement. One drawback of this method is that the abstract solution often cannot be refined; backtracking between the original and abstract level must continue until a refinable solution is found. As a result, researchers have tried to find specific types of abstractions that can always be refined, or refined with little backtracking [18]. However, guaranteed refinable abstractions are difficult to find.

In general, heuristics are efficient approximations of lookahead searches [20]. A heuristic approximates the search by either ignoring certain paths or adding additional paths to reduce search complexity.
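The Manhattan distance and its row-conflict reconstitution can be sketched as follows. This is a simplified illustration (rows only; the full LC heuristic also counts column conflicts and avoids double-counting), assuming the standard goal ordering 1..n²-1 with the blank last; the function names are hypothetical.

```python
def manhattan(state, n):
    """Sum over tiles of rectilinear distance to the tile's goal square.
    state is a flat tuple; 0 is the blank and is ignored."""
    dist = 0
    for pos, tile in enumerate(state):
        if tile:
            goal = tile - 1  # assumed goal: tiles 1..n*n-1 in order, blank last
            dist += abs(pos // n - goal // n) + abs(pos % n - goal % n)
    return dist

def row_conflicts(state, n):
    """Add 2 for each pair of tiles sitting in their goal row in reversed
    order: one of them must leave the row, costing two extra moves."""
    extra = 0
    for row in range(n):
        in_goal_row = [t for t in state[row * n:(row + 1) * n]
                       if t and (t - 1) // n == row]
        for i in range(len(in_goal_row)):
            for j in range(i + 1, len(in_goal_row)):
                if in_goal_row[i] > in_goal_row[j]:
                    extra += 2
    return extra

def heuristic(state, n):
    """Manhattan distance boosted by detected row conflicts."""
    return manhattan(state, n) + row_conflicts(state, n)
```

On the 8-puzzle state with tiles 1 and 2 swapped, the Manhattan distance is 2 and the single row conflict adds 2, so the boosted heuristic is 4.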
If a full-width lookahead search is used as a heuristic for solving the original problem in an algorithm such as A*, then the complexity of A* will be worse than that of A* without any heuristics. What the heuristic does is trade off efficiency for accuracy in approximating this full-width search, thus making it worthwhile for algorithms such as A* to use the heuristic.

9 Conclusions and Future Work

This paper has described an instance of a general three-step problem-solving paradigm: abstract, solve, reconstitute. Certain details of the original problem are removed by abstraction. Next, the abstracted problem is efficiently solved. Finally, the abstracted details are reconstituted back into this solution. This reconstituted solution is then used as a guide for solving the original problem. Our results of applying this paradigm to project scheduling, where reconstitution was used to generate a novel effective admissible heuristic (RCP), suggest that reconstitution can make good admissible heuristics even better.

This paradigm as applied to project scheduling has several shortcomings. First, complex project scheduling problems often involve resource constraints with fixed limits for each job, typically specifying a number of fixed resource units that cannot be exceeded, rather than the absolute resource constraints of our model; it is not clear to us how to recast such resource constraints as mutual exclusion constraints. However, Davis and Heidorn [5] show a branch-and-bound solution to the problem. They describe a preprocessor algorithm that expands a job with duration k into a sequence of k unit-duration jobs, each successively linked with a "must immediately precede" precedence relation. After this expansion, a standard branch-and-bound project scheduling algorithm can be run.

Table 1: Comparative Performance Analysis of the CP and RCP Heuristics with IDA* Search
Unfortunately, such expansion can result in enormous project networks for projects with long-duration jobs. A second shortcoming is that not all scheduling constraints can be recast as precedence constraints. For example, a constraint that a particular job must start only after a certain time cannot be recast as a precedence constraint. Effective admissible heuristics that reflect such general constraints would be an important contribution to scheduling.

How does the amount of reconstitution quantitatively relate to the accuracy of the resulting heuristics? How much reconstitution is enough? This paper shows one data point at which reconstitution pays off. Since reconstitution is the inverse of abstraction, results that quantitatively link abstractness to the accuracy of the resulting heuristics should be applicable [6, 30].

Finally, although this paper has described a method for generating better admissible heuristics from existing ones, the process of discovering heuristics such as the RCP heuristic is far from automatic. We are currently extending this method to job-shop scheduling problems of the sort described in [8]. In a job-shop problem, n jobs are to be scheduled on m machines with varying durations per job per machine. We hope to develop a set of general principles that practitioners in the scheduling field can follow to derive effective heuristics, and eventually to automate the discovery process.

References

[1] M. L. Balinski. Integer programming: Methods, uses, computation. Management Science, November 1965.
[2] C. E. Bell and K. Park. Solving resource-constrained project scheduling problems by A* search. Naval Research Logistics, 37:61-84, 1990.
[3] J. D. Brand, W. L. Meyer, and L. R. Schaffer. The resource scheduling problem in construction. Technical Report 5, Dept. of Civil Engineering, University of Illinois, Urbana, 1964.
[4] S. Chenoweth and H. Davis. High-performance A* search with rapidly growing heuristics. In Proceedings IJCAI-12, Sydney, Australia, August 1991. International Joint Conferences on Artificial Intelligence.
[5] E. W. Davis and G. E. Heidorn. An algorithm for optimal project scheduling under multiple resource constraints. Management Science, 17(12), August 1971.
[6] R. Davis and A. Prieditis. The expected length of a shortest path. Information Processing Letters, 1993. To appear.
[7] M. Dincbas, H. Simonis, and P. Van Hentenryck. Solving large combinatorial problems in logic programming. Journal of Logic Programming, 8(1 and 2):75-93, 1990.
[8] M. S. Fox. Constraint-Directed Search: A Case Study of Job-Shop Scheduling. Pitman, 1984.
[9] M. Garey and D. Johnson. Computers and Intractability. W. H. Freeman, San Francisco, 1979.
[10] J. Gaschnig. A problem-similarity approach to devising heuristics. In Proceedings IJCAI-6, pages 301-307, Tokyo, Japan, 1979. International Joint Conferences on Artificial Intelligence.
[11] D. Graham and H. Nuttle. A comparison of heuristics for a school bus scheduling problem. Transportation, 20(2):175-182, 1986.
[12] G. Guida and M. Somalvico. A method for computing heuristics in problem solving. Information Sciences, 19:251-259, 1979.
[13] O. Hansson, A. Mayer, and M. Valtorta. A new result on the complexity of heuristic estimates for the A* algorithm. Artificial Intelligence, 55(1), 1992.
[14] O. Hansson, A. Mayer, and M. Yung. Criticizing solutions to relaxed models yields powerful admissible heuristics, 1992. To appear in Information Sciences.
[15] E. Horowitz and S. Sahni. Fundamentals of Data Structures. Computer Science Press, Rockville, Maryland, 1978.
[16] D. Kibler. Natural generation of heuristics by transforming the problem representation. Technical Report TR-85-20, Computer Science Department, UC-Irvine, 1985.
[17] C. Knoblock. Learning abstraction hierarchies for problem solving. In Proceedings AAAI-90, Boston, MA, 1990. American Association for Artificial Intelligence.
[18] C. Knoblock. Search reduction in hierarchical problem-solving. In Proceedings AAAI-91, Anaheim, CA, 1991. American Association for Artificial Intelligence.
[19] R. Korf. Depth-first iterative-deepening: An optimal admissible tree search. Artificial Intelligence, 27(2):97-109, 1985.
[20] R. Korf. Real-time heuristic search: New results. In Proceedings AAAI-88, St. Paul, MN, 1988. American Association for Artificial Intelligence.
[21] C. L. Moodie and D. E. Mandeville. Project resource balancing by assembly line balancing techniques. Journal of Industrial Engineering, July 1966.
[22] J. Mostow, T. Ellman, and A. Prieditis. A unified transformational model for discovering heuristics by idealizing intractable problems. In AAAI-90 Workshop on Automatic Generation of Approximations and Abstractions, pages 290-301, July 1990.
[23] J. Mostow and A. Prieditis. Discovering admissible heuristics by abstracting and optimizing. In Proceedings IJCAI-11, Detroit, MI, August 1989. International Joint Conferences on Artificial Intelligence.
[24] N. J. Nilsson. Principles of Artificial Intelligence. Morgan Kaufmann, Palo Alto, CA, 1980.
[25] J. Pearl. Heuristics: Intelligent Search Strategies for Computer Problem-Solving. Addison-Wesley, Reading, MA, 1984.
[26] I. Pohl. The avoidance of (relative) catastrophe, heuristic competence, genuine dynamic weighting and computational issues in heuristic problem solving. In Proceedings IJCAI-3, pages 20-23, Stanford, CA, August 1973. International Joint Conferences on Artificial Intelligence.
[27] A. Prieditis. Discovering Effective Admissible Heuristics by Abstraction and Speedup: A Transformational Approach. PhD thesis, Rutgers University, 1990.
[28] A. Prieditis. Machine discovery of effective admissible heuristics. In Proceedings IJCAI-12, Sydney, Australia, August 1991. International Joint Conferences on Artificial Intelligence.
[29] A. Prieditis. Machine discovery of effective admissible heuristics. Machine Learning, 1993. To appear.
[30] A. Prieditis and R. Davis. Quantitatively relating accuracy to abstractness of abstraction-derived heuristics. Artificial Intelligence, 1993. Submitted.
[31] F. J. Radermacher. Scheduling of project networks. Journal of Operations Research, 4(1):227-252, 1985.
[32] M. Valtorta. A result on the computational complexity of heuristic estimates for the A* algorithm. Information Sciences, 34:47-59, 1984.
Iterative Weakening: Optimal and Near-Optimal Policies for the Selection of Search Bias

Foster John Provost
Department of Computer Science
University of Pittsburgh
foster@cs.pitt.edu

Abstract

Decisions made in setting up and running search programs bias the searches that they perform. Search bias refers to the definition of a search space and the definition of the program that navigates the space. This paper addresses the problem of using knowledge regarding the complexity of various syntactic search biases to form a policy for selecting bias. In particular, this paper shows that a simple policy, iterative weakening, is optimal or nearly optimal in cases where the biases can be ordered by computational complexity and certain relationships hold between the complexity of the various biases. The results are obtained by viewing bias selection as a (higher-level) search problem. Iterative weakening evaluates the states in order of increasing complexity. An offshoot of this work is the formation of a near-optimal policy for selecting both breadth and depth bounds for depth-first search with very large (possibly unbounded) breadth and depth.

Introduction

For the purposes of this paper, search bias refers to the definition of a search space and the definition of the program that navigates the space (cf., inductive bias in machine learning [Mitchell, 1980], [Utgoff, 1984], [Rendell, 1986], [Provost, 1992]). Bias choices are purely syntactic if they are not based on domain knowledge; otherwise they are semantic. In this work, except where I refer to the incorporation of knowledge into the search program (e.g., the addition of heuristics), bias refers to syntactic bias choices. The choice of a depth-first search is a coarse-grained choice; the choice of a maximum depth of d is a finer-grained choice. Search policy refers to the strategy for making bias choices based on underlying assumptions and knowledge (cf., inductive policy [Provost & Buchanan, 1992a]).
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

This paper addresses the problem of selecting from among a set of bias choices based solely on complexity knowledge. I show that in certain cases, optimal or near-optimal policies can be formed.

The problem is attacked by viewing bias selection as a (higher-level) state-space search problem, where the states are the various biases and the goal is to find a bias which is satisfactory with respect to the underlying search goal (e.g., a search depth sufficient for finding the lower-level goal). For the purposes of the current exposition, let us assume that no knowledge is transferred across biases, i.e., the search with one bias has no effect on the search with another bias. So the higher-level problem is a search problem where the cost of evaluating the various states is not uniform, and we know (at least asymptotically) the complexity of the evaluation of each state. I will refer to the worst-case time complexity of searching with a given bias as the complexity of that bias. Using worst-case time complexity side-steps the problem that some problems may be inherently "easier" than others with a given bias, and allows biases to be ordered independently of the distribution of problems that the search program will encounter.

If we view the strength of a bias as analogous to the complexity of that bias, we can define the policy iterative weakening to be: evaluate the biases in order of increasing complexity. (The term iterative weakening is borrowed from the iterative deepening of [Korf, 1985] and iterative broadening of [Ginsberg and Harvey, 1990], which are special cases of the general technique.) In cases where the states (biases) can be grouped into equivalence classes based on complexity, where there is an exponential increase in complexity between classes, and where the rate of growth of the cardinality of
the classes is not too great (relatively), iterative weakening can be shown to be a near-optimal policy with respect to the complexity of only evaluating the minimum complexity goal state (optimal in simple cases).

Consider an example from machine learning as search, where being able to select from among different search biases is particularly important. The complexity of a non-redundant search of a space of conjunctive concept descriptions with maximum length k is polynomial in the number of features and exponential in k. Given a fixed set of features, iterative weakening would dictate searching with k = 1, k = 2, ..., until a satisfactory concept description is found.

However, the size of an ideal feature set might not be manageable. In many chemistry domains the properties and structure of chemicals provide a very large set of features for learning; for example, in the Meta-DENDRAL domain the task is to learn cleavage rules for chemical mass spectrometry [Buchanan & Mitchell, 1978]. In such domains with effectively infinite sets of features, knowledge may be used to order the features by potential relevance. However, it may not be known a priori how many of the most relevant features will be necessary for satisfactory learning.

Many existing learning programs represent concept descriptions as sets of rules, each rule being a conjunction of features (e.g., [Quinlan, 1987], [Clark & Niblett, 1989], [Clearwater & Provost, 1990]). The space of conjunctive rules can be organized as a search tree rooted at the rule with no features in the antecedent, where each child is a specialization of its parent created by adding a single conjunct. A restriction on the depth of the search tree restricts the maximum complexity of the description language (the number of conjuncts in a rule's antecedent). A restriction on the breadth of the search restricts the list of features considered.
A depth-first search of this space would not only face the classic problem of determining a satisfactory search depth (see Section 3), but also the problem of (simultaneously) determining a satisfactory search breadth. In Section 5 I develop a near-optimal policy for selecting both the depth and the breadth of a depth-first search.

Optimal Policies

The heuristic behind iterative weakening policies is by no means new. As mentioned above, and discussed further below, iterative deepening and iterative broadening are special cases of the general technique. Simon and Kadane [Simon and Kadane, 1975] show that in cases where knowledge is available regarding the cost of a search and the probability of the search being successful, an "optimal" strategy is to perform the searches in order of increasing probability/cost ratio. In the case where the probability distribution is uniform (or is assumed to be because no probability information is available), this reduces to a cheapest-first strategy. Slagle [Slagle, 1964] also discusses what he calls ratio-procedures, where tasks are carried out in order of the ratio of benefit to cost, and shows that these "often serve as the basis of a minimum cost procedure" (p. 258).

However, the problem addressed in this paper is a different one from that addressed by Simon and Kadane and by Slagle. Their work showed that the cheapest-first strategy is a minimum cost strategy with respect to the other possible orderings of the biases. In this paper, the term optimal will be used to denote a policy whose asymptotic complexity is no worse than that of a policy that knows a priori the minimum cost bias that is sufficient for finding the (lower-level) goal. To illustrate, given n search procedures, p1, p2, ..., pn, previous work addressed finding an ordering of the pi's such that finding the goal will be no more expensive than with any other ordering of the pi's.
In contrast, I address the problem of ordering the pi's such that finding the goal will be as inexpensive (or almost as inexpensive) as only using the minimum-cost search procedure.

This paper shows that in some cases the cheapest-first strategy is almost as good (asymptotically) as a strategy that knows the right bias a priori. The implication is that in these cases, it is a better investment to apply knowledge to reduce the complexity of the underlying task (e.g., by introducing heuristics based on the semantics of the domain) than to use it to aid in the selection of (syntactic) search bias (discussed more below).

A Single Dimensional Space

Let us assume the states of our (higher-level) search space can be indexed by their projection onto a single dimension, and that the projection gives us integer values. In a machine learning context this could be the case where the different biases are different types of hypothesis-space search, different degrees of complexity of the description language (e.g., number of terms in the antecedent of a rule), different search depths, etc. From now on, let us refer to the states (biases) by their indices, i.e., i denotes the state that gives value i when projected onto the dimension in question. Without loss of generality, let us assume that i1 ≤ i2 implies that the complexity of evaluating i1 is less than or equal to the complexity of evaluating i2. Let c(i) denote the complexity of evaluating i.

Iterative weakening is a rather simple policy in these cases. It specifies that the states should be evaluated in order of increasing i. It may seem that iterative weakening is a very wasteful policy, because a lot of work might be duplicated in evaluating all the states. However, if c(i) is exponential in i, then the arguments of [Korf, 1985] apply.
Korf shows that iterative deepening (iterative weakening along the search-depth dimension) is an optimal policy with respect to time, space, and cost of solution path. In short, since the cost of evaluating i increases exponentially, the complexity of iterative deepening differs from that of searching with the correct depth by only a constant factor. Thus "knowing" the right bias buys us nothing in the limit. This paper concentrates solely on time complexity.

Theorem: (after [Korf, 1985]) Iterative weakening is an asymptotically optimal policy, with respect to time complexity, for searching a single-dimensional space where the cost of evaluating state i is O(b^i).

Iterative broadening is a similar technique introduced in [Ginsberg and Harvey, 1990], where the dimension in question is the breadth of the search. In this case, the complexity increases only polynomially in i; however, the technique is shown to still be useful in many cases (a characterization of when iterative broadening will lead to a computational speedup is given).

Theorem: (after [Ginsberg and Harvey, 1990]) Iterative weakening is an asymptotically near-optimal policy, with respect to time complexity, for searching a single-dimensional space where the cost of evaluating state i is O(i^d). (It is within a dth-root factor of optimal; see [Provost, 1993].)

A similar technique is used in [Linial, et al., 1988] for learning with an infinite VC dimension. If a concept class C can be decomposed into a sequence of subclasses C = C1 ∪ C2 ∪ ... such that each Ci has VC dimension at most i, then iterative weakening along the VC dimension is shown to be a good strategy (given certain conditions).

Thus, previous work helps us to characterize the usefulness of iterative weakening along a single dimension. However, in specifying a policy for bias selection there may be more than one dimension along which the bias can be selected. The rest of this paper considers multi-dimensional spaces.
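As a concrete illustration of the single-dimensional policy, the following sketch (with a hypothetical interface) evaluates biases in increasing order of index; iterative deepening is recovered by taking the bias to be a depth bound on a depth-first search.

```python
def iterative_weakening(search_with_bias, max_bias=50):
    """Evaluate biases in order of increasing index (and hence increasing
    complexity).  search_with_bias(i) is a hypothetical interface: it runs
    the underlying search under bias i and returns a solution or None."""
    for i in range(1, max_bias + 1):
        result = search_with_bias(i)
        if result is not None:
            return i, result
    return None

def dfs_to_depth(tree, node, goal, depth):
    """Depth-first search bounded by `depth`; iterative deepening is
    iterative weakening with the depth bound as the bias."""
    if node == goal:
        return [node]
    if depth == 0:
        return None
    for child in tree.get(node, ()):
        sub = dfs_to_depth(tree, child, goal, depth - 1)
        if sub is not None:
            return [node] + sub
    return None

# Toy search space: the goal 'e' lies two levels below the root 'a'.
tree = {'a': ['b', 'c'], 'b': ['d'], 'c': ['e']}
bias, path = iterative_weakening(lambda d: dfs_to_depth(tree, 'a', 'e', d))
```

Here the policy fails at depth bound 1 and succeeds at 2, returning the minimal sufficient bias along with the solution path.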
Multi-Dimensional Spaces

Consider the general problem of a search where the states have different costs of evaluation (in terms of complexity). We want to find a good policy for searching the space. Let each state be indexed according to its projection onto multiple dimensions, and let us refer to the state by its vector of indices ī (assume, for the moment, that there is a one-to-one correspondence between states and indices). Let c(ī) be the complexity of evaluating state ī. Iterative weakening specifies that the states (biases) should be evaluated by increasing complexity. Let us consider some particular state complexity functions. (For clarity I will limit the remaining discussion to two dimensions, but mention results for n dimensions. A more detailed treatment can be found in [Provost, 1993].)

Sequential Searches

Consider a particular c(ī): c(i, j) = b^i + b^j. This is the complexity function for the situation where two (depth-first) searches must be performed, and both subgoals must be discovered before the searcher is sure that either is actually correct. How well will iterative weakening do on this problem? The following theorem shows that it is nearly optimal: within a log factor. For the rest of the paper, let ī_g = (i_g, j_g) denote the minimum-complexity goal state, and let b > 1.

Proposition: Given a search problem where the complexity of evaluating state (i, j) is b^i + b^j, any asymptotically optimal policy for searching the space must have worst-case time complexity O(b^m), where m = max(i_g, j_g) (the complexity of evaluating the minimum-complexity goal state).

Theorem: Given a search problem where the complexity of evaluating state (i, j) is b^i + b^j, iterative weakening gives a time complexity of O(m b^m), where m = max(i_g, j_g).

Proof: In the worst case, iterative weakening evaluates all states ī such that c(ī) ≤ c(ī_g), where ī_g = (i_g, j_g) is the (minimum-complexity) goal state. Thus the overall complexity of the policy is:

    Σ_{(i,j) : b^i + b^j ≤ b^{i_g} + b^{j_g}} (b^i + b^j).

The terms that make up the sum can be grouped into equivalence classes based on complexity. Let a term b^k be in class C_k. Then the overall complexity becomes:

    Σ_{k=1}^{m} |C_k| b^k,

where |C_k| denotes the cardinality of the set of equivalent terms. The question remains as to the number of such terms (of complexity b^k). The answer is the number of vectors (i, j) whose maximum element is k, plus the number of vectors (i, j) whose minimum element is k. The number of such vectors is at most 2m, so the overall complexity is Σ_{k=1}^{m} 2m b^k, which is O(m b^m).

Corollary: Given a search problem where the complexity of evaluating state (i, j) is b^i + b^j, iterative weakening is within a log factor of optimal.

Proof: The optimal complexity for this problem is O(N) = O(b^m); iterative weakening has complexity O(m b^m) = O(N log N).

For n dimensions, the proximity to being optimal is dependent on n. In general, for such searches iterative weakening is O(m^{n-1} b^m) = O(N (log N)^{n-1}). (See [Provost, 1993].)

If we have more knowledge about the problem than just the complexity of evaluating the various states, we can sometimes come up with a better policy. In this case, the policy that immediately springs to mind is to let i = j and search to depth i = 1, 2, ... in each (lower-level) space. This is, in fact, an optimal policy; the amount of search performed is

    Σ_{k=1}^{m} 2 b^k = O(b^m).

We have, in effect, collapsed the problem onto a single dimension. The particular extra knowledge we use in specifying this optimal policy is that a solution found in state ī1 will also be found in ī2 if ī1 is componentwise less than or equal to ī2. (As is the case for a pair of depth-first searches.)

A Search within a Search

Let us consider a search problem where the complexity of evaluating state (i, j) is b^{i+j}. This complexity function is encountered when evaluating the state involves a search within a search.
For example, consider a learning problem where there is a search for an appropriate model, with a search of the space of hypotheses for each model (e.g., to evaluate the model). Iterative weakening is once again competitive with the optimal policy.

Proposition: Given a search problem where the complexity of evaluating state (i, j) is b^{i+j}, any asymptotically optimal policy for searching the space must have worst-case time complexity O(b^m), where m = i_g + j_g (the complexity of evaluating the minimum-complexity goal state).

Theorem: Given a search problem where the complexity of evaluating state (i, j) is b^{i+j}, iterative weakening gives a time complexity of O(m b^m), where m = i_g + j_g.

Proof: Similar to the previous proof. Note that in this case, the cardinality of the set of equivalent terms is equal to the number of vectors (i, j) whose components sum to k, which is k - 1 (given positive components). Thus the overall complexity of the policy is Σ_{k=1}^{m} (k - 1) b^k, which is O(m b^m).

Corollary: Given a search problem where the complexity of evaluating state (i, j) is b^{i+j}, iterative weakening is within a log factor of optimal.

Proof: The optimal complexity for this problem is O(N) = O(b^m); iterative weakening has complexity O(m b^m) = O(N log N).

For n dimensions, the proximity to being optimal is dependent on n. In general, for such searches iterative weakening has complexity O(m^{n-1} b^m) = O(N (log N)^{n-1}). (See [Provost, 1993].)

In this case, the policy of collapsing the space and iteratively weakening along the dimension i = j does not produce an optimal policy. If we let m = i_g + j_g, in the worst case, as m → ∞, the i = j policy approaches b^m times worse than optimal.

Important: Relative Growth

As we have seen from the preceding examples, in general, the important quantity is the growth of the complexity function relative to the growth of the number of states exhibiting a given complexity.
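The worst case analyzed above (evaluate every state whose cost does not exceed the goal's) can be checked numerically. The helper below is a hypothetical illustration; the index cap `bound` is only there to keep the enumeration finite.

```python
def weakening_order(cost, goal_cost, bound=20):
    """All states (i, j) with 1 <= i, j < bound whose cost does not exceed
    the goal's, in nondecreasing cost order: the worst-case set of states
    evaluated by multi-dimensional iterative weakening."""
    states = [(i, j) for i in range(1, bound) for j in range(1, bound)]
    return sorted((s for s in states if cost(*s) <= goal_cost),
                  key=lambda s: cost(*s))

b = 2
nested_cost = lambda i, j: b ** (i + j)   # the "search within a search" case

# Worst case for goal state (3, 2): every state with cost <= b**5 is tried.
evaluated = weakening_order(nested_cost, nested_cost(3, 2))
work = sum(nested_cost(i, j) for i, j in evaluated)   # total search effort
```

With b = 2 and goal (3, 2), the policy evaluates the ten states with i + j ≤ 5 for total work Σ_{k=2}^{5} (k-1) 2^k = 196, against an optimal cost of 2^5 = 32, consistent with the O(m b^m) versus O(b^m) bounds.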
In the cases where there is but one state for each complexity (e.g., iterative deepening, iterative broadening) we have seen that the faster the rate of growth of the complexity function, the closer to optimal. In multidimensional cases, as the dimensionality increases the policy becomes further from optimal, because the number of states of a given complexity increases more rapidly.

Let us now consider a multidimensional problem with a (relatively) faster growing complexity function, namely c(ī) = b^{ij}. This is another function where the strategy of choosing i = j and searching i = 1, 2, ... is not optimal, even if we have the extra knowledge outlined above. If we let m = i_g j_g, in the worst case, as m → ∞, the ratio of the overall complexity of the i = j policy to the optimal approaches b^{m^2 - m} (very much worse than optimal).

However, iterative weakening does very well here, even better than in the previous case. The following theorem shows that in this case, it is within a root-log factor of being an optimal policy.

Proposition: Given a search problem where the complexity of evaluating state (i, j) is b^{ij}, any asymptotically optimal policy for searching the space must have worst-case time complexity O(b^m), where m = i_g j_g (the complexity of evaluating the minimum-complexity goal state).

Theorem: Given a search problem where the complexity of evaluating state (i, j) is b^{ij}, iterative weakening gives a time complexity of O(√m b^m), where m = i_g j_g.

Proof: Similar to the previous proofs. Note that in this case, the cardinality of the set of equivalent terms is equal to the number of factors of k, which is bounded by 2√k. Thus the overall complexity of the policy is at most Σ_{k=1}^{m} 2√k b^k, which is O(√m b^m).

Corollary: Given a search problem where the complexity of evaluating state (i, j) is b^{ij}, iterative weakening is within a root-log factor of optimal.

Proof: The optimal complexity for this problem is O(N) = O(b^m); iterative weakening has complexity O(√m b^m) = O(N √(log N)).
For n dimensions, the proximity to being optimal is again dependent on n. In general, for such searches iterative weakening can be shown to have complexity O(m^{log n} b^m) = O(N (log N)^{log n}). (See [Provost, 1993].) The above results suggest that this bound may not be tight.

The general problem can be illustrated with the following schema:

    Complexity(IW) ≤ c(ī_g) × |{ī : c(ī) ≤ c(ī_g)}|,

which is the complexity of evaluating the goal state, multiplied by the number of states with equal or smaller complexity. This gives slightly looser upper bounds in some cases, but illustrates that there are two competing factors involved: the growth of the complexity and the growth of the number of states. As we have seen, in some cases domain knowledge can be used to reduce the number of states, bringing a policy closer to optimal.

Knowledge Can Reduce No. of States: Combining Broadening and Deepening

For rule-space searches such as those defined for the chemical domains mentioned in Section 1, we want to select both a small but sufficient set of features (search breadth) and a small but sufficient rule complexity (search depth). Ginsberg and Harvey write, "An attractive feature of iterative broadening is that it can easily be combined with iterative deepening ... any of (the) fixed depth searches can obviously be performed using iterative broadening instead of the simple depth-first search" ([Ginsberg and Harvey, 1990], p. 220). This is so when the breadth bound is known a priori. It will be effective if the breadth bound is small. When neither exact breadth nor depth is known a priori, and the maxima are very large (or infinite), we are left with the problem of designing a good policy for searching the (high-level) space of combinations of b, the breadth of a given search, and d, the depth of a given search. The complexity of evaluating a state in this space is O(b^d).
Strict iterative weakening would specify that we order the states by this complexity, and search all states such that b^d ≤ b_g^{d_g} (the goal state). We begin to see two things: (i) the analysis is not going to be as neat as in the previous problems, and (ii) as d grows, there will be a lot of different values of b to search. The second point makes us question whether the policy is going to be close to optimal; the first makes us want to transform the problem a bit anyway.

In this problem we can use the knowledge that a state ī is a goal state if ī is componentwise greater than or equal to ī_g. Since b^d can be written as 2^{d log(b)}, our intuition tells us that it might be a good idea to increment b exponentially (in powers of 2). We can then rescale our axes for easier analysis. Let b' = log(b), and consider integer values of b'. We now have the problem of searching a space where the complexity of searching state (b', d) is 2^{b'd}. We know that iterative weakening is a near-optimal policy for such a space.

Unfortunately, the overshoot along the b dimension gets us into trouble. Given that the complexity of evaluating the (minimum-complexity) goal state is O(2^{d_g log(b_g)}), the first "sufficient" state reached using our transformed dimensions would be (b'_g, d_g), where b'_g = ⌈log(b_g)⌉. The difference in complexity between evaluating the minimum-complexity goal and the new goal is the difference between O(2^{d ⌈log(b)⌉}) and O(2^{d log(b)}), which in the worst case approaches a factor of 2^d.

The solution to this problem is to decrease the step size of the increase of b'. A satisfactory step size is found by collapsing the space onto the (single) dimension k = d log(b), and only considering integer values of k. Because we are now looking at stepping up a complexity of O(2^{⌈d log(b)⌉}) (rather than O(2^{d ⌈log(b)⌉})), the overshoot of the minimum-complexity goal state is never more than a factor of 2, which does not affect the asymptotic complexity.
Using iterative weakening along this new axis brings us to within a log factor of optimal.

Proposition: Given a depth-first search problem where the complexity of evaluating state (b, d) is b^d (for possibly unbounded b and d), any asymptotically optimal policy for searching the space must have worst-case time complexity O(2^m), where m = d_g log(b_g) (the complexity of evaluating the minimum-complexity goal state).

Theorem: Given a depth-first search problem where the complexity of evaluating state (b, d) is b^d, iterative weakening in integer steps along the dimension k = d log(b) gives a time complexity of O(m' 2^(m')), where m' = ⌈d_g log(b_g)⌉.

Proof: In the worst case, iterative weakening evaluates all states k such that k is an integer and c(k) ≤ c(⌈k_g⌉), where c(k) is the complexity function along the k axis, and k_g is the (minimum-complexity) goal state. Thus the overall complexity of the policy is:

  Σ_{k : c(k) ≤ c(⌈k_g⌉), k an integer} c(k)  =  Σ_{k : 2^k ≤ 2^(m'), k an integer} 2^k.

The states that make up the sum can be grouped into equivalence classes based on complexity. Let a state (b, d) be in class C_k iff d log(b) = k (for integer k). Then the overall complexity becomes Σ_{k=1}^{m'} |C_k| 2^k, where |C_k| denotes the cardinality of the set of equivalent states. The question remains as to the number of states with complexity 2^k. The answer is the number of vectors (b, d) where d log(b) = k (for integer k). Since d is an integer, the number of such vectors is at most k, so the overall complexity is at most Σ_{k=1}^{m'} k 2^k, which is O(m' 2^(m')).

Corollary: Given a depth-first search problem where the complexity of evaluating state (b, d) is b^d, iterative weakening in integer steps along the dimension k = d log(b) is within a log factor of optimal.

Proof: The optimal complexity for this problem is O(N) = O(2^m), where m = d_g log(b_g); iterative weakening has a complexity of O(m' 2^(m')). Since m' = ⌈m⌉, m' ≤ m + 1.
So iterative weakening has a complexity of O((m + 1) 2^(m+1)) = O(m 2^m) = O(N log N).

When IW is not a Good Policy

Several problem characteristics rule out iterative weakening as a near-optimal policy. The smaller the relative growth of the complexity of the states (with respect to the growth of the number of states with a given complexity), the farther from optimal the policy becomes. For example, in one dimension, if c(i) = i then the optimal policy is O(i), whereas iterative weakening is O(i^2) even when there is only one state per equivalence class. On the other hand, the rate of growth may be large, but so too might the size of the class of states with the same complexity. In the previous sections, we saw equivalence classes of states with cardinalities whose growth was small compared to the growth of the class complexities. If, instead of counting the number of factors of k or the number of pairs that sum to k, we had an exponential or combinatorial growth in the size of the classes, iterative weakening would fail to come close to optimal. (The b^d problem was one where the number of terms grew rapidly.) One reason for a very large growth in the size of the equivalence classes is a choice of dimensions where there is a many-to-one mapping from states into state vectors. Thus, in the chemical domains, for iterative weakening to be applicable it is essential to be able to order the terms based on prior relevance knowledge. The ordering allows a policy to choose the first b terms, instead of all possible subsets of b terms.

Conclusions

The simple policy of iterative weakening is an asymptotically optimal or near-optimal policy for searching a space where the states can be ordered by evaluation complexity, and they can be grouped into equivalence classes based on complexity, where the growth rate of the complexities is large and the growth rate of the size of the classes is (relatively) small.
This has important implications with respect to the study of bias selection. If the bias selection problem that one encounters fits the criteria outlined above, it may not be profitable to spend time working out a complicated scheme (e.g., using more domain knowledge to guide bias selection intelligently). The time would be better spent trying to reduce the complexity of the underlying biases (e.g., using more domain knowledge for lower-level search guidance). On the other hand, if the complexity of the biases is such that iterative weakening cannot come close to the optimal policy, it might well be profitable to spend time building a policy for more intelligent navigation of the bias space. For example, domain knowledge learned searching with one bias can be used to restrict further the search with the next bias (see [Provost & Buchanan, 1992b]).

This paper assumed that the problem was to choose from a fixed set of biases. Another approach would be to try to find a bias, not in the initial set, that better solves the problem. By reducing the complexity of the underlying biases, as mentioned above, one is creating new (perhaps semantically based) biases with which to search. Even if iterative weakening is an optimal policy for selecting from among the given set of biases, a better bias might exist that is missing from the set. (As a boundary case of a semantically based bias, consider this: once you know the answer, it may be easy to prune away most or all of the search space.)

The dual search problem and combining deepening and broadening are examples of when additional knowledge of relationships between the biases can be used to come up with policies closer to optimal than strict iterative weakening. In these cases, knowledge about the subsumption of one bias by another is used to collapse the bias space onto a single dimension. In the former case, iterative weakening along the single dimension was then an optimal policy.
In the breadth and depth selection problem, the knowledge about the subsumption of biases is sufficient to give a near-optimal policy. Utilizing more knowledge at the bias-selection level will not help very much unless the complexity of the underlying biases is reduced first.

Acknowledgments

I would like to thank Bruce Buchanan, Haym Hirsh, Kurt Van Lehn, Bob Daley, Rich Korf, Jim Rosenblum, and the paper's anonymous reviewers for helpful comments. This work was supported in part by an IBM graduate fellowship and funds from the W.M. Keck Foundation.

References

Buchanan, B., and T. Mitchell, 1978. Model-directed Learning of Production Rules. In Waterman and Hayes-Roth (eds.), Pattern Directed Inference Systems, 297-312. Academic Press.

Clark, P., and T. Niblett, 1989. The CN2 Induction Algorithm. Machine Learning, 3: 261-283.

Clearwater, S., and F. Provost, 1990. RL4: A Tool for Knowledge-Based Induction. In Proceedings of the 2nd Int. IEEE Conf. on Tools for Artificial Intelligence, 24-30. IEEE C.S. Press.

Ginsberg, M., and W. Harvey, 1990. Iterative Broadening. In Proceedings of the Eighth National Conference on Artificial Intelligence, 216-220. AAAI Press.

Korf, R., 1985. Depth-First Iterative Deepening: An Optimal Admissible Tree Search. Artificial Intelligence 27: 97-109.

Linial, N., Y. Mansour, and R. Rivest, 1988. Results on Learnability and the VC Dimension. In Proceedings of the 1988 Workshop on Computational Learning Theory, 51-60. Morgan Kaufmann.

Mitchell, T., 1980. The Need for Biases in Learning Generalizations (Tech Rept CBM-TR-117). Dept of Comp Sci, Rutgers University.

Provost, F., 1992. Policies for the Selection of Bias for Inductive Machine Learning. Ph.D. Thesis. Dept of Comp Sci, Univ of Pittsburgh.

Provost, F., 1993. Iterative Weakening: Optimal and Near-Optimal Policies for the Selection of Search Bias. Tech Rept ISL-93-2, Intelligent Systems Lab, Dept of Comp Sci, Univ of Pittsburgh.

Provost, F., and B. Buchanan, 1992a. Inductive Policy. In Proceedings
of the Tenth National Conference on Artificial Intelligence, 255-261. AAAI Press.

Provost, F., and B. Buchanan, 1992b. Inductive Strengthening: The Effects of a Simple Heuristic for Restricting Hypothesis Space Search. In K. Jantke (ed.), Analogical and Inductive Inference (Lecture Notes in Artificial Intelligence 642), 294-304. Berlin: Springer-Verlag.

Quinlan, J., 1987. Generating Production Rules from Decision Trees. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence, 304-307. Morgan Kaufmann.

Rendell, L., 1986. A General Framework for Induction and a Study of Selective Induction. Machine Learning 1: 177-226. Boston, MA: Kluwer.

Simon, H., and J. Kadane, 1975. Optimal Problem-Solving Search: All-or-None Solutions. Artificial Intelligence 6: 235-247. North-Holland.

Slagle, J., 1964. An Efficient Algorithm for Finding Certain Minimum-Cost Procedures for Making Binary Decisions. Journal of the ACM 11(3): 253-264.

Utgoff, P., 1984. Shift of Bias for Inductive Concept Learning. Ph.D. thesis, Rutgers University.
Pruning Duplicate Nodes in Depth-First Search

Larry A. Taylor and Richard E. Korf*
Computer Science Department
University of California, Los Angeles
Los Angeles, CA 90024
ltaylor@cs.ucla.edu

Abstract

Best-first search algorithms require exponential memory, while depth-first algorithms require only linear memory. On graphs with cycles, however, depth-first searches do not detect duplicate nodes, and hence may generate asymptotically more nodes than best-first searches. We present a technique for reducing the asymptotic complexity of depth-first search by eliminating the generation of duplicate nodes. The automatic discovery and application of a finite state machine (FSM) that enforces pruning rules in a depth-first search has significantly extended the power of search in several domains. We have implemented and tested the technique on a grid, the Fifteen Puzzle, the Twenty-Four Puzzle, and two versions of Rubik's Cube. In each case, the effective branching factor of the depth-first search is reduced, reducing the asymptotic time complexity.

Introduction - The Problem

Search techniques are fundamental to artificial intelligence. Best-first search algorithms such as breadth-first search, Dijkstra's algorithm [Dijkstra, 1959], and A* [Hart et al., 1968], all require enough memory to store all generated nodes. This results in exponential space complexity on many problems, making them impractical.

In contrast, depth-first searches run in space linear in the depth of the search. However, a major disadvantage of depth-first approaches is the generation of duplicate nodes in a graph with cycles [Nilsson, 1980]. More than one combination of operators may produce the same node, but since depth-first search does not store the nodes already generated, it cannot detect the duplicates. As a result, the total number of nodes generated by a depth-first search may be asymptotically more than the number of nodes generated by a best-first search.
*This research was partially supported by NSF Grant #IRI-9119825, and a grant from Rockwell International.

Figure 1: The grid search space, explored depth-first to depth 2, without pruning.

To illustrate, consider a search of a grid with the operators Up, Down, Left, and Right, each moving one unit. A depth-first search to depth r would visit 4^r nodes (Figure 1), since 4 operators are applicable to each node. But only O(r^2) distinct junctions are visited by a breadth-first search. Thus, a depth-first search has exponential complexity, while a breadth-first search has only polynomial complexity.

To reduce this effect, we would like to detect and prune duplicate nodes in a depth-first search. Unfortunately, there is no way to do this on an arbitrary graph without storing all the nodes. On a randomly connected explicit graph, for example, the only way to check for duplicate nodes is to maintain a list of all the nodes already generated. A partial solution to this problem is to compare new nodes against the current path from the root [Pearl, 1984]. This detects duplicates in the case that the path has made a complete cycle. However, as we saw in the grid example, duplicates occur when the search explores two halves of a cycle, such as up-left and left-up. Only a fraction of duplicates can be found by comparing nodes on the current path [Dillenburg and Nelson, 1993]. Other node caching schemes have been suggested [Ibaraki, 1978; Sen and Bagchi, 1989; Chakrabarti et al., 1989; Elkan, 1989], but their utility depends on the implementation (costs per node generation).

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.
Figure 2: The grid search space, explored depth-first to depth 2, with pruning.

We present a new technique for detecting duplicate nodes that does not depend on stored nodes, but on another data structure that can detect duplicate nodes that have been generated in the search's past, and nodes that will be generated in the future. This technique uses limited storage efficiently, uses only constant time per node searched, and reduces the effective branching factor, hence reducing the asymptotic time complexity of the search.

The FSM Pruning Rule Mechanism

Exploiting Structure

We take advantage of the fact that most combinatorial problems are described implicitly. If a problem space is too large to be stored as an explicit graph, then it must be generated by a relatively small description. This means that there is structure that can be exploited. Precisely the problems that generate too many nodes to store are the ones that create duplicates that can be detected and eliminated. For this paper, a node in the search space is represented by a unique vector of values.

For example, in the grid problem, the operator sequence Left-Right will always produce a duplicate node. Rejecting inverse operator pairs, including in addition Right-Left, Up-Down, and Down-Up, reduces the branching factor by one, and the complexity from O(4^r) to O(3^r). Inverse operators can be eliminated by a finite state machine (FSM) that remembers the last operator applied, and prohibits the application of the inverse. Most depth-first search implementations already use this optimization, but we carry the principle further. Suppose we restrict the search to the following rules: go straight in the X-direction first, if at all, and then straight in the Y-direction, if at all, making at most one turn.
As a result, each point (X, Y) in the grid is generated via a unique path: all Left moves or all Right moves to the value of X, and then all Up moves or all Down moves to the value of Y. Figure 2 shows a search to depth two carried out with these rules. Figure 3 shows an FSM that implements this search strategy.

Figure 3: FSM corresponding to the search with FSM pruning, eliminating duplicate nodes.

The search now has time complexity O(r^2), reducing the complexity from exponential to quadratic. Each state of this machine corresponds to a different last move made. The FSM is used in a depth-first search as follows. Start the search at the root node as usual, and start the machine at the START state. For each state, the valid transitions are given by the arrows, which specify the possible next operators that may be applied. For each new node, change the state of the machine based on the new operator applied to the old state. Operators that generate duplicate nodes do not appear. This prunes all subtrees below such redundant nodes. The time cost of this optimization is negligible.

Next, we present a method for automatically learning a finite state machine that encodes such pruning rules from a description of the problem.

Learning the FSM

The learning phase consists of two steps. First, a small breadth-first search of the space is performed, and the resulting nodes are matched to determine a set of operator strings that produce duplicate nodes. The operator strings represent portions of node generation paths. Second, the resulting set of strings is used to create the FSM, which recognizes the strings as a set of keywords. If we ever encounter a string of operators from this set of duplicates, anywhere on the path from the root to the current node, we can prune the resulting node, because we are guaranteed that another path of equal or lower cost exists to that node.
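The first step of the learning phase can be reproduced for the grid in a few lines. This is our own minimal sketch, not the authors' implementation: operator strings are generated breadth-first in the order Right, Left, Up, Down; the first string reaching each node is kept; and any later string reaching an already-generated node is recorded as a duplicate.

```python
from collections import deque

# Exploratory breadth-first search for duplicate operator strings on the
# grid. The first string reaching a node is kept; any later string reaching
# the same node is a duplicate (breadth-first order guarantees it costs at
# least as much).
MOVES = {'R': (1, 0), 'L': (-1, 0), 'U': (0, 1), 'D': (0, -1)}

def find_duplicates(max_depth):
    """Return (duplicate strings, count of distinct non-root nodes)."""
    seen = {(0, 0)}
    frontier = deque([((0, 0), '')])
    duplicates = []
    for _ in range(max_depth):
        next_frontier = deque()
        while frontier:
            (x, y), path = frontier.popleft()
            for op in 'RLUD':                      # the paper's operator order
                nx, ny = x + MOVES[op][0], y + MOVES[op][1]
                if (nx, ny) in seen:
                    duplicates.append(path + op)   # later, equal-cost path
                else:
                    seen.add((nx, ny))
                    next_frontier.append(((nx, ny), path + op))
        frontier = next_frontier
    return duplicates, len(seen) - 1

dups, distinct = find_duplicates(2)
```

At depth 2 this recovers the eight duplicate sequences and twelve distinct nodes discussed below.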
Exploratory Breadth-first Search

Suppose we apply the search for duplicate strings to the grid space. In a breadth-first search to depth 2, 12 distinct nodes are generated, as well as 8 duplicate nodes, including 4 copies of the initial node (see Figure 4(a)). We need to match strings that produce the same nodes, and then make a choice between members of the matched pairs of strings. We can sort the nodes by their representations to make matches, and use the cost of the operator strings to make the choices between the pairs.

Figure 4: Tree and trie pair. (a) A tree corresponding to a breadth-first search of the grid space. Duplicates indicated by (*). (b) A trie data structure recognizing the duplicate operator strings found in the grid example. The heavy circles represent rejecting (absorbing) states.

If we order the operators Right, Left, Up, Down, then the operator sequences that produce duplicate nodes are: Right-Left, Left-Right, Up-Right, Up-Left, Up-Down, Down-Right, Down-Left, and Down-Up (Figure 4(a)). This exploratory phase is a breadth-first search. We repeatedly generate nodes from more and more costly paths, making matches and choices, and eliminating duplicates. The breadth-first method guarantees that duplicates are detected with the shortest possible operator string that leads to a duplicate, meaning that no duplicate string is a substring of any other. The exploratory phase is terminated by the exhaustion of available storage (in the case of the examples of this paper, disk space).

Construction of the FSM

The set of duplicate operator strings can be regarded as a set of forbidden words. If the current search path contains one of these forbidden words, we stop at that point, and prune the rest of the path. Thus, we want to recognize the occurrence of these strings in the search path.
The problem is that of recognizing a set of keywords (i.e., the set of strings that will produce duplicates) within a text string (i.e., the string of operators from the root to the current node). This is the bibliographic search problem. Once the set of duplicate strings to be used is determined, we apply a well-known algorithm to automatically create an FSM which recognizes the set [Aho et al., 1986]. In this algorithm, a trie (a transition diagram in which each state corresponds to a prefix of a keyword) is constructed from the set of keywords (in this case, the duplicate operator strings). This skeleton is a recognition machine for matching keywords that start at the beginning of the 'text' string. The remaining transitions for mismatches are calculated by a 'failure transition function'. The states chosen on 'failure' are on paths with the greatest match between the suffix of the failed string and the paths of the keyword trie.

A trie constructed from the duplicate string pairs from the grid example is shown in Figure 4(b). A machine for recognizing the grid space duplicate string set is shown in Figure 3. Notice that the arrows for rejecting duplicate nodes are not shown. As long as the FSM stays on the paths shown, it is producing original (non-duplicate) strings. The trie for the keywords used in its construction contains these rejected paths.

Learning phase requirements

All nodes previously generated must be stored, so the space requirement of the breadth-first search is O(b^d), where b is the branching factor, and d is the exploration depth. The actual depth employed will depend on the space available for the exploration phase. The exploration terminates when b^d exceeds available memory or disk space. Duplicate checking can be done at a total cost of O(N log N) = O(b^d log b^d) = O(d b^d log b) if the nodes are kept in an indexed data structure, or sorting is employed. This space is not needed during the actual problem-solving search.
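The keyword-recognition machine can be sketched with the standard construction cited above (our code, not the authors'): build the trie of duplicate strings, compute failure transitions breadth-first, and fold them into a complete transition table so that scanning the operator path costs one table lookup per move.

```python
from collections import deque

def build_fsm(keywords, alphabet):
    """Trie plus failure function, folded into a complete transition table.
    Accepting states mark an occurrence of some keyword."""
    goto = [{}]          # state -> {operator: next state}
    accepting = [False]
    for word in keywords:
        s = 0
        for ch in word:
            if ch not in goto[s]:
                goto.append({})
                accepting.append(False)
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        accepting[s] = True
    fail = [0] * len(goto)
    queue = deque(goto[0][ch] for ch in alphabet if ch in goto[0])
    while queue:                          # breadth-first failure computation
        s = queue.popleft()
        for ch, t in goto[s].items():
            fail[t] = goto[fail[s]].get(ch, 0)
            accepting[t] = accepting[t] or accepting[fail[t]]
            queue.append(t)
        for ch in alphabet:               # fold failures into the table
            if ch not in goto[s]:
                goto[s][ch] = goto[fail[s]].get(ch, 0)
    for ch in alphabet:
        goto[0].setdefault(ch, 0)
    return goto, accepting

def prunes(goto, accepting, path):
    """True iff the operator path contains a forbidden (duplicate) word."""
    s = 0
    for ch in path:
        s = goto[s][ch]
        if accepting[s]:
            return True
    return False

DUPLICATES = ['RL', 'LR', 'UR', 'UL', 'UD', 'DR', 'DL', 'DU']
goto, accepting = build_fsm(DUPLICATES, 'RLUD')
```

A depth-first search then advances the machine by one table lookup per operator application and cuts off as soon as an accepting state is entered, giving O(1) cost per node.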
The time and space required for the construction of the FSM using a trie and 'failure transition function' is at most O(l), where l is the sum of the lengths of the keywords [Aho et al., 1986].

The breadth-first exploration search is small compared to the size of the depth-first problem-solving search. Asymptotic improvements can be obtained by exploring only a small portion of the problem space, as shown by the grid example. Furthermore, the exploratory phase can be regarded as creating compiled knowledge in a pre-processing step. It only has to be done once, and is amortized over the solutions of multiple problem instances. The results presented below give examples of such savings.

Using the FSM

Incorporating an FSM into a problem-solving depth-first search is efficient in time and memory. For each operator application, checking the acceptance of the operator consists of a few fixed instructions, e.g., a table lookup. The time requirement per node generated is therefore O(1). The memory requirement for the state transition table for the FSM is O(l), where l is the total length of all the keywords found in the exploration phase. The actual number of strings found, and the quality of the resulting pruning, are both functions of the problem description and the depth of the duplicate exploration.

Necessary Conditions for Pruning

We must be careful in pruning a path to preserve at least one optimal solution, although additional optimal solutions may be pruned. The following conditions will guarantee this. If A and B are operator strings, B can be designated a duplicate if: (1) the cost of A is less than or equal to the cost of B, (2) in every case that B can be applied, A can be applied, and (3) A and B always generate identical nodes, starting from a common node. If these conditions are satisfied, then if B is part of an optimal solution, A must also be part of an optimal solution.
We may lose the possibility of finding multiple solutions of the same cost, however.

In all the examples we have looked at so far, all operators have unit cost, but this is not a requirement of our technique. If different operators have different non-negative costs, we have to make sure that given two different operator strings that generate the same node, the string of higher cost is considered the duplicate. This is done by performing a uniform-cost search [Dijkstra, 1959] to generate the duplicate operator strings, instead of a breadth-first search.

So far, we have assumed that all operators are applicable at all times. However, the Fifteen Puzzle contains operator preconditions. For example, when the blank is in the upper left corner, moving the blank Left or Up is not valid. Such preconditions may be dealt with in two steps.

For purposes of the exploratory search, a generalized search space is created, which allows all possible operator strings. For the Fifteen Puzzle, this means a representation in which all possible strings in a 4x4 board are valid. This can be visualized as a "Forty-eight Puzzle" board, which is 7x7, with the initial blank position at the center.

The second step is to test the preconditions of matched A and B strings found in the exploratory search. B is a duplicate if the preconditions of A are implied by the preconditions of B. For the Fifteen Puzzle, the preconditions of a string are embodied in the starting position of the blank. If the blank moves farther in any direction for string A than for string B, then there are starting positions of the blank for which B is valid, but not A. In that case, B is rejected as a duplicate. A similar logical test may be applied in other domains. A more complete description is given in [Taylor, 1992].

Experimental Results

The Fifteen Puzzle

Positive results were obtained using the FSM method combined with a Manhattan Distance heuristic in solving random Fifteen Puzzle instances.
The Fifteen Puzzle was explored breadth-first to a depth of 14 in searching for duplicate strings. A set of 16,442 strings was found, from which an FSM with 55,441 states was created. The table representation for the FSM required 222,000 words. The inverse operators were found automatically at depth two. The thousands of other duplicate strings discovered represent other non-trivial cycles.

The branching factor is defined as lim_{d→∞} N(d)/N(d-1), where N(d) is the number of nodes generated at depth d. The branching factor in a brute-force search with just inverse operators eliminated is 2.13. This value has also been derived analytically. Pruning with an FSM, based on a discovery phase to depth 14, improved this to 1.98. The measured branching factor decreased as the depth of the discovery search increased. Note that this is an asymptotic improvement in the complexity of the problem-solving search, from O(2.13^d) to O(1.98^d), where d is the depth of the search. Consequently the proportional savings in time for node generations increases with the depth of the solution, and is unbounded. For example, the average optimal solution depth for the Fifteen Puzzle is over 50. At this depth, using FSM pruning in brute-force search would save 97.4% of nodes generated.

Iterative-deepening A* using the Manhattan Distance heuristic was applied to the 100 random Fifteen Puzzle instances used in [Korf, 1985]. With only the inverse operators eliminated, an average of 359 million nodes were generated for each instance. The search employing the FSM pruning generated only an average of 100.7 million nodes, a savings of 72%.

This compares favorably with the savings in node generations achieved by node caching techniques applied to IDA*. The FSM uses a small number of instructions per node. If it replaces some method of inverse operator checking, then there is no net decrease in speed.
The method of comparing new nodes against those on the current search path involves a significant increase in the cost per node, with only a small increase in duplicates found, compared to the elimination of inverse operators [Dillenburg and Nelson, 1993]. Although it pruned 5% more nodes, the path comparison method ran 17% longer than inverse operator checking. MREC [Sen and Bagchi, 1989], storing 100,000 nodes, reduced node generations by 41% over IDA*, but ran 64% slower per node [Korf, 1993]. An implementation of MA* [Chakrabarti et al., 1989] on the Fifteen Puzzle ran 20 times as slow as IDA*, making it impractical for solving randomly generated problem instances [Korf, 1993].

The creation of the FSM table is an efficient use of time and space. Some millions of nodes were generated in the breadth-first search. Tens of thousands of duplicate strings were found, and these were encoded in a table with some tens of thousands of states. However, as reported above, this led to the elimination of billions of node generations on the harder problems.

In addition to the experiments finding optimal solutions, weighted iterative-deepening A* (WIDA*) was also tested [Korf, 1993]. This is an iterative-deepening search with a modified evaluation function, f(x) = g(x) + w*h(x), where g(x) is the length of the path from the root to node x, h(x) is the heuristic estimate of the length of a path from node x to the goal, and w is the weight factor [Pohl, 1970]. Higher weighting allows suboptimal solutions to be found faster. This is the expected effect of relaxing the optimality criteria. A second effect is that beyond a certain value, increasing w increased the number of nodes generated, rather than decreasing it [Korf, 1993]. At w = 3.0, for instance, an average of only 59,000 nodes were generated. However, above w = 7.33, the number of nodes generated again increased.
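A minimal WIDA* loop makes the weighted evaluation concrete. This toy sketch runs on the grid with the Manhattan Distance heuristic, rather than on the puzzles used in the experiments, and the function names are ours: f(x) = g(x) + w*h(x) is compared against a threshold, and each iteration raises the threshold to the smallest f value that exceeded it.

```python
# Toy weighted iterative-deepening A* (WIDA*) on the grid, with the
# Manhattan Distance heuristic and f(x) = g(x) + w * h(x).
# An illustrative sketch, not the paper's puzzle implementation.
import math

MOVES = [(-1, 0), (1, 0), (0, 1), (0, -1)]

def wida_star(start, goal, w=1.0):
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    def dfs(node, g, threshold, path):
        f = g + w * h(node)
        if f > threshold:
            return f, None             # report the exceeded f value
        if node == goal:
            return f, list(path)
        minimum = math.inf
        for dx, dy in MOVES:
            child = (node[0] + dx, node[1] + dy)
            if child in path:          # cheap cycle check along the path
                continue
            path.append(child)
            t, found = dfs(child, g + 1, threshold, path)
            path.pop()
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    threshold = w * h(start)
    while True:                        # raise threshold to smallest exceeded f
        threshold, solution = dfs(start, 0, threshold, [start])
        if solution is not None:
            return solution

path = wida_star((0, 0), (3, 2), w=1.0)
```

With w = 1.0 and an admissible heuristic this is ordinary IDA* and the returned path is optimal; raising w finds solutions faster at the cost of optimality, which is the trade-off measured above.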
For a run at w = 19.0, WIDA* without pruning generates an average of 1.2 million nodes for each of 100 random puzzle instances. With FSM pruning, the average number of nodes generated is reduced to 5,590, a reduction of 99.4%. The results reported here support the hypothesis that this effect is caused by duplication of nodes through the combinatorial rise of the number of alternative paths to each node.

The Twenty-Four Puzzle

The FSM method was also employed on the Twenty-Four Puzzle. To date, no optimal solution has been found for a randomly generated Twenty-Four Puzzle instance. The exploration phase generated strings up to 13 operators long. A set of 4,201 duplicate strings was created, which produced an FSM with 15,745 states, with a table implementation of 63,000 words.

Weighted Iterative-Deepening A* (WIDA*) was applied to 10 random Twenty-Four Puzzle instances. Previously, average solution lengths of 168 moves (with 1000 problems, but at w = 3.0) were the shortest solutions found to that time [Korf, 1993]. With FSM duplicate pruning in WIDA*, the first ten of these problems yielded solutions at w = 1.50 weighting. They have an average solution length of 115, and generated an average of 1.66 billion nodes each. These solutions were found using the Manhattan Distance plus linear conflict heuristic function [Hansson et al., 1992], as well as FSM pruning. Without pruning, time limitations would have been exceeded.

The effectiveness of duplicate elimination can be measured at w = 3.0 with and without FSM pruning. With Manhattan Distance WIDA* heuristic search, an average of 393,000 nodes each were generated for 100 random puzzle instances. With Manhattan Distance WIDA* plus FSM pruning, an average of only 22,600 were generated, a savings of 94.23%.

Rubik's Cube

For the 2x2x2 Rubik's Cube, one corner may be regarded as being fixed, with each of the other cubies participating in the rotations of three free faces. Thus, there are nine operators.
The space was explored to depth seven, where 31,999 duplicate strings were discovered. An FSM was created from this set which had 24,954 states. All of the trivial optimizations were discovered automatically as strings of length two. In a brute-force search, there would be a branching factor of 9. Eliminating the inverse operators and consecutive moves of the same face reduces this to 6. With the FSM pruning based on a learning phase to depth seven, a branching factor of 4.73 was obtained.

For the full 3x3x3 cube, each of six faces may be rotated either Right, Left, or 180 degrees. This makes a total of 18 operators, which are always applicable. The space was explored to depth 6, where 28,210 duplicate strings were discovered. An FSM was created from this set which had 22,974 states. All of the trivial optimizations were discovered automatically as strings of length two. A number of interesting duplicates were discovered at depths 4 and 5, representing cycles of length 8.

For the Rubik's Cube without any pruning rules, the branching factor is 18 in a brute-force depth-first search (no heuristic). By eliminating inverse operators, moves of the same face twice in a row, and half of the consecutive moves of non-intersecting faces, the branching factor for depth-first search is 13.50. With FSM pruning based on a learning phase to depth seven, a branching factor of 13.26 was obtained.

Related Work

Learning duplicate operator sequences can be compared to the learning of concepts in explanation-based learning (EBL) [Minton, 1990]. These techniques share an exploratory phase of learning, capturing information from a small search which will be used in a larger one. The purpose of these operator sequences, however, is not the accomplishment of specific goals, but the avoidance of duplicate nodes. In machine learning terms, we are learning only one class of control information, i.e., control of redundancy [Minton, 1990].
We are learning nothing about success or failure of goals, or about goal interference, at which EBL techniques are directed. The introduction of macros into a search usually means the loss of optimality and an increase in the branching factor; eliminating duplicate sequences preserves optimality, and reduces the branching factor. EBL-aided searches may be applied to general sets of operators, while the FSM technique of this paper is limited to domains which have sets of operators that may be applied at any point.

Several EBL techniques have used finite state automata for automatic proof generation [Cohen, 1987], or have used similar structures for the compaction of applicability checking for macros [Shavlik, 1990].

Conclusions

We have presented a technique for reducing the number of duplicate nodes generated by a depth-first search. The FSM method begins with a breadth-first search to identify operator strings that produce duplicate nodes. These redundant strings are then used to automatically generate a finite state machine that recognizes and rejects the duplicate strings. The FSM is then used to generate operators in the depth-first search. Producing the FSM is a preprocessing step that does not affect the complexity of the depth-first search. The additional time overhead to use the FSM in the depth-first search is negligible, although the FSM requires memory proportional to the number of states in the machine. This technique reduces the asymptotic complexity of depth-first search on a grid from O(3^d) to O(d^2). On the Fifteen Puzzle, it reduces the brute-force branching factor from 2.13 to 1.98, and reduced the time of an IDA* search by 70%. On the Twenty-Four Puzzle, a similar FSM reduced the time of WIDA* by 94.23%. It reduces the branching factor of the 2x2x2 Rubik's Cube from 6 to 4.73, and for the 3x3x3 Cube from 13.50 to 13.26.

References

Aho, A. V.; Sethi, R.; and Ullman, J. D. 1986. Compilers: Principles, Techniques, and Tools.
Addison-Wesley, Reading, Mass.

Chakrabarti, P. P.; Ghose, S.; Acharya, A.; and de Sarkar, S. C. 1989. Heuristic search in restricted memory. Artificial Intelligence 41:197-221.

Cohen, W. W. 1987. A technique for generalizing number in explanation-based learning. Technical Report ML-TR-19, Computer Science Department, Rutgers University, New Brunswick, NJ.

Dijkstra, E. W. 1959. A note on two problems in connexion with graphs. Numerische Mathematik 1:269-271.

Dillenburg, J. F. and Nelson, P. C. 1993. Improving the efficiency of depth-first search by cycle elimination. Information Processing Letters (forthcoming).

Elkan, C. 1989. Conspiracy numbers and caching for searching and/or trees and theorem-proving. In Proceedings of IJCAI-89. 1:341-346.

Hansson, O.; Mayer, A.; and Yung, M. 1992. Criticizing solutions to relaxed models yields powerful admissible heuristics. Information Sciences 63(3):207-227.

Hart, P. E.; Nilsson, N. J.; and Raphael, B. 1968. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics 4(2):100-107.

Ibaraki, T. 1978. Depth-m search in branch and bound algorithms. International Journal of Computer and Information Science 7:315-343.

Korf, R. E. 1985. Depth-first iterative deepening: An optimal admissible tree search. Artificial Intelligence 27:97-109.

Korf, R. E. 1993. Linear-space best-first search. Artificial Intelligence. To appear.

Minton, S. 1990. Quantitative results concerning the utility of explanation-based learning. Artificial Intelligence 42(2-3):363-391.

Nilsson, N. J. 1980. Principles of Artificial Intelligence. Morgan Kaufman Publishers, Inc., Palo Alto, Calif.

Pearl, J. 1984. Heuristics. Addison-Wesley, Reading, Mass.

Pohl, I. 1970. Heuristic search viewed as path finding in a graph. Artificial Intelligence 1:193-204.

Sen, A. K. and Bagchi, A. 1989. Fast recursive formulations for best-first search that allow controlled use of memory.
In Proceedings of IJCAI-89. 1:297-302.

Shavlik, Jude W. 1990. Acquiring recursive and iterative concepts with explanation-based learning. Machine Learning 5:39-70.

Taylor, Larry A. 1992. Pruning duplicate nodes in depth-first search. Technical Report CSD-920049, UCLA Computer Science Department, Los Angeles, CA 90024-1596.
Conjunctive Width Heuristics for Maximal Constraint Satisfaction

Richard J. Wallace and Eugene C. Freuder*
Department of Computer Science
University of New Hampshire, Durham, NH 03824 USA
rjw@cs.unh.edu; ecf@cs.unh.edu

Abstract

A constraint satisfaction problem may not admit a complete solution; in this case a good partial solution may be acceptable. This paper presents new techniques for organizing search with branch and bound algorithms so that maximal partial solutions (those having the maximum possible number of satisfied constraints) can be obtained in reasonable time for moderately sized problems. The key feature is a type of variable-ordering heuristic that combines width at a node of the constraint graph (number of constraints shared with variables already chosen) with factors such as small domain size that lead to inconsistencies in values of adjacent variables. Ordering based on these heuristics leads to a rapid rise in branch and bound's cost function together with local estimates of future cost, which greatly enhances lower bound calculations. Both retrospective and prospective algorithms based on these heuristics are dramatically superior to earlier branch and bound algorithms developed for this domain.

1 Introduction

Constraint satisfaction problems (CSPs) involve finding values for problem variables subject to restrictions on which combinations of values are allowed. They are widely used in AI, in areas ranging from planning to machine vision. Many CSP applications settle for partial solutions, where some constraints remain unsatisfied, either because the problems are overconstrained or because complete solutions require too much time to compute. In fact, such applications generally settle for suboptimal partial solutions; obtaining a solution optimally close to a complete solution can be extremely difficult even for small problems (Freuder & Wallace 1992).
In this paper we describe techniques that permit solving many moderately sized optimization problems within practical time bounds. (Smaller problems can be solved quickly enough for real-time applications.)

*This material is based on work supported by the National Science Foundation under Grant No. IRI-9207633.

Maximal constraint satisfaction problems require solutions that optimize the number of satisfied constraints. Branch and bound methods (cf. Reingold et al. 1977) can be combined with constraint satisfaction techniques to find maximal solutions (Freuder & Wallace 1992) and, unlike hill climbing techniques, branch and bound can guarantee an optimal solution. This paper presents new search order heuristics for branch and bound maximal constraint satisfaction search. These heuristics permit formulation of algorithms that are in many cases markedly superior to previously studied branch and bound maximal constraint satisfaction algorithms. The design and application of these heuristics embody three key features (using concepts discussed at greater length in subsequent sections):

* Conjunctive ordering heuristics based on width: Width at a node in an ordered constraint graph has been shown to be an effective heuristic for CSPs (Dechter & Meiri 1989; there called "cardinality"). Here we show that combining this with other heuristic factors can produce conjunctive heuristics whose power is "greater than the sum of their parts". This is because, in addition to providing the successive filtering that would be expected to improve performance, certain heuristics function synergistically with width to yield a marked reduction in the search space through effective tightening of bounds.

* Use of information gained in preprocessing: The effectiveness of these algorithms also depends strongly on measures of arc (in)consistency obtained for each value of each variable during preprocessing.
This information is gained cheaply, in one pass through the problem before search begins. It can then be used for ordering both values and variables as well as in the calculation of bounds to determine whether a given value is selected during search. While the latter procedure can only be used with retrospective algorithms, e.g., backmarking, ordering based on these measures can also be used with prospective algorithms such as forward checking.

* Effective lower bound calculation: Earlier analyses of techniques for calculating lower bounds on the cost of a solution (Freuder & Wallace 1992; Shapiro & Haralick 1981) did not explore the means by which a given component of this calculation could be maximized. In particular, they did not consider the possibility of increasing the cost at the current node of the search tree as rapidly as possible while at the same time maximizing the components of the lower bound that are based on future (uninstantiated) variables. Conjunctive width heuristics promote this dual effect, thus providing a kind of 'one-two punch' that raises the lower bound quickly enough to enhance pruning dramatically. From another viewpoint, this is an extension of the Fail First principle (Haralick & Elliott 1980) to the more subtle problem of lower bound calculation. Somewhat surprisingly, for some classes of problems this allows a retrospective algorithm based on backmarking and local lower bound calculations (RPO, [Freuder & Wallace 1992]) to outperform a prospective algorithm, forward checking, that uses extended lower bound calculations based on all future variables.

In the remaining sections we elucidate these properties and investigate them experimentally.
In particular, we present: (i) a set of experiments with sparse random problems that demonstrates the marked superiority of the new algorithms over the best maximal constraint satisfaction algorithms tested previously and elucidates the basis of this superior performance; (ii) a second set of experiments that examines the range of problem parameters for which the new algorithms are superior. The parameters of greatest interest are the relative number of constraints between variables and the average tightness of the constraints (a tight constraint has few acceptable value-pairs).

Section 2 describes the basic features of all algorithms tested in this study. Section 3 describes results of experiments with an initial set of sparse problems. Sections 4 and 5 analyze the factors that make the new algorithms effective. Section 6 examines the range of problem parameters over which the new algorithms are superior to others. Section 7 gives our conclusions.

2 Description of the Algorithms

The algorithms discussed in this paper are all based on depth-first search with backtracking. These algorithms try to find a value for each variable in the problem drawn from its domain, or set of allowable values, so that the number of constraints among variables that cannot be satisfied is minimized. (In this work only binary constraints are considered.) Since these are branch and bound algorithms, they use a cost function related to constraint failure; here, this is simply the number of violated constraints in the solution, called the distance from a complete solution. This number is compared with the distance of the best solution found so far to determine whether the current value can be included in the present partial solution.
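In skeletal form, this basic strategy might look as follows (a hypothetical Python rendering for illustration; the function names and the `conflicts` helper are mine, not the authors'): assign variables depth-first, carry the number of violated constraints as the distance, and cut off any branch whose distance reaches that of the best complete solution found so far.

```python
def branch_and_bound(variables, domains, conflicts):
    """Depth-first branch and bound minimizing the number of violated
    constraints (the "distance"); a sketch of the basic P-BB idea."""
    best = {"dist": float("inf"), "sol": None}

    def search(i, assignment, distance):
        if i == len(variables):
            best["dist"], best["sol"] = distance, dict(assignment)
            return
        var = variables[i]
        for val in domains[var]:
            d = distance + conflicts(var, val, assignment)
            if d < best["dist"]:       # current distance is a lower bound:
                assignment[var] = val  # prune when it meets the upper bound
                search(i + 1, assignment, d)
                del assignment[var]

    search(0, {}, 0)
    return best["sol"], best["dist"]

# Toy overconstrained problem: 2-color a triangle under "not equal".
constraints = [("x", "y"), ("y", "z"), ("x", "z")]

def conflicts(var, val, assignment):
    """Violated constraints between var=val and already-assigned variables."""
    return sum(1 for (u, v) in constraints
               if (u == var and v in assignment and assignment[v] == val)
               or (v == var and u in assignment and assignment[u] == val))

solution, distance = branch_and_bound(list("xyz"),
                                      {v: [0, 1] for v in "xyz"}, conflicts)
```

Since a triangle cannot be 2-colored, the best the search can do is a maximal solution violating exactly one of the three constraints.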
The distance of this best solution is, therefore, an upper bound on the allowable cost, while the distance of the current partial solution is an elementary form of lower bound (i.e., the cost of a solution that includes the values chosen so far cannot be less than the current distance). Implementing this strategy in its simplest form results in a basic branch and bound algorithm, P-BB. More sophisticated strategies are possible that are analogous to those used to solve CSPs. Here, we consider, (i) a backmarking analogue (P-BMK) that saves the increment in distance previously derived for a value, analogous to the "mark" stored by ordinary backmark; this increment can be added to the current distance to limit unnecessary repetition of constraint checking, (ii) a forward checking analogue (P-EFC) in which constraint failures induced by values already selected are counted for each value of the future variables and can be added to the current distance to determine whether the bound is exceeded; in addition, the sum of the smallest counts for each domain of the uninstantiated variables (other than the one being considered for instantiation) is added to the current distance to form a lower bound based on the entire problem. In this paper we are most concerned with versions of P-EFC which incorporate some form of dynamic search rearrangement. (These algorithms are described more fully in [Freuder & Wallace 1992].)

P-BMK (as well as P-BB) can incorporate tallies of constraint violations based on arc consistency checking prior to search (ACCs, for "arc consistency counts"); specifically, the ACC for a given value is the number of domains that do not support it. These are used in lower bound calculations, much as the counts used in the forward checking analogues. In addition, the values of each domain can be ordered by increasing ACCs so that values with the most support are selected first.
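As a rough illustration of the preprocessing pass just described (my reconstruction, not the paper's code), the ACC tally can be computed in one sweep over the problem:

```python
def arc_consistency_counts(variables, domains, neighbors, consistent):
    """ACC of a value = number of adjacent domains containing no value
    consistent with it. `consistent(x, a, y, b)` decides the binary
    constraint between x and y; all names here are illustrative."""
    acc = {}
    for x in variables:
        for a in domains[x]:
            acc[(x, a)] = sum(
                1 for y in neighbors[x]
                if not any(consistent(x, a, y, b) for b in domains[y]))
    return acc

# Tiny example: x and y constrained by "not equal"; y's domain is just {1},
# so the value x=1 has no support in y and gets ACC 1.
acc = arc_consistency_counts(
    ["x", "y"],
    {"x": [0, 1], "y": [1]},
    {"x": ["y"], "y": ["x"]},
    lambda x, a, y, b: a != b)
```

The resulting counts can then serve both purposes named above: ordering each domain by increasing ACC, and feeding the lower bound calculations during search.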
Freuder and Wallace (1992) call the resulting algorithm RPO, in reference to its retrospective, prospective and ordering components.

The new algorithms combine the procedures just described with variable orderings based on two or three factors. The first is always the width at a node of the constraint graph, that represents the binary relations between variables (represented as nodes). Width at a node is defined as the number of arcs between a node and its predecessors in an ordering of the nodes of a graph (Freuder 1982). The associated heuristic, maximum (node) width, is the selection of the variable with the greatest number of constraints in common with the variables already chosen, as the next to instantiate. In a conjunctive heuristic, ties in maximum width are broken according to a second heuristic, here, either minimum domain size, maximum degree of the node associated with the variable, or largest mean ACC for a domain. Ties in both factors can be broken with a third heuristic which is also one of those just mentioned. In subsequent sections, the order in which these heuristics are applied is indicated by slashes between them: thus, in the width/domain-size/degree heuristic, ties in maximum width are broken by choosing as the next variable one with the smallest domain size, and further ties are broken by choosing the node of largest degree. It is important to note that the first variable to be instantiated is always chosen according to the second or third heuristic, since at the beginning of search all widths are zero.

3 Initial Experiments

In our initial experiments we naturally wanted problems for which the new algorithms would be likely to excel. Earlier evidence suggested that local lower bound calculation would be especially effective with problems that had sparse constraint graphs.
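A sketch of this conjunctive ordering (my own rendering; names are illustrative): select, at each step, the remaining variable of maximum width, breaking ties by smallest domain and then by largest degree, as in the width/domain-size/degree heuristic.

```python
def conjunctive_width_order(variables, neighbors, domains):
    """width/domain-size/degree variable ordering: maximum width first,
    ties broken by minimum domain size, then by maximum degree."""
    degree = {v: len(neighbors[v]) for v in variables}
    order, chosen = [], set()
    remaining = list(variables)
    while remaining:
        best = max(remaining,
                   key=lambda v: (len(neighbors[v] & chosen),  # width (max)
                                  -len(domains[v]),            # domain (min)
                                  degree[v]))                  # degree (max)
        order.append(best)
        chosen.add(best)
        remaining.remove(best)
    return order

# Example: a triangle a-b-c with a pendant variable d attached to c.
neighbors = {"a": {"b", "c"}, "b": {"a", "c"},
             "c": {"a", "b", "d"}, "d": {"c"}}
domains = {"a": [1, 2], "b": [1, 2, 3], "c": [1, 2, 3], "d": [1, 2]}
order = conjunctive_width_order(["a", "b", "c", "d"], neighbors, domains)
# "a" is picked first purely by the tie-breakers, since all widths start at 0.
```

The key-tuple trick mirrors the slash notation directly: each later component is consulted only when the earlier ones tie.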
At the same time, it was expected that P-EFC would not do as well because, due to the sparseness of the problem, the minimum counts for most future domains would be zero. In addition, we wanted problems with marked heterogeneity in the size of their domains and constraints, to insure a fair amount of inconsistency between adjacent variables. With these ends in mind, random problems were generated as follows. Number of variables (n) was fixed at either 10, 15 or 20. To obtain sparse problems, the number of constraints was set to be (n-1) + ceiling(n/2); therefore, the average degree of nodes in the constraint graph was three. Domain size and constraint satisfiability (number of acceptable value pairs) were determined for each variable and constraint by choosing a number between one and the maximum value allowed, and then choosing from the set of possible elements until the requisite number had been selected. The maximum domain size was set at 9, 12 or 15 for 10-, 15- and 20-variable problems, respectively. Note that domain sizes and constraint satisfiabilities will approximate a rectangular distribution, giving problems that are heterogeneous internally, but have similar average characteristics. In addition, if two domains had only one element, there could be no effective constraint between them; to avoid this, domain sizes were chosen first, so that this condition could be disallowed when selecting constraints. Twenty-five problems were generated for each value of n. The mean distance for maximal solutions was always 1.2-1.6.

To measure performance we used constraint checks and nodes searched. When the two are correlated or when the former measure predominates, that measure is reported alone. Constraint checks done during preprocessing are always included in the total.
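For concreteness, the generation procedure can be reconstructed roughly as follows (a sketch from the description above, not the authors' generator; the text does not spell out how edges are sampled, so this version simply draws distinct random pairs while rejecting pairs of one-element domains):

```python
import math
import random

def random_sparse_csp(n, max_dom, seed=0):
    """Sparse random binary CSP along the lines described in the text:
    domain sizes first (uniform on 1..max_dom), then (n-1) + ceil(n/2)
    random constraints, then a random set of acceptable value pairs
    (the "constraint satisfiability") for each constraint."""
    rng = random.Random(seed)
    domains = {v: rng.sample(range(max_dom), rng.randint(1, max_dom))
               for v in range(n)}
    n_constraints = (n - 1) + math.ceil(n / 2)
    edges = set()
    while len(edges) < n_constraints:
        u, v = rng.sample(range(n), 2)
        if u > v:
            u, v = v, u
        # no effective constraint between two one-element domains
        if (u, v) in edges or (len(domains[u]) == 1 and len(domains[v]) == 1):
            continue
        edges.add((u, v))
    constraints = {}
    for (u, v) in edges:
        pairs = [(a, b) for a in domains[u] for b in domains[v]]
        k = rng.randint(1, len(pairs))      # constraint satisfiability
        constraints[(u, v)] = set(rng.sample(pairs, k))
    return domains, constraints

domains, constraints = random_sparse_csp(10, 9)
# 14 constraints over 10 variables: average degree 2.8, i.e. roughly three
```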
Figure 1 shows the mean performance on these problems for algorithms described in earlier work (Freuder & Wallace 1992; Shapiro & Haralick 1981) and for a conjunctive width heuristic that uses smallest domain size to break ties. First of all, note that P-EFC with dynamic search rearrangement outperforms P-EFC based on random (lexical) ordering (the best algorithm in our earlier work) by two orders of magnitude for larger problems. This is in contrast to the results of Shapiro and Haralick, who found only a slight improvement; however, their problems were more homogeneous and had complete constraint graphs. Results for P-BB and RPO are only given for 10- and 15-variable problems because of the difficulty in obtaining data for the full sample of larger problems. For P-BB, we estimate that it would take between 100 million and one billion constraint checks on average to solve the 20-variable problems.

In general, the conjunctive width algorithm outperforms all others. It requires 10^2 constraint checks on average to find a maximal solution for 10-variable problems, and for 20-variable problems its performance is still on the order of 10^4 constraint checks. In contrast, dynamic forward checking requires 10^5 constraint checks for larger problems. And this is far superior to the performance of the other algorithms. At 20 variables the range in performance covers 4-5 orders of magnitude.

Figure 1. Performance of various algorithms on sparse random problems of varying size.

Table 1 shows results for RPO with conjunctive width heuristics based on two or three heuristic factors, for 15- and 20-variable problems. Dynamic P-EFC algorithms, some of which use value ordering based on ACCs, are also shown. The most effective combinations of conjunctive heuristics used mean ACC as the first tie breaker, although at 20 variables domain size was almost as efficient.
Degree, used in this capacity, produced results that were of the same order as dynamic P-EFC based on domain size, although this heuristic was sometimes effective as a second tie breaker. Conjunctive width heuristics were also effective in combination with P-EFC, and the addition of value ordering based on ACCs led to further enhancement. The latter effect was not as large or consistent when value ordering was used with dynamic search order based on domain size; for one 20-variable problem performance was made dramatically worse, which accounted for most of the increase in the mean (cf. Table 1). As a consequence, for larger problems dynamic search rearrangement based on conjunctive width heuristics was markedly superior to the classical strategy of search rearrangement based on domain size.

Timing data was also collected for these heuristics (on a Macintosh SE/30; it was not possible to collect data for all of the weaker heuristics, but they were clearly inefficient overall). In general the pattern of results for this measure resembles that for constraint checks. However, the difference between RPO and P-EFC for a given width heuristic is generally greater than the difference in constraint checks, which indicates that the latter algorithms performed more background work per constraint check.

4 Factorial Analyses

In these new algorithms several factors work together to produce the impressive performance observed. In this section we will attempt to separate the effects of these factors experimentally. The purpose of the first series of tests was to demonstrate that each of the separate factors did affect performance and to determine the effects of combining them. This required a factorial approach. To this end, P-BB and P-BMK were both tested in their basic form, then with either the domain value ordering based on ACCs or with ACCs used for lower bound calculations during search, and then with both factors.
In addition, several ordering heuristics were tested in combination with the other factors. Finally, P-EFC with a fixed variable ordering was tested with the same variable order heuristics, as well as value ordering based on ACCs. These tests were done with the 10-variable problems since there were many conditions to run; in addition, basic features of these comparisons are shown more clearly with these smaller problems.

Table 1
Mean Constraint Checks (and Times) to Find a Maximal Solution for Conjunctive Width Heuristics

                                 Number of Variables (measure)
heuristics                    15 (10^3 ccks)  20 (10^3 ccks)  20 (secs)
width/dom-sz                        27.1            62.8         488
width/mean-ACC                       5.7            46.0         362
width/dom-sz/mean-ACC               18.1            40.3         325
width/dom-sz/degree                 21.6            36.5         293
width/mean-ACC/dom-sz                4.9            35.6         240
width/mean-ACC/degree                5.6            36.3         343
width/degree/dom-sz                 36.5           493.7          --
width/degree/mean-ACC               42.5           330.2          --
dynamic P-EFC - dom-sz              20.9           336.9        3098
dynam. P-EFC - dom-sz (val)         16.7           722.4          --
dynam. P-EFC - wid/dom/AC           65.1            83.0          --
dyn. P-EFC - wid/dm/AC (val)        46.0            53.7         584
dynam. P-EFC - wid/AC/dom           28.6           187.1          --
dyn. P-EFC - wid/AC/dm (val)         7.0            38.9         274

Of the many interesting features of the results (see Tables 2a and 2b) these will be noted: (i) each of the factors, variable ordering, value ordering and use of ACCs during search, has an effect on performance (compare entries along a row); moreover, these effects can be combined to produce more impressive performance than is obtained in isolation. This is true for P-BMK as well as the basic branch and bound (P-BB).
Table 2a
Factorial Analysis of Branch and Bound Variants (Mean Constraint Checks in 1000s)

Variable Order   P-BB   BB/v   BB/ACC   BB/ACC/v
lexical          69.8   42.9    17.1       9.5
dom-sz           21.0   16.0     5.9       6.7
width             6.4    3.4     1.8       1.2
mean-ACC         31.6   12.2    20.1       2.3
dom/wid          14.6   12.7     4.7       4.3
dom/ACC          14.2   13.6     4.9       5.1
wid/dom           3.7    2.5     1.5       0.8
wid/ACC           2.5    1.1     1.8       0.7

Table 2b
Factorial Analysis of Backmark and P-EFC Variants (Mean Constraint Checks in 1000s)

Var. Order    BMK   BMK/v   BMK/AC   BMK/AC/v   EFC   EFC/v
lexical      17.1    10.6     6.8       4.5     9.6    5.5
dom-sz        1.4     1.8     1.0       1.2     1.0    1.1
width         2.7     1.6     1.1       0.8     2.5    1.6
mean-ACC      9.6     4.5     6.3       0.8     1.0    0.6
dom/wid       1.4     1.5     0.9       0.8     0.7    0.8
dom/ACC       1.5     1.7     1.0       1.1     0.9    0.7
wid/dom       1.2     1.1     0.8       0.6     1.1    1.1
wid/ACC       0.7     0.6     0.8       0.5     0.8    0.6

(ii) For P-BB, width at a node is superior to the other variable ordering heuristics tested. In addition, conjunctive heuristics improve performance for both width and domain size, with the former retaining its superiority. (iii) For P-BMK the superiority of width is not as apparent when constraint checks alone are considered. However, the means for the domain size heuristic do not reflect the larger number of nodes searched. (BMK avoids many constraint checks in this case with its table lookup techniques.) At the same time, conjunctive width heuristics are consistently superior to conjunctive domain heuristics. (iv) Width and conjunctive width heuristics are more consistently enhanced by the combination of ACCs and value ordering than other variable ordering heuristics. In this combination, they also usually outperform forward checking based on the same heuristic.

For larger problems, the trends observed with smaller problems were greatly exacerbated, and at 20 variables it was difficult to obtain data for all problems with RPO based on domain size or mean ACCs.
(For both simple and conjunctive two-tiered heuristics based on domain size or mean ACC, the number of constraint checks and nodes searched reached eight or nine orders of magnitude for some problems.) For this reason, these heuristics will not be discussed further. In addition, for larger problems the simple width heuristic is less effective overall (for 20-variable problems it was not appreciably better than dynamic forward checking), and it is here that conjunctive heuristics based on width afford significant improvement.

There is also a minority of the problems of larger size that cannot be solved without an extraordinary amount of effort by either the 'pure algorithms' or by some of the partial combinations; however, at least one of the strategies that make up a combination is usually very effective. Both effects are observed for two difficult problems in the 20-variable group (Table 3; for problem #3, the width/domain-size/mean-ACC ordering gave the same results as width/domain-size). In both cases P-BMK based on width alone does poorly. For problem #3, either conjunctive ordering or use of ACCs has some effect, but not enough to make the problem easy; however, ordering domain values makes the problem trivial. In contrast, this strategy has almost no effect on problem #10 when used alone; conjunctive ordering and ACCs are both effective, and in combination with these strategies, value ordering affords further improvement, so that this problem also becomes relatively easy to solve.
Table 3
Factorial Analysis for Individual 20-Variable Problems

                    Problem #3            Problem #10
            width    wid/dom    width    wid/dom    wid/dom/ACC
BMK   RT     84E6      36E6      18E6       4E5         9E4
      CCK   159E6       2E6      62E6       1E6         3E5
/val  RT      4E2       2E2      18E6       4E5         4E4
      CCK     1E3       8E2      60E6       1E6         1E5
/ACC  RT     53E6      31E6       2E6       1E5         4E4
      CCK    45E6       2E6       4E6       3E5         9E4
RPO   RT      1E2       1E2       2E6       1E5         1E4
      CCK     2E2       3E2       4E6       2E5         2E4

Combination algorithms are therefore effective in part because they 'cover all the bases'. However, this does not explain the peculiar effectiveness of the conjunctive width heuristics, and there is still the question of how the different factors actually enhance branch and bound. These questions are treated in the next section.

5 Analysis of Individual Strategies

The analyses in this section assess the effects of different factors on the upper and lower bounds that are used during search, as well as interactions between factors with respect to these bounds. For ease of analysis and exposition this discussion is restricted to retrospective algorithms.

Value ordering based on ACCs should insure that good solutions are found sooner, which will reduce the upper bound more quickly. This is confirmed by the results in Figure 2, which also shows that this reduction is independent of either the variable orderings or the inclusion of ACCs in the lower bound. With this form of value ordering, a maximal solution is obtained by the fifth try in all but two of the 25 ten-variable problems. Similar results were found with larger problems.

Figure 2. Mean distance (upper bound) after successive solutions found during search, with and without value ordering by increasing AC counts (10-variable problems).
When a solution before the fifth was maximal, its distance was used in later group means.

Variable ordering can influence lower bound calculations by affecting either the increase in distance (current cost) or the projected costs based on ACCs (or FC counts). To see this we will consider three orderings: lexical (a random ordering), ordering by selecting the variable with the highest mean ACC (a straightforward Fail First strategy), and the width/domain-size ordering.

Figure 3 shows mean ACC as a function of position in the variable ordering for the 10-variable problems. This is the expected value to be added to the current distance in calculating the lower bound. As expected, lexical ordering shows no trend in this measure, while ordering by mean ACC shows a steep decline from the first position to the last. The width/domain-size ordering shows a sharp rise from the first to the second position, reflecting the fact that the smallest domains are almost completely supported; thereafter, there are no obvious trends for the entire set of problems.

Figure 3. Mean ACCs at successive positions in variable ordering for three ordering heuristics, based on the 10-variable problems.

Figure 4. Mean ACCs at successive positions in variable ordering due to variables selected earlier in the search order for three ordering heuristics, based on the 10-variable problems.

Figure 4 shows a further analysis for the same problems, the mean ACC at each position due to variables that precede the one in question. This is the expected increase in distance that will be found during consistency checking for a value from the domain of this variable.
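These two components (the realized increase in distance and the projected counts for future variables) are exactly what enter the lower bound; in sketch form (my notation, not the authors' code):

```python
def projected_lower_bound(distance, candidate_count, future_counts):
    """Lower bound sketch: distance already incurred, plus the violation
    count of the candidate value against past assignments, plus the
    smallest projected count in each remaining future domain (ACCs for
    the retrospective algorithms, forward-checking counts for P-EFC)."""
    return distance + candidate_count + sum(min(c) for c in future_counts)

# e.g. distance 2 so far, candidate value adds 1, and two future domains
# whose cheapest values project 0 and 2 further violations:
bound = projected_lower_bound(2, 1, [[0, 1], [2, 3]])
```

The "one-two punch" of the conjunctive width orderings is that both terms rise early: inconsistencies are converted into current distance quickly while the projected component stays large.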
The important points are, (i) for lexical ordering, ACCs are not translated into immediate increases in the distance until late in search, (ii) for mean ACC ordering, there is relatively little transformation of this sort (as a proportion of the mean ACC) and none late in search. To a great extent, this heuristic puts 'all its eggs in one basket' by maximizing the increase in the lower bound due to the ACC; if this strategy fails, search is likely to be very inefficient, (iii) width/domain-size ordering gives a relatively high rate of transformation of this sort throughout the search. Since the average ACC for this ordering heuristic is also relatively high throughout search, this heuristic tends to increase the lower bound through effects on both current distance and prospective components (giving a "one-two punch" via this bound).

These analyses indicate that conjunctive width orderings are effective when they place variables likely to produce inconsistencies and ACCs (e.g., those with small domain sizes) just ahead of variables that are adjacent and are, therefore, likely to have inconsistent values. This sets up the one-two punch effect described above.

6 Varying Parameter Values

The methods described in (Freuder & Wallace 1992) were used to generate random problems varying in number of arcs in the constraint graph as well as tightness of constraints and domain size. These methods generate problems for which the expected value of these parameters can be specified, and values tend to be normally distributed around the expectations. In this discussion, number of arcs will be described in terms of the average degree of a node in the constraint graph.
All problems had 20 variables; the expected degree was 3, 4, 5 or 7; the expected domain size was either 2 or 4; and for each problem density (expected degree) there were three values of tightness that differed by 0.1, beginning with the smallest value for which it was relatively easy to get problems with no solutions. In the latter case, the mean distance for maximal solutions was 2-3; for the maximum tightness the mean distance was 7-12. RPO with width/domain-size/mean-ACC was compared with four versions of dynamic forward checking, using either search rearrangement based on domain size or on the width/domain-size/mean-ACC heuristic, and with or without prior value ordering based on ACCs. (Note. With this method, it was possible to generate constraints with all possible pairs; this happened with some of the smaller values for tightness. However, there were never more than a few such constraints in any problem.)

For problems of expected degree 3 (equal to that of the initial set), RPO was always better than dynamic P-EFC based on domain size, usually by an order of magnitude (e.g., 7774 ccks on average vs. 66,035 for tightness = 0.7). Hence, the effectiveness of RPO does not depend on such peculiarities of the first set of problems as inclusion of variables with very small domain sizes and small maximal distances. Problem difficulty increased with constraint tightness; for the most difficult problems, RPO and dynamic P-EFC using width/domain-size/mean-ACC and value ordering had about the same efficiency (71,036 vs. 68,502 mean ccks, respectively, for tightness = 0.8). Dynamic P-EFC based on domain size did better with respect to RPO as the density of the constraint graph increased; however, the point of crossover between them varied depending on constraint tightness. For problems of low tightness (easier problems) the crossover was between 4 and 5 degrees; for problems with tighter constraints it was around 5 degrees.
There was evidence that dynamic P-EFC based on width/domain-size/mean-ACC outperforms dynamic P-EFC based on domain size up to an average degree of 7 (e.g., for the tightest constraint at degree 5, mean ccks were 220,865 and 348,074, respectively, for these algorithms). Thus, algorithms with conjunctive heuristics outperform all others on random problems with average degree ≤ 4. Within this range, forward checking tends to be the best algorithm for problems with higher density and greater average constraint tightness, and RPO is best for problems of lower density and looser constraints. As density (average degree) increases, dynamic P-EFC based on domain size eventually predominates; this occurs sooner for problems with looser constraints (which are easier problems for these algorithms). In contrast to Experiment 1, this standard algorithm was also improved when values were ordered according to ACCs.

7 Conclusions

Conjunctive width heuristics can enhance branch and bound algorithms based on either prospective or retrospective strategies. In combination with preprocessing techniques, the resulting algorithms outperform other branch and bound algorithms by at least one order of magnitude on a large class of problems. Maximal solutions can now be obtained for some problems of moderate size (20-30 variables, depending on specific parameter values) in reasonable times. Since branch and bound can return the best solution found so far at any time during search [Freuder & Wallace 92], these new algorithms may also perform well in relation to existing algorithms that find nonoptimal solutions (e.g., [Feldman & Golumbic 90]), for an even larger class of problems.

References

Dechter, R., and Meiri, I. 1989. Experimental evaluation of preprocessing techniques in constraint satisfaction problems. In Proceedings IJCAI-89, Detroit, MI, 271-277.

Feldman, R., and Golumbic, M.C. 1990.
Optimization algorithms for student scheduling via constraint satisfiability. Comput. J. 33: 356-364.

Freuder, E.C. 1982. A sufficient condition for backtrack-free search. J. Assoc. Comput. Mach. 29: 24-32.

Freuder, E.C., and Wallace, R.J. 1992. Partial constraint satisfaction. Artif. Intell. 58: 21-70.

Haralick, R.M., and Elliott, G.L. 1980. Increasing tree search efficiency for constraint satisfaction problems. Artif. Intell. 14: 263-313.

Reingold, E.M., Nievergelt, J., and Deo, N. 1977. Combinatorial Algorithms: Theory and Practice. Englewood Cliffs, NJ: Prentice-Hall.

Shapiro, L., and Haralick, R. 1981. Structural descriptions and inexact matching. IEEE Trans. Pattern Anal. Machine Intell. 3: 504-519.
Depth-First vs. Best-First Search: New Results*

Weixiong Zhang and Richard E. Korf
Computer Science Department
University of California, Los Angeles
Los Angeles, CA 90024
zhang@cs.ucla.edu, korf@cs.ucla.edu

ABSTRACT

Best-first search (BFS) expands the fewest nodes among all admissible algorithms using the same cost function, but typically requires exponential space. Depth-first search needs space only linear in the maximum search depth, but expands more nodes than BFS. Using a random tree, we analytically show that the expected number of nodes expanded by depth-first branch-and-bound (DFBnB) is no more than O(d · N), where d is the goal depth and N is the expected number of nodes expanded by BFS. We also show that DFBnB is asymptotically optimal when BFS runs in exponential time. We then consider how to select a linear-space search algorithm, from among DFBnB, iterative-deepening (ID) and recursive best-first search (RBFS). Our experimental results indicate that DFBnB is preferable on problems that can be represented by bounded-depth trees and require exponential computation; and RBFS should be applied to problems that cannot be represented by bounded-depth trees, or problems that can be solved in polynomial time.

1 Introduction and Overview

A major factor affecting the applicability of a search algorithm is its memory requirement. If a problem is small, and the available memory is large enough, then best-first search (BFS) may be used. BFS maintains a partially expanded search graph, and expands a minimum-cost frontier node at each cycle until an optimal goal node is chosen for expansion. One important property of BFS is that it expands the minimum number of nodes among all admissible algorithms using the same cost function [2]. However, it typically requires

*This research was supported by NSF Grant No. IRI-9119825, a grant from Rockwell International, and a GTE fellowship.
exponential space, making it impractical for most applications. Practical algorithms use space that is only linear in the maximum search depth. Linear-space algorithms include depth-first branch-and-bound (DFBnB), iterative-deepening [4], and recursive best-first search [6, 7]. DFBnB starts with an upper bound on the cost of an optimal goal, and then searches the entire state space in a depth-first fashion. Whenever a new solution is found whose cost is less than the best one found so far, the upper bound is revised to the cost of this new solution. Whenever a partial solution is encountered whose cost is greater than or equal to the current upper bound, it is pruned. DFBnB expands more nodes than BFS. In particular, when the cost function is monotonic, in the sense that the cost of a child is always greater than or equal to the cost of its parent, DFBnB may expand nodes whose costs are greater than the optimal goal cost, none of which are explored by BFS.

To avoid expanding nodes that are not visited by BFS, iterative-deepening (ID) [4] may be adopted. It runs a series of depth-first iterations, each bounded by a cost threshold. In each iteration, a branch is eliminated when the cost of a node on that path exceeds the cost threshold for that iteration. When the cost function is not monotonic, however, ID may not expand newly visited nodes in best-first order. In this case, recursive best-first search (RBFS) [6, 7] may be applied, which always expands unexplored nodes in best-first order, using only linear space (cf. [6, 7] for details). Another advantage of RBFS over ID is that the former expands fewer nodes than the latter, up to tie-breaking, when the cost function is monotonic. Both ID and RBFS suffer from the overhead of expanding many nodes more than once.

Some of these algorithms have been compared before. Wah and Yu [14] argued that DFBnB is comparable to BFS if the cost function is very accurate or very inaccurate.
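The DFBnB procedure described above, searching depth-first while pruning against a shrinking upper bound, can be sketched as follows (a minimal illustration on an explicit tree; the nested-dict encoding mapping edge costs to subtrees is our own, not the paper's):

```python
# Minimal depth-first branch-and-bound for a tree with a monotonic cost
# function: prune a node whose cost reaches the current upper bound, and
# lower the bound whenever a cheaper goal (a node at goal_depth) is found.

def dfbnb(node, cost, depth, goal_depth, upper=float('inf')):
    """node: dict mapping edge cost -> child subtree; returns best goal cost."""
    if cost >= upper:          # prune: cannot improve on the best goal so far
        return upper
    if depth == goal_depth:    # goal level reached: revise the upper bound
        return min(upper, cost)
    for edge_cost, child in sorted(node.items()):   # cheapest child first
        upper = dfbnb(child, cost + edge_cost, depth + 1, goal_depth, upper)
    return upper
```

With no initial bound (`upper = inf`), the first goal reached sets the bound; subsequent branches whose partial cost already meets it are cut off, which is exactly the overhead-vs-pruning trade-off analyzed in Section 2.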
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Using an abstract model in which the number of nodes with a given cost grows geometrically, Vempaty et al. [13] compared BFS, DFBnB and ID. Their results are based on the solution density, the ratio of the number of goal nodes to the total number of nodes with the same cost as the goal nodes, and the heuristic branching factor, the ratio of the number of nodes with a given cost to the number of nodes of the next smaller cost. They concluded that: (a) DFBnB is preferable when the solution density grows faster than the heuristic branching factor; (b) ID is preferable when the heuristic branching factor is high and the solution density is low; (c) BFS is useful only when both the solution density and the heuristic branching factor are very low, provided that sufficient memory is available. Using a random tree, in which edges have random costs, and the cost of a node is the sum of the costs of the edges on the path from the root to the node, Karp and Pearl [3], and McDiarmid and Provan [10, 11] showed that BFS expands either an exponential or quadratic number of nodes in the search depth, depending on certain properties. On a random tree with uniform branching factor and discrete edge costs, we argued in [15] that DFBnB also runs in polynomial time when BFS runs in quadratic time. Kumar originally observed that ID performs poorly on the traveling salesman problem (TSP), compared to DFBnB [13] (cf. Section 4.2 as well).

Although DFBnB is very useful for problems such as the TSP, it is not known how many more nodes DFBnB expands than BFS on average. Using a random tree, we analytically show that the expected number of nodes expanded by DFBnB is no more than O(d · N), where d is the goal depth and N is the expected number of nodes expanded by BFS (Section 2).
We compare BFS, DFBnB, ID and RBFS under the tree model, and demonstrate that DFBnB runs faster than BFS in some cases, even if the former expands more nodes than the latter (Section 3). The purpose is to provide a guideline for selecting algorithms for given problems. Finally, we consider how to choose linear-space algorithms for two applications, lookahead search on sliding-tile puzzles, and the asymmetric TSP (Section 4). Our results in Sections 2 and 3 are included in [16].

2 Analytic Results: DFBnB vs. BFS

Search in a state space is a general model for problem solving. While a graph with cycles is the most general model of a state space, depth-first search explores a state space tree, at the cost of regenerating the same nodes arrived at via different paths. This is a fundamental difference between linear-space algorithms, which cannot detect duplicate nodes in general, and exponential-space algorithms, which can. Associated with a state space is a cost function that estimates the cost of a node. Alternatively, a cost can be associated with an edge, representing the incremental change to a node cost when the corresponding operator is applied. A node cost is then computed as the sum of the edge costs on the path from the root to the node, or the sum of the cost of its parent node and the cost of the edge from the parent to the child. Therefore, we introduce the following tree model, which is suitable for any combinatorial problem with a monotonic cost function.

[Figure 1: Recursive structure of a random tree.]

Definition 2.1 A random tree T(b, d, c) is a tree with depth d, root cost c, and independent and identically distributed random branching factors with mean b. Edge costs are independently drawn from a non-negative probability distribution. The cost of a non-root node is the sum of the edge costs from the root to that node, plus the root cost c. An optimal goal node is a node of minimum cost at depth d.
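Definition 2.1 can be made concrete with a small generator (an illustrative sketch only: it fixes the branching factor to a constant b and draws edge costs uniformly from {0, 1, 2, 3, 4}, as in the experiments of Section 3, and the tuple encoding is our own choice):

```python
import random

# Illustrative instance of the random-tree model T(b, d, c) of Definition 2.1,
# with constant branching factor b and edge costs uniform on {0, ..., 4}.

def random_tree(b, d, c):
    """Return (node_cost, children); a node's cost is the sum of the edge
    costs on its path from the root, plus the root cost c."""
    if d == 0:
        return (c, [])
    children = []
    for _ in range(b):
        e = random.randint(0, 4)                 # random edge cost
        children.append(random_tree(b, d - 1, c + e))
    return (c, children)

def optimal_goal_cost(tree, d):
    """Minimum cost among the nodes at depth d (the optimal goal cost)."""
    cost, children = tree
    if d == 0:
        return cost
    return min(optimal_goal_cost(ch, d - 1) for ch in children)
```

On such a tree, BFS would expand every node cheaper than the optimal goal cost, which is what the quantities N_B and N_D below count in expectation.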
Lemma 2.1 Let N_B(b, d) be the expected number of nodes expanded by BFS, and N_D(b, d, α) the expected number of nodes expanded by DFBnB with initial upper bound α, on T(b, d, 0). As d → ∞,

    N_D(b, d, ∞) ≤ (b − 1) Σ_{i=1}^{d−1} N_B(b, i) + (d − 1)

Proof: As shown in Figure 1, the root of T(b, d, c) has k children, n_1, n_2, ..., n_k, where k is a random variable with mean b. Let e_i be the edge cost from the root of T(b, d, c) to the root of its i-th subtree T_i(b, d−1, c+e_i), for i = 1, 2, ..., k. The children of the root are generated all at once and sorted in nondecreasing order of their costs. Thus e_1 ≤ e_2 ≤ ... ≤ e_k, arranged from left to right in Figure 1. For convenience of discussion, let N_D(b, d, c, α) be the expected number of nodes expanded by DFBnB on T(b, d, c) with initial upper bound α. We first make the following two observations.

First, subtracting the root cost from all nodes and the upper bound has no effect on the search. Therefore, the number of nodes expanded by DFBnB on T(b, d, c) with initial upper bound α is equal to the number expanded on T(b, d, 0) with initial upper bound α − c. That is,

    N_D(b, d, c, α) = N_D(b, d, 0, α − c)
    N_D(b, d, c, ∞) = N_D(b, d, 0, ∞)    (1)

Secondly, because a larger initial upper bound causes at least as many nodes to be expanded as a smaller upper bound, the number of nodes expanded by DFBnB on T(b, d, c) with initial upper bound α is no less than the number expanded with initial upper bound α′ ≤ α. That is,

    N_D(b, d, c, α′) ≤ N_D(b, d, c, α), for α′ ≤ α    (2)

Now consider DFBnB on T(b, d, 0). It first searches the subtree T_1(b, d−1, e_1) (see Figure 1), expanding N_D(b, d−1, e_1, ∞) nodes on average. Let p be the minimum goal cost of T_1(b, d−1, 0). Then the minimum goal cost of T_1(b, d−1, e_1) is p + e_1, which is the upper bound after searching T_1(b, d−1, e_1).
After T_1(b, d−1, e_1) is searched, subtree T_2(b, d−1, e_2) will be explored if its root cost e_2 is less than the current upper bound p + e_1, and the expected number of nodes expanded is N_D(b, d−1, e_2, p + e_1), which is also an upper bound on the expected number of nodes expanded in T_i(b, d−1, e_i), for i = 3, 4, ..., k. This is because the upper bound can only decrease after searching T_2(b, d−1, e_2) and the edge cost e_i can only increase as i increases, both of which cause fewer nodes to be expanded. Since the root of T(b, d, 0) has b children on average, we write

    N_D(b, d, 0, ∞) ≤ N_D(b, d−1, e_1, ∞) + (b − 1) N_D(b, d−1, e_2, p + e_1) + 1

where the 1 is for the expansion of the root of T(b, d, 0). By (1), we have

    N_D(b, d, 0, ∞) ≤ N_D(b, d−1, 0, ∞) + (b − 1) N_D(b, d−1, 0, p + e_1 − e_2) + 1

Since p + e_1 − e_2 ≤ p for e_1 ≤ e_2, by (2), we write

    N_D(b, d, 0, ∞) ≤ N_D(b, d−1, 0, ∞) + (b − 1) N_D(b, d−1, 0, p) + 1    (3)

Now consider N_D(b, d−1, 0, p), the expected number of nodes expanded by DFBnB on T(b, d−1, 0) with initial upper bound p. If T(b, d−1, 0) is searched by BFS, it will return the optimal goal node whose expected cost is p, and expand N_B(b, d−1) nodes on average. When T(b, d−1, 0) is searched by DFBnB with upper bound p, only those nodes whose costs are strictly less than p will be expanded, and these nodes must also be expanded by BFS. We thus have

    N_D(b, d−1, 0, p) ≤ N_B(b, d−1)    (4)

Substituting (4) into (3), we then write

    N_D(b, d, 0, ∞) ≤ N_D(b, d−1, 0, ∞) + (b − 1) N_B(b, d−1) + 1
                   ≤ N_D(b, d−2, 0, ∞) + (b − 1)(N_B(b, d−1) + N_B(b, d−2)) + 2
                   ≤ ...
                   ≤ N_D(b, 0, 0, ∞) + (b − 1) Σ_{i=1}^{d−1} N_B(b, i) + (d − 1)    (5)

This proves the lemma since N_D(b, 0, 0, ∞) = 0. □

Theorem 2.1 N_D(b, d, ∞) ≤ O(d · N_B(b, d−1)), where N_D and N_B are defined in Lemma 2.1.
[Figure 2: Complexity regions of tree search, in terms of the mean branching factor b and the probability p_0 of a zero-cost edge. When bp_0 < 1, both BFS and DFBnB are asymptotically optimal, running in exponential time; at the transition boundary bp_0 = 1, BFS runs in quadratic time and DFBnB runs in cubic time.]

Proof: It directly follows Lemma 2.1 and the fact that Σ_{i=1}^{d−1} N_B(b, i) < (d − 1) N_B(b, d−1), since N_B(b, i) < N_B(b, d−1) for all i < d−1. □

McDiarmid and Provan [10, 11] showed that if p_0 is the probability of a zero-cost edge, then the average complexity of BFS on T(b, d, c) is determined by bp_0, the expected number of children of a node whose costs are the same as their parent's. In particular, they proved that as d → ∞, and conditional on the tree not becoming extinct, the expected number of nodes expanded by BFS is: (a) O(β^d) when bp_0 < 1, where 1 < β < b is a constant; (b) O(d²) when bp_0 = 1; and (c) O(d) when bp_0 > 1.

Theorem 2.2 The expected number of nodes expanded by DFBnB on T(b, d, 0), as d → ∞, conditional on T(b, d, 0) being infinite, is: (a) O(β^d) when bp_0 < 1, where 1 < β < b is a constant; (b) O(d³) when bp_0 = 1; and (c) O(d²) when bp_0 > 1, where p_0 is the probability of a zero-cost edge.

Proof: To use McDiarmid and Provan's result on BFS [10, 11], we have to consider the asymptotic case when d → ∞. Generally, searching a deep tree is more difficult than searching a shallow one. In particular, N_B(b, i) < N_B(b, 2i), for all integers i. Therefore, by Lemma 2.1 and McDiarmid and Provan's result, when bp_0 < 1 and d → ∞,

    N_D(b, d, 0) ≤ 2(b − 1) Σ_{i=⌊d/2⌋}^{d−1} N_B(b, i) + (d − 1)
                = 2(b − 1) Σ_{i=⌊d/2⌋}^{d−1} O(β^i) + (d − 1) = O(β^d)
Theorem 2.2 indicates that DFBnB is asymptotically optimal as the depth of the tree grows to infinity when bpo < 1, since it expands the same order of nodes as BFS in this case, and BFS is op- timal [2]. In addition, this theorem shows that, similar to BFS, the average complexity of DFBnB experiences a transition as the expected number of same-cost chil- dren bpo of a node changes. Specifically, it decreases from exponential (bpo < 1) to polynomial (bpo 1 1) with a transition boundary at bpo = 1. These results are summarized in Figure 2. - 3 Experimental Results 3.1 Comparison of Nodes Expanded We now experimentally compare BFS, DFBnB, ID and RBFS on random trees. We used random trees with uniform branching factor, and two edge cost distri- butions. In the first case, edge costs were uniformly selected from (0, 1,2,3,4}. In the second case, zero edge costs were chosen with probability po = l/5, and non-zero edge costs were uniformly chosen from (1,2,3, . ..216 - 1). The comparison of these algorithms on trees with continuous edge cost distributions is sim- ilar to that with the second edge cost distribution and bpo < 1, because a continuous distribution has po = 0, and thus bpo < 1. We chose three branching factors to present the results: b = 2 for an exponential com- plexity case, b = 5 for the transition case (bpo = l), and b = 10 for an easy problem. The algorithms were run to different depths, each with 100 random trials. The results are shown in Fig. 3. The curves labeled by BFS, DFBnB, ID, and RBFS are the average numbers of nodes expanded by BFS, DFBnB, ID, and RBFS, respectively. The upper bound on DFBnB is based on Lemma 2.1. The experimental results are consistent with the ana- lytical results: BFS expands the fewest nodes among all algorithms, RBFS is superior to ID, and DFBnB is asymptotically optimal when bpo < 1 and tree depth grows to infinity. When bpo > 1 (Fig. 3(c) and 3(f)), ID and RBFS are comparable to BFS. Moreover, when bpo 2 1 (Fig. 
3(b), 3(c), 3(e) and 3(f)), DFBnB is worse than both ID and RBFS. In these cases, the overhead of DFBnB, the number of nodes expanded whose costs are greater than the optimal goal cost, is larger than the re-expansion overheads of ID and RBFS. When bp_0 < 1 (Fig. 3(a) and 3(d)), however, DFBnB outperforms both ID and RBFS. In addition, when bp_0 < 1 and the edge costs are discrete (Fig. 3(a)), the DFBnB, ID and RBFS curves are parallel to the BFS curve for large search depth d. Thus DFBnB, ID and RBFS are asymptotically optimal, and this confirms our analysis of ID and RBFS in [16]. However, when bp_0 < 1 and edge costs are chosen from a large range (Fig. 3(d)), the slopes of the ID and RBFS curves are nearly twice the slope of the BFS curve, in contrast to the DFBnB curve, which has the same slope as the BFS curve. This confirms our analytical result that ID expands O(N²) nodes on average when edge costs are continuous, where N is the expected number of nodes expanded by BFS [16]. This also indicates that in this case, RBFS has the same unfavorable asymptotic complexity as ID. In summary, for problems that can be formulated as a tree with a bounded depth, and require exponential computation (bp_0 < 1), DFBnB should be used; for easy problems (bp_0 ≥ 1), RBFS should be adopted.

3.2 Comparison of Running Times

Although BFS expands fewer nodes than a linear-space algorithm, the former may run slower than the latter. Fig. 4(a) shows one example where the running time of BFS increases faster than that of DFBnB: a random binary tree in which zero edge costs were chosen with probability p_0 = 1/5, and non-zero edge costs were uniformly chosen from {1, 2, 3, ..., 2^16 − 1}. The reason is the following. The running time of DFBnB is proportional to the total number of nodes generated, since nodes can be generated and processed in constant time. The other linear-space algorithms also have this feature.
The time of BFS to process a node, however, increases as the logarithm of the total number of nodes generated. To see this, consider the time per node expansion in BFS as a function of search depth. BFS has to use a priority queue to keep all nodes generated but not yet expanded, whose size is exponential in the search depth, say γ^d, when bp_0 < 1. To expand a node, BFS first has to select the node with the minimum cost from the priority queue, and then insert all newly generated nodes into the queue. If a heap is used, inserting one node or deleting the root of the heap takes time logarithmic in the total number of nodes in the heap, which is ln(γ^d) = O(d). This means that BFS takes time linear in the search depth to expand a node. Fig. 4(b) illustrates the average time per node expansion for both BFS and DFBnB in this case. Therefore, for some problems, BFS is not only inapplicable because of its exponential space requirement, but also suffers from increasing time per node expansion.

4 Comparison on Real Problems

4.1 Lookahead Search on Sliding-Tile Puzzles

A square sliding-tile puzzle consists of a k × k frame holding k² − 1 distinct movable tiles, and a blank space. Any tiles that are horizontally or vertically adjacent to the blank may move into the blank position. Examples of sliding-tile puzzles include the 3 × 3 Eight Puzzle, the 4 × 4 Fifteen Puzzle, the 5 × 5 Twenty-four Puzzle, and the 10 × 10 Ninety-nine Puzzle.

[Figure 3: Average number of nodes expanded as a function of search depth: panels (a)-(c) use edge costs uniformly chosen from {0, 1, 2, 3, 4} with b = 2, 5, 10; panels (d)-(f) use edge costs from {0, 1, 2, 3, ..., 2^16 − 1} with p_0 = 1/5 and b = 2, 5, 10.]
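The priority-queue argument of Section 3.2 can be made concrete with a heap-based best-first search loop (an illustrative sketch with our own nested-dict tree encoding): each expansion performs one heap deletion and several insertions, each costing time logarithmic in the frontier size.

```python
import heapq
import itertools

# Generic best-first search over a tree: expand nodes in nondecreasing cost
# order from a heap-based priority queue. When the frontier holds on the
# order of gamma**d nodes, each heappush/heappop costs O(log gamma**d) = O(d),
# which is the per-node overhead discussed in Section 3.2.

def best_first_search(root, goal_depth):
    """root: dict mapping edge cost -> child subtree; returns optimal goal
    cost at goal_depth (the first goal popped is optimal)."""
    counter = itertools.count()        # tie-breaker so dicts are never compared
    frontier = [(0, 0, next(counter), root)]
    while frontier:
        cost, depth, _, node = heapq.heappop(frontier)   # O(log |frontier|)
        if depth == goal_depth:
            return cost
        for edge_cost, child in node.items():
            heapq.heappush(frontier,
                           (cost + edge_cost, depth + 1, next(counter), child))
    return None
```

By contrast, a depth-first algorithm touches each node with a constant amount of stack work, which is why DFBnB's time tracks its node count directly.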
A common cost function for sliding-tile puzzles is f(n) = g(n) + h(n), where g(n) is the number of moves from the initial state to node n, and h(n) is the Manhattan distance from node n to the goal state, which is the sum over all tiles of the number of moves along the grid from each tile to its goal position. Given an initial and a goal state of a sliding-tile puzzle, we are asked to find a sequence of moves that maps the initial state into the final state. Finding such a sequence with the minimum number of moves is NP-complete for arbitrary size puzzles [12].

In real-time settings, however, we have to make a move with limited computation. One approach to this problem, called fixed-depth lookahead search, is to search from the current state to a fixed depth, and then make a move toward a minimum-cost frontier node at that depth. This process is then repeated for each move until a goal is reached [5]. Our experiments show that ID is slightly worse than RBFS for lookahead search, as expected. Figure 5 compares DFBnB and RBFS. The horizontal axis is the lookahead depth, and the vertical axis is the number of nodes expanded, averaged over 200 initial states. The results show that DFBnB performs better than RBFS on small puzzles, while RBFS is superior to DFBnB on large ones. The reason is briefly explained as follows. Moving a tile either increases or decreases its Manhattan distance h by one. Since every move increases the g value by one, the cost function f = g + h either increases by two or stays the same. The probability that the cost of a child state is equal to the cost of its parent is approximately 0.5 initially, i.e., p_0 ≈ 0.5. In addition, the average branching factors b of the Eight, Fifteen, Twenty-four, and Ninety-nine Puzzles are approximately 1.732, 2.130, 2.368, and 2.790, respectively; i.e., b grows with the puzzle size. Thus, bp_0 increases with the puzzle size as well, and lookahead search is
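The Manhattan-distance heuristic h used in the cost function above can be sketched as follows (the flat-tuple state encoding, with 0 for the blank, is our own choice; the heuristic itself is the standard one described in the text):

```python
# Manhattan distance for a k x k sliding-tile puzzle: the sum over all tiles
# of the grid distance from each tile's current cell to its goal cell.
# States are tuples listing the tile in each cell, row by row; 0 is the blank.

def manhattan(state, goal, k):
    # Precompute goal (row, col) for every tile.
    pos = {tile: divmod(i, k) for i, tile in enumerate(goal)}
    h = 0
    for i, tile in enumerate(state):
        if tile != 0:                       # the blank does not count
            r, c = divmod(i, k)
            gr, gc = pos[tile]
            h += abs(r - gr) + abs(c - gc)
    return h
```

Since a single move changes one tile's Manhattan distance by exactly ±1 while g always grows by 1, f = g + h changes by 0 or 2 per move, which is the source of the p_0 ≈ 0.5 estimate above.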
easier on large puzzles by Theorem 2.2. As shown in Section 3, DFBnB will do better with smaller branching factors.

[Figure 4: (a) Running time and (b) time per node expansion, for BFS and DFBnB.]

Unfortunately, the problem of finding a shortest solution path cannot be represented by a bounded-depth tree, since the solution length is unknown in advance. Without cutoff bounds, DFBnB cannot be applied in these cases. This limits the applicability of DFBnB, and distinguishes DFBnB from BFS, ID and RBFS. In these cases, RBFS is the algorithm of choice, since ID is worse than RBFS, as verified by our experiments.

[Figure 5: Lookahead search on sliding-tile puzzles: number of nodes expanded as a function of lookahead depth.]

4.2 The Asymmetric TSP

Given n cities (1, 2, ..., n) and a cost matrix (c_{i,j}) that defines a cost between each pair of cities, the traveling salesman problem (TSP) is to find a minimum-cost tour that visits each city once and returns to the starting city. When the cost from city i to city j is not necessarily equal to that from city j to city i, the problem is the asymmetric TSP (ATSP). Many NP-complete combinatorial problems can be formulated as ATSPs, such as vehicle routing, no-wait workshop scheduling, computer wiring, etc. [8]. The most efficient approach known for optimally solving the ATSP is subtour elimination [1], with the solution to the assignment problem as a lower-bound function. Given a cost matrix (c_{i,j}), the assignment problem (AP) [9] is to assign to each city i another city j, with c_{i,j} as the cost of this assignment, such that the total cost of all assignments is minimized. The AP is a generalization of the ATSP with the requirement of a single complete tour removed, allowing collections of subtours, and is solvable in O(n³) time [9]. Subtour elimination first solves the AP for the n cities.
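Why the AP lower-bounds the ATSP can be seen from a brute-force sketch (our own illustration, feasible only for tiny n; real subtour-elimination solvers compute the AP in O(n³) time instead): the AP minimizes over all assignments, i.e., all permutations with subtours allowed, while a tour is restricted to a single cycle.

```python
from itertools import permutations

# Brute-force AP and ATSP costs for a tiny cost matrix c (c[i][j] = cost of
# going from city i to city j). The AP minimizes over every permutation with
# no fixed points (self-assignments excluded), so collections of subtours are
# allowed; the ATSP minimizes only over single cyclic tours. Hence AP <= ATSP.

def ap_cost(c):
    n = len(c)
    return min(sum(c[i][p[i]] for i in range(n))
               for p in permutations(range(n))
               if all(p[i] != i for i in range(n)))

def atsp_cost(c):
    n = len(c)
    best = float('inf')
    for order in permutations(range(1, n)):   # fix city 0 as the start
        tour = (0,) + order
        best = min(best, sum(c[tour[i]][tour[(i + 1) % n]] for i in range(n)))
    return best
```

For a matrix with two cheap 2-city clusters, the AP picks two disjoint 2-cycles and is strictly cheaper than any single tour, which is exactly the gap that subtour elimination closes by branching.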
If the solution is not a tour, it then expands the problem into subproblems by breaking a subtour (cf. [1] for details), and searches the space of subproblems. It repeatedly checks the AP solutions of subproblems and expands them if they are not complete tours, until an optimal tour is found. The space of subproblems can be represented by a tree with maximum depth less than n².

We ran DFBnB, ID and RBFS on the ATSP with the elements of cost matrices independently and uniformly chosen from {0, 1, 2, 3, ..., r}, where r is an integer. Figures 6(a) and 6(b) show our results on 100-city and 300-city ATSPs. The horizontal axes are the range of intercity costs r, and the vertical axes are the numbers of tree nodes generated, averaged over 500 trials for each data point. When the cost range r is small or large, relative to the number of cities n, the ATSP is easy or difficult, respectively [15, 17]. Figure 6 shows that ID cannot compete with RBFS and DFBnB, especially for difficult ATSPs when r is large. RBFS does poorly on difficult ATSPs, since in this case the node costs in the search tree are unique [15, 17], which causes significant node regeneration overhead.

[Figure 6: Performance on (a) 100-city and (b) 300-city ATSPs: number of tree nodes generated as a function of the cost range r.]

5 Conclusions

We first studied the relationship between the average number of nodes expanded by depth-first branch-and-bound (DFBnB) and best-first search (BFS). In particular, we showed analytically that DFBnB expands no more than O(d · N) nodes on average for finding a minimum cost node at depth d of a random tree, where N is the average number of nodes expanded by BFS on the same tree. We also proved that DFBnB is asymptotically optimal when BFS runs in exponential time.
We then considered how to select a linear-space algorithm, from among DFBnB, iterative-deepening (ID) and recursive best-first search (RBFS). We also showed that DFBnB runs faster than BFS in some cases, even if the former expands more nodes than the latter. Our results on random trees and two real problems, lookahead search on sliding-tile puzzles and the asymmetric traveling salesman problem, show that (a) DFBnB is preferable on problems that can be formulated by bounded-depth trees and require exponential computation; (b) RBFS should be applied to problems that cannot be represented by bounded-depth trees, or problems that can be solved in polynomial time.

References

[1] Balas, E., and P. Toth, "Branch and bound methods," The Traveling Salesman Problem, E.L. Lawler, et al. (eds.), John Wiley and Sons, 1985, pp. 361-401.
[2] Dechter, R., and J. Pearl, "Generalized best-first search strategies and the optimality of A*," JACM, 32 (1985) 505-36.
[3] Karp, R.M., and J. Pearl, "Searching for an optimal path in a tree with random cost," Artificial Intelligence, 21 (1983) 99-117.
[4] Korf, R.E., "Depth-first iterative-deepening: An optimal admissible tree search," Artificial Intelligence, 27 (1985) 97-109.
[5] Korf, R.E., "Real-time heuristic search," Artificial Intelligence, 42 (1990) 189-211.
[6] Korf, R.E., "Linear-space best-first search: Summary of results," Proc. 10th National Conf. on AI, AAAI-92, San Jose, CA, July 1992, pp. 533-8.
[7] Korf, R.E., "Linear-space best-first search," Artificial Intelligence, to appear.
[8] Lawler, E.L., et al., The Traveling Salesman Problem, John Wiley and Sons, 1985.
[9] Martello, S., and P. Toth, "Linear assignment problems," Annals of Discrete Mathematics, 31 (1987) 259-82.
[10] McDiarmid, C.J.H., "Probabilistic analysis of tree search," Disorder in Physical Systems, G.R. Grimmett and D.J.A. Welsh (eds), Oxford Science Pub., 1990, pp. 249-60.
[11] McDiarmid, C.J.H., and G.M.A.
Provan, "An expected-cost analysis of backtracking and non-backtracking algorithms," Proc. 12th Intern. Joint Conf. on AI, IJCAI-91, Sydney, Australia, Aug. 1991, pp. 172-7.
[12] Ratner, D., and M. Warmuth, "Finding a shortest solution for the NxN extension of the 15-puzzle is intractable," Proc. 5th National Conf. on AI, AAAI-86, Philadelphia, PA, 1986.
[13] Vempaty, N.R., V. Kumar, and R.E. Korf, "Depth-first vs best-first search," Proc. 9th National Conf. on AI, AAAI-91, CA, July 1991, pp. 434-40.
[14] Wah, B.W., and C.F. Yu, "Stochastic modeling of branch-and-bound algorithms with best-first search," IEEE Trans. on Software Engineering, 11 (1985) 922-34.
[15] Zhang, W., and R.E. Korf, "An average-case analysis of branch-and-bound with applications: Summary of results," Proc. 10th National Conf. on AI, AAAI-92, San Jose, CA, July 1992, pp. 545-50.
[16] Zhang, W., and R.E. Korf, "Performance of linear-space branch-and-bound algorithms," submitted to Artificial Intelligence, 1992.
[17] Zhang, W., and R.E. Korf, "On the asymmetric traveling salesman problem under subtour elimination and local search," submitted, March 1993.
Rens Bod
Department of Computational Linguistics
University of Amsterdam, Spuistraat 134
NL-1012 VB Amsterdam
rens@alflet.uva.nl

Abstract

In Data Oriented Parsing (DOP), an annotated language corpus is used as a virtual stochastic grammar. An input string is parsed by combining subtrees from the corpus. As a consequence, one parse tree can usually be generated by several derivations that involve different subtrees. This leads to a statistics where the probability of a parse is equal to the sum of the probabilities of all its derivations. In (Scha, 1990) an informal introduction to DOP is given, while (Bod, 1992) provides a formalization of the theory. In this paper we show that the maximum probability parse can be estimated in polynomial time by applying Monte Carlo techniques. The model was tested on a set of hand-parsed strings from the Air Travel Information System (ATIS) corpus. Preliminary experiments yield 96% test set parsing accuracy.

Motivation

As soon as a formal grammar characterizes a non-trivial part of a natural language, almost every input string of reasonable length gets an unmanageably large number of different analyses. Since most of these analyses are not perceived as plausible by a human language user, there is a need for distinguishing the plausible parse(s) of an input string from the implausible ones. In stochastic language processing, it is assumed that the most plausible parse of an input string is its most probable parse. Most instantiations of this idea estimate the probability of a parse by assigning application probabilities to context free rewrite rules (Jelinek, 1990), or by assigning combination probabilities to elementary structures (Resnik, 1992; Schabes, 1992).
There is some agreement now that context free rewrite rules are not adequate for estimating the probability of a parse, since they cannot capture syntactic/lexical context, and hence cannot describe how the probability of syntactic structures or lexical items depends on that context. In stochastic tree-adjoining grammar (Schabes, 1992), this lack of context-sensitivity is overcome by assigning probabilities to larger structural units. However, it is not always evident which structures should be considered as elementary structures. In (Schabes, 1992) it is proposed to infer a stochastic TAG from a large training corpus using an inside-outside-like iterative algorithm.

Data Oriented Parsing (DOP) (Scha, 1990; Bod, 1992) distinguishes itself from other statistical approaches in that it omits the step of inferring a grammar from a corpus. Instead an annotated corpus is directly used as a stochastic grammar. An input string is parsed by combining subtrees from the corpus. In this view, every subtree can be considered as an elementary structure. As a consequence, one parse tree can usually be generated by several derivations that involve different subtrees. This leads to a statistics where the probability of a parse is equal to the sum of the probabilities of all its derivations. It is hoped that this approach can accommodate all statistical properties of a language corpus.

Let us illustrate DOP with an extremely simple example. Suppose that a corpus consists of only two trees:

[Figure omitted: the two example corpus trees (S-trees over NP and VP, with lexical items including John and Peter).]

Suppose that our combination operation (indicated with ∘) consists of substituting a subtree on the leftmost identically labeled leaf node of another subtree. Then the sentence Mary likes Susan can be parsed as an S by combining the following subtrees from the corpus.

[Figure omitted: a derivation of "Mary likes Susan" from corpus subtrees.]

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.
But the same parse tree can also be derived by combining other subtrees, for instance:

[Figure omitted: two alternative derivations of the same parse tree.]

Thus, a parse can have several derivations involving different subtrees. These derivations have different probabilities. Using the corpus as our stochastic grammar, we estimate the probability of substituting a certain subtree on a specific node as the probability of selecting this subtree among all subtrees in the corpus that could be substituted on that node. The probability of a derivation can be computed as the product of the probabilities of the subtrees that are combined. For the example derivations above, this yields:

P(1st example) = 1/20 × 1/4 × 1/4 = 1/320
P(2nd example) = 1/20 × 1/4 × 1/2 = 1/160
P(3rd example) = 2/20 × 1/4 × 1/8 × 1/4 = 1/1280

This example illustrates that a statistical language model which defines probabilities over parses by taking into account only one derivation does not accommodate all statistical properties of a language corpus. Instead, we will define the probability of a parse as the sum of the probabilities of all its derivations. Finally, the probability of a string is equal to the sum of the probabilities of all its parses.

We will show that conventional parsing techniques can be applied to DOP, but that this becomes very inefficient, since the number of derivations of a parse grows exponentially with the length of the input string. However, we will show that DOP can be parsed in polynomial time by using Monte Carlo techniques. An important advantage of using a corpus for probability calculation is that no training of parameters is needed, as is the case for other stochastic grammars (Jelinek et al., 1990; Pereira and Schabes, 1992; Schabes, 1992). Secondly, since we take into account all derivations of a parse, no relationship that might possibly be of statistical interest is ignored.
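The derivation-probability arithmetic above can be checked mechanically. This is a minimal sketch using exact fractions, with the three derivation probabilities taken from the example (the underlying corpus subtree counts are not reproduced here):

```python
from fractions import Fraction

# Probabilities of the three example derivations of "Mary likes Susan",
# each a product of conditional subtree-substitution probabilities.
d1 = Fraction(1, 20) * Fraction(1, 4) * Fraction(1, 4)                   # 1/320
d2 = Fraction(1, 20) * Fraction(1, 4) * Fraction(1, 2)                   # 1/160
d3 = Fraction(2, 20) * Fraction(1, 4) * Fraction(1, 8) * Fraction(1, 4)  # 1/1280

# DOP: the probability of a parse is the SUM over all its derivations;
# these three derivations alone already contribute 13/1280.
partial_parse_prob = d1 + d2 + d3
print(partial_parse_prob)  # 13/1280
```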
The Model

As might be clear by now, a DOP-model is characterized by a corpus of tree structures, together with a set of operations that combine subtrees from the corpus into new trees. In this section we explain more precisely what we mean by subtree, operations etc., in order to arrive at definitions of a parse and the probability of a parse with respect to a corpus. For a treatment of DOP in more formal terms we refer to (Bod, 1992).

Subtree

A subtree of a tree T is a connected subgraph S of T such that for every node in S holds that if it has daughter nodes, then these are equal to the daughter nodes of the corresponding node in T. It is trivial to see that a subtree is also a tree. In the following example T1 and T2 are subtrees of T, whereas T3 isn't.

[Figure omitted: a tree T, two of its subtrees T1 and T2, and a tree T3 that is not a subtree of T.]

The general definition above also includes subtrees consisting of one node. Since such subtrees do not contribute to the parsing process, we exclude these pathological cases and consider as the set of subtrees the non-trivial ones consisting of more than one node. We shall use the following notation to indicate that a tree t is a non-trivial subtree of a tree in a corpus C:

t ∈ C =def ∃ T ∈ C: t is a non-trivial subtree of T

Operations

In this article we will limit ourselves to the basic operation of substitution. Other possible operations are left to future research. If t and u are trees, such that the leftmost non-terminal leaf of t is equal to the root of u, then t∘u is the tree that results from substituting this non-terminal leaf in t by tree u. The partial function ∘ is called substitution. We will write (t∘u)∘v as t∘u∘v, and in general (..((t1∘t2)∘t3)∘..)∘tn as t1∘t2∘t3∘...∘tn. The restriction leftmost in the definition is motivated by the fact that it eliminates different derivations consisting of the same subtrees.
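The substitution operation is easy to state in code. Below is a small illustrative sketch (not from the paper) representing trees as `(label, children)` tuples; `substitute` replaces the leftmost non-terminal leaf of `t` by `u` when the labels match, and is undefined (returns `None`) otherwise, mirroring the partiality of ∘. Treating upper-case leaf labels as non-terminals is an assumption made for this toy encoding.

```python
# A tree is (label, children); a leaf has children == ().
def leftmost_nonterminal_leaf_path(t, path=()):
    """Return the child-index path to the leftmost non-terminal leaf, or None."""
    label, children = t
    if not children:
        # Toy convention (an assumption): non-terminal labels are upper-case.
        return path if label.isupper() else None
    for i, c in enumerate(children):
        p = leftmost_nonterminal_leaf_path(c, path + (i,))
        if p is not None:
            return p
    return None

def _label_at(t, path):
    for i in path:
        t = t[1][i]
    return t[0]

def _replace(t, path, u):
    if not path:
        return u
    label, children = t
    i, rest = path[0], path[1:]
    return (label, tuple(_replace(c, rest, u) if j == i else c
                         for j, c in enumerate(children)))

def substitute(t, u):
    """t o u: substitute u for the leftmost non-terminal leaf of t (labels must match)."""
    path = leftmost_nonterminal_leaf_path(t)
    if path is None or _label_at(t, path) != u[0]:
        return None  # substitution undefined: o is a partial function
    return _replace(t, path, u)

# (S over an open NP and a VP) o (NP -> Mary) fills the leftmost open NP slot.
t = ("S", (("NP", ()), ("VP", (("V", (("likes", ()),)), ("NP", ())))))
u = ("NP", (("Mary", ()),))
print(substitute(t, u))
```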
Parse

Tree T is a parse of input string s with respect to a corpus C, iff the yield of T is equal to s and there are subtrees t1,...,tn ∈ C such that T = t1∘...∘tn. The set of parses of s with respect to C is thus given by:

Parses(s,C) = {T | yield(T) = s ∧ ∃ t1,...,tn ∈ C: T = t1∘...∘tn}

The definition correctly includes the trivial case of a subtree from the corpus whose yield is equal to the complete input string.

Derivation

A derivation of a parse T with respect to a corpus C is a tuple of subtrees (t1,...,tn) such that t1,...,tn ∈ C and t1∘...∘tn = T. The set of derivations of T with respect to C is thus given by:

Derivations(T,C) = {(t1,...,tn) | t1,...,tn ∈ C ∧ t1∘...∘tn = T}

Probability

Subtree. Given a subtree t1 ∈ C, a function root that yields the root of a tree, and a node labeled X, the conditional probability P(t=t1 | root(t)=X) denotes the probability that t1 is substituted on X. If root(t1) ≠ X, this probability is 0. If root(t1) = X, this probability can be estimated as the ratio between the number of occurrences of t1 in C and the total number of occurrences of subtrees t' in C for which holds that root(t') = X. Evidently, Σi P(t=ti | root(t)=X) = 1 holds.

Derivation. The probability of a derivation (t1,...,tn) is equal to the probability that the subtrees t1,...,tn are combined. This probability can be computed as the product of the conditional probabilities of the subtrees t1,...,tn. Let lnl(x) be the leftmost non-terminal leaf of tree x; then:

P((t1,...,tn)) = P(t=t1 | root(t)=S) × ∏_{i=2..n} P(t=ti | root(t)=lnl(ti-1))

Parse. The probability of a parse is equal to the probability that any of its derivations occurs. Since the derivations are mutually exclusive, the probability of a parse T is the sum of the probabilities of all its derivations. Let Derivations(T,C) = {d1,...,dn}; then: P(T) = Σi P(di).
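The subtree distribution defined above is just a relative frequency among corpus subtrees with the required root. A minimal sketch, with an invented toy multiset of corpus subtrees keyed as "root/id":

```python
from collections import Counter

# Hypothetical multiset of corpus subtrees, each named "root/identifier".
corpus_subtrees = ["S/a", "S/b", "S/b", "NP/c", "NP/d", "NP/d", "NP/d"]
counts = Counter(corpus_subtrees)

def root(subtree):
    return subtree.split("/")[0]

def p_subtree_given_root(subtree, x):
    """P(t = subtree | root(t) = x): relative frequency among subtrees rooted in x."""
    if root(subtree) != x:
        return 0.0
    total = sum(n for s, n in counts.items() if root(s) == x)
    return counts[subtree] / total

print(p_subtree_given_root("NP/d", "NP"))  # 0.75
```

By construction the estimates sum to one over all subtrees with the same root, as the text requires.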
The conditional probability of a parse T given input string s can be computed as the ratio between the probability of T and the sum of the probabilities of all parses of s.

String. The probability of a string is equal to the probability that any of its parses occurs. Since the parses are mutually exclusive, the probability of a string s can be computed as the sum of the probabilities of all its parses. Let Parses(s,C) = {T1,...,Tn}; then: P(s) = Σi P(Ti). It will be shown that Σs P(s) = 1 holds.

Monte Carlo Parsing

It is easy to show that an input string can be parsed with conventional parsing techniques, by applying subtrees instead of rules to the input string (Bod, 1992). Every subtree t can be seen as a production rule root(t) → t, where the non-terminals of the yield of the right hand side constitute the symbols to which new rules/subtrees are applied. Given a polynomial time parsing algorithm, a derivation of the input string, and hence a parse, can be calculated in polynomial time. But if we calculate the probability of a parse by exhaustively calculating all its derivations, the time complexity becomes exponential, since the number of derivations of a parse of an input string grows exponentially with the length of the input string. Nevertheless, by applying Monte Carlo techniques (Hammersley and Handscomb, 1964), we can estimate the probability of a parse and make its error arbitrarily small in polynomial time. The essence of Monte Carlo is very simple: it estimates a probability distribution of events by taking random samples. The larger the samples we take, the higher the reliability. For DOP this means that, instead of exhaustively calculating all parses with all their derivations, we randomly calculate N parses of an input string (by taking random samples from the subtrees that can be substituted on a specific node in the parsing process).
The estimated probability of a certain parse given the input string is then equal to the number of times that parse occurred, normalized with respect to N. We can estimate a probability as accurately as we want by choosing N as large as we want, since according to the Strong Law of Large Numbers the estimated probability converges to the actual probability. From a classical result of probability theory (Chebyshev's inequality) it follows that the time complexity of achieving a maximum error ε is given by O(ε^-2). Thus the error of probability estimation can be made arbitrarily small in polynomial time, provided that the parsing algorithm is not worse than polynomial. Obviously, probable parses of an input string are more likely to be generated than improbable ones. Thus, in order to estimate the maximum probability parse, it suffices to sample until stability in the top of the parse distribution occurs. The parse which is generated most often is then the maximum probability parse.

We now show that the probability that a certain parse is generated by Monte Carlo is exactly the probability of that parse according to the DOP-model. First, the probability that a subtree t ∈ C is sampled at a certain point in the parsing process (where a non-terminal X is to be substituted) is equal to P(t | root(t) = X). Secondly, the probability that a certain sequence t1,...,tn of subtrees that constitutes a derivation of a parse T is sampled, is equal to the product of the conditional probabilities of these subtrees. Finally, the probability that any sequence of subtrees that constitutes a derivation of a certain parse T is sampled, is equal to the sum of the probabilities that these derivations are sampled. This is the probability that a certain parse T is sampled, which is equivalent to the probability of T according to the DOP-model. We shall call a parser which applies this Monte Carlo technique a Monte Carlo parser.
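The "sample until the top of the distribution is stable" idea can be illustrated independently of any particular parser. In the sketch below (an illustration, not the paper's implementation), a toy sampler stands in for the derivation-sampling process, and the most frequently sampled parse is returned as the maximum probability parse:

```python
import random
from collections import Counter

def monte_carlo_mode(sample_parse, n):
    """Sample n parses and return the one generated most often."""
    tally = Counter(sample_parse() for _ in range(n))
    return tally.most_common(1)[0][0]

# Toy stand-in for the sampler: three parses "A", "B", "C" with fixed
# probabilities (invented numbers; seeded for reproducibility).
def sample_parse(rng=random.Random(0)):
    return rng.choices(["A", "B", "C"], weights=[0.5, 0.3, 0.2])[0]

# With enough samples the estimate converges to the true maximum
# probability parse (Strong Law of Large Numbers).
print(monte_carlo_mode(sample_parse, 1000))  # prints A, the true mode
```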
With respect to the theory of computation, a Monte Carlo parser is a probabilistic algorithm which belongs to the class of Bounded error Probabilistic Polynomial time (BPP) algorithms. BPP-problems are characterized by the following: it may take exponential time to solve them exactly, but there exists an estimation algorithm with a probability of error that becomes arbitrarily small in polynomial time.

For our experiments we used part-of-speech sequences of spoken-language transcriptions from the Air Travel Information System (ATIS) corpus (Hemphill et al., 1990), with the labeled-bracketings of those sequences in the Penn Treebank (Marcus, 1991). The 750 labeled-bracketings were divided at random into a DOP-corpus of 675 trees and a test set of 75 part-of-speech sequences. The following tree is an example from the DOP-corpus, where for reasons of readability the lexical items are added to the part-of-speech tags.

[Example tree omitted: a labeled bracketing from the DOP-corpus; the recoverable lexical items include "Show", "me", "the", "flights", "from", "Dallas", "to", "Denver", "early", "in", "the", "morning".]

As a measure for parsing accuracy we took the percentage of the test sentences for which the maximum probability parse derived by the Monte Carlo parser (for a sample size N) is identical to the Treebank parse. It is one of the most essential features of the DOP approach that arbitrarily large subtrees are taken into consideration. In order to test the usefulness of this feature, we performed different experiments constraining the depth of the subtrees. The depth of a tree is defined as the length of its longest path. The following table shows the results of seven experiments. The accuracy refers to the parsing accuracy at sample size N = 100, and is rounded off to the nearest integer.

[Table omitted: parsing accuracy for the ATIS corpus at sample size N = 100 for seven maximum subtree depths; the only entry that survives extraction is the best accuracy, 96%.]
The table shows that there is a relatively rapid increase in parsing accuracy when enlarging the maximum depth of the subtrees to 3. The accuracy keeps increasing, at a slower rate, when the depth is enlarged further. The highest accuracy is obtained by using all subtrees from the corpus: 72 out of the 75 sentences from the test set are parsed correctly. In the following figure, parsing accuracy is plotted against the sample size N for three of our experiments: the experiments where the depth of the subtrees is constrained to 2 and 3, and the experiment where the depth is unconstrained. (The maximum depth in the ATIS corpus is 13.)

[Figure omitted: parsing accuracy for the ATIS corpus against sample size N, with depth ≤ 2, with depth ≤ 3, and with unbounded depth.]

In (Pereira and Schabes, 1992), 90.36% bracketing accuracy was reported using a stochastic CFG trained on bracketings from the ATIS corpus. Though we cannot make a direct comparison, our pilot experiment suggests that our model may have better performance than a stochastic CFG. However, there is still an error rate of 4%. Although there is no reason to expect 100% accuracy in the absence of any semantic or pragmatic analysis, it seems that the accuracy might be further improved. Three limitations of the current experiments are worth mentioning.

First, the Treebank annotations are not rich enough. Although the Treebank uses a relatively rich part-of-speech system (48 terminal symbols), there are only 15 non-terminal symbols. Especially the internal structure of noun phrases is very poor. Semantic annotations are completely absent. Secondly, it could be that subtrees which occur only once in the corpus give bad estimations of their actual probabilities. The question as to whether reestimation techniques would further improve the accuracy must be considered in future research. Thirdly, it could be that our corpus is not large enough.
This brings us to the question as to how much parsing accuracy depends on the size of the corpus. For studying this question, we performed additional experiments with different corpus sizes. Starting with a corpus of only 50 parse trees (randomly chosen from the initial DOP-corpus of 675 trees), we increased its size with intervals of 50. As our test set, we took the same 75 p-o-s sequences as used in the previous experiments. In the next figure the parsing accuracy, for sample size N = 100, is plotted against the corpus size, using all corpus subtrees.

[Figure omitted: parsing accuracy for the ATIS corpus against corpus size, with unbounded depth.]

The figure shows the increase in parsing accuracy. For a corpus size of 450 trees, the accuracy already reaches 88%. After this, the growth decreases, but the accuracy is still growing at corpus size 675. Thus, we would expect a higher accuracy if the corpus is further enlarged.

Conclusions and Future Research

We have presented a language model that uses an annotated corpus as a virtual stochastic grammar. We restricted ourselves to substitution as the only combination operation between corpus subtrees. A statistical parsing theory was developed, where one parse can be generated by different derivations, and where the probability of a parse is computed as the sum of the probabilities of all its derivations. It was shown that the maximum probability parse can be estimated as accurately as desired in polynomial time by using Monte Carlo techniques. The method has been successfully tested on a set of part-of-speech sequences derived from the ATIS corpus. It turned out that parsing accuracy improved if larger subtrees were used. We would like to extend our experiments to larger corpora, like the Wall Street Journal corpus. This might raise computational problems, since the number of subtrees becomes extremely large. Methods of constraining the number of subtrees, without losing accuracy, should be investigated.
Furthermore, in order to tackle the problem of data sparseness, the possibility of abstracting from corpus data should be included, but statistical models of abstractions of features and categories are not yet available.

Acknowledgements. The author is very much indebted to Remko Scha for many valuable comments on earlier versions of this paper. The author is also grateful to Mitch Marcus for supplying the ATIS corpus.

References

R. Bod, 1992. "A Computational Model of Language Performance: Data Oriented Parsing", Proceedings COLING-92, Nantes.
J.M. Hammersley and D.C. Handscomb, 1964. Monte Carlo Methods, Chapman and Hall, London.
C.T. Hemphill, J.J. Godfrey and G.R. Doddington, 1990. "The ATIS spoken language systems pilot corpus". DARPA Speech and Natural Language Workshop, Hidden Valley, Morgan Kaufmann.
F. Jelinek, J.D. Lafferty and R.L. Mercer, 1990. Basic Methods of Probabilistic Context Free Grammars, Technical Report IBM RC 16374 (#72684), Yorktown Heights.
M. Marcus, 1991. "Very Large Annotated Database of American English". DARPA Speech and Natural Language Workshop, Pacific Grove, Morgan Kaufmann.
F. Pereira and Y. Schabes, 1992. "Inside-Outside Reestimation from Partially Bracketed Corpora", Proceedings ACL-92, Newark.
P. Resnik, 1992. "Probabilistic Tree-Adjoining Grammar as a Framework for Statistical Natural Language Processing", Proceedings COLING-92, Nantes.
R. Scha, 1990. "Language Theory and Language Technology; Competence and Performance" (in Dutch), in Q.A.M. de Kort and G.L.J. Leerdam (eds.), Computertoepassingen in de Neerlandistiek, Almere: Landelijke Vereniging van Neerlandici (LVVN-jaarboek).
Y. Schabes, 1992. "Stochastic Lexicalized Tree-Adjoining Grammars", Proceedings COLING-92, Nantes.
Equations for Part-of-Speech Tagging

Eugene Charniak and Curtis Hendrickson and Neil Jacobson and Mike Perkowitz*
Department of Computer Science
Brown University
Providence RI 02912

Abstract

We derive from first principles the basic equations for a few of the basic hidden-Markov-model word taggers, as well as equations for other models which may be novel (the descriptions in previous papers being too spare to be sure). We give performance results for all of the models. The results from our best model (96.45% on an unused test sample from the Brown corpus with 181 distinct tags) are on the upper edge of reported results. We also hope these results clear up some confusion in the literature about the best equations to use. However, the major purpose of this paper is to show how the equations for a variety of models may be derived and thus encourage future authors to give the equations for their model and the derivations thereof.

Introduction

The last few years have seen a fair number of papers on part-of-speech tagging, that is, assigning the correct part of speech to each word in a text [1,2,4,5,7,8,9,10]. Most of these systems view the text as having been produced by a hidden Markov model (HMM), so that the tagging problem can be viewed as one of deciding which states the Markov process went through during its generation of the text. (For an example of a system which does not take this view, see [2].) Unfortunately, despite the obvious mathematical formulation that HMMs provide, few of the papers bother to define the mathematical model they use. In one case this has resulted in a confusion which we address subsequently. In most every case it has meant that large parts of the models are never described at all, and even when they are described the English descriptions are often vague and the occasional mathematical symbol hard to interpret as one is lacking a derivation of the equations in which it should rest.
In this paper we hope to rectify this situation by showing how a variety of Markov tagging models can be derived from first principles. Furthermore, we have implemented these models and give their performance. We do not claim that any of the models perform better than taggers reported elsewhere, although the best of them at 96.45% is at the upper end of reported results. However, the best taggers all perform at about the same level of accuracy. Rather, our goal is to systematize the "seat of the pants" knowledge which the community has already accumulated. One place where we might be breaking new ground is in techniques for handling the sparse-data problems which inevitably arise. But even here it is hard to be sure if our techniques are new, since previous authors have barely mentioned their sparse-data techniques, much less formalized them. We believe that providing a clean mathematical notation for expressing the relevant techniques will take this area out of the realm of the unmentionable and into that of polite scientific discussion.

*This research was supported in part by NSF contract IRI-8911122 and ONR contract N00014-91-J-1202.

The Simplest Model

We assume that our language has some fixed vocabulary, {w^1, w^2, ..., w^W}. This is a set of words, e.g., {a, aardvark, ..., zygote}. We also assume a fixed set of parts of speech, or tags, {t^1, t^2, ..., t^T}, e.g., {adjective, adverb, ..., verb}. We consider a text of n words to be a sequence of random variables W_{1,n} = W_1 W_2 ... W_n. Each of these random variables can take as its value any of the possible words in our vocabulary. More formally, let the function V(X) denote the possible values (outcomes) for the random variable X. Then V(W_i) = {w^1, w^2, ..., w^W}. We denote the value of W_i by w_i, and a particular sequence of n values for W_{1,n} by w_{1,n}. In a similar way, we consider the tags for these words to be a sequence of n random variables T_{1,n} = T_1 T_2 ... T_n.
A particular sequence of values for these is denoted as t_{1,n}, and the ith one of these is t_i. The tagging problem can then be formally defined as finding the sequence of tags t_{1,n} which is the result of the following function:

τ(w_{1,n}) =def argmax_{t_{1,n}} P(T_{1,n} = t_{1,n} | W_{1,n} = w_{1,n})   (1)

In the normal way we typically omit reference to the random variables themselves and just mention their values. In this way Equation 1 becomes:

τ(w_{1,n}) = argmax_{t_{1,n}} P(t_{1,n} | w_{1,n})   (2)

We now turn Equation 2 into a more convenient form.

τ(w_{1,n}) = argmax_{t_{1,n}} P(t_{1,n}, w_{1,n}) / P(w_{1,n})   (3)
           = argmax_{t_{1,n}} P(t_{1,n}, w_{1,n})   (4)

In going from Equation 3 to 4 we dropped P(w_{1,n}) as it is constant for all t_{1,n}. Next we want to break Equation 4 into "bite-size" pieces about which we can collect statistics. To a first approximation there are two ways this can be done. The first is like this:

P(t_{1,n}, w_{1,n}) = P(w_1) P(t_1 | w_1) P(w_2 | t_1, w_1) P(t_2 | t_1, w_{1,2}) ··· P(w_n | t_{1,n-1}, w_{1,n-1}) P(t_n | t_{1,n-1}, w_{1,n})   (5)
                    = P(w_1) P(t_1 | w_1) ∏_{i=2}^{n} P(w_i | t_{1,i-1}, w_{1,i-1}) P(t_i | t_{1,i-1}, w_{1,i})   (6)
                    = ∏_{i=1}^{n} P(w_i | t_{1,i-1}, w_{1,i-1}) P(t_i | t_{1,i-1}, w_{1,i})   (7)

Here we simplified Equation 6 to get Equation 7 by suitably defining terms like t_{1,0} and their probabilities. We derived Equation 7 by first breaking out P(w_1) from P(t_{1,n}, w_{1,n}). In a similar way we can first break out P(t_1), giving this:

P(t_{1,n}, w_{1,n}) = ∏_{i=1}^{n} P(t_i | t_{1,i-1}, w_{1,i-1}) P(w_i | t_{1,i}, w_{1,i-1})   (8)

All of our models start from Equations 7 or 8, or, when we discuss equations which smooth using word morphology, modest variations of them. Up to this point we have made no assumptions about the probabilities we are dealing with, and thus the probabilities required by Equations 7 and 8 are not empirically collectible. The models we develop in this paper differ in just the assumptions they make to allow for the collection of relevant data.
We call these assumptions "Markov assumptions" because they make it possible to view the tagging as a Markov process. We start with the simplest of these models (i.e., the one based upon the strongest Markov assumptions). We start with Equation 7 and make the following Markov assumptions:

P(w_i | t_{1,i-1}, w_{1,i-1}) = P(w_i | w_{1,i-1})   (9)
P(t_i | t_{1,i-1}, w_{1,i}) = P(t_i | w_i)   (10)

Substituting these equations into Equation 7, and substituting that into Equation 4, we get:

τ(w_{1,n}) = argmax_{t_{1,n}} ∏_{i=1}^{n} P(w_i | w_{1,i-1}) P(t_i | w_i)   (11)
           = argmax_{t_{1,n}} ∏_{i=1}^{n} P(t_i | w_i)   (12)

Equation 12 has a very simple interpretation. For each word we pick the tag which is most common for that word. This is our simplest model.

Estimation of Parameters

Before one can use such a model, however, one still needs to estimate the relevant parameters. For Equation 12 we need the probabilities of each possible tag for each possible word: P(t^i | w^j). The most obvious way to get these is from a corpus which has been tagged by hand. Fortunately there is such a corpus, the Brown Corpus [6], and all of the statistical data we collect are from a subset of this corpus consisting of 90% of the sentences chosen at random. (The other 10% we reserve for testing our models.) So, let C(t^i, w^j) be the number of times the word w^j appears in our training corpus with the tag t^i. In the obvious way, C(w^j) = Σ_i C(t^i, w^j). Then one approximation to the statistics needed for Equation 12 is the following estimate:

P(t^i | w^j) ≈ C(t^i, w^j) / C(w^j)   (13)

However, Equation 13 has problems when the training data is not complete. For example, suppose there is no occurrence of word w^j. First, the quotient in Equation 13 is undefined. As this is a problem throughout this paper, we henceforth define zero divided by zero to be zero. But this still means that P(t^i | w^j) is zero for all t^i. We solve this problem by adding further terms to Equation 13. We model this after what is typically done in smoothing tri-gram models for English [7].
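The failure mode of Equation 13 is easy to exhibit concretely. In the minimal sketch below (invented toy counts, not the paper's data), the raw relative-frequency estimate works for a seen word but assigns zero to every tag of an unseen word, which is exactly the gap the interpolation discussed next is meant to fill:

```python
from collections import Counter, defaultdict

# Hypothetical counts C(t, w) from a tagged training corpus.
C = Counter({("modal", "can"): 15, ("noun", "can"): 5})
Cw = defaultdict(int)
for (t, w), n in C.items():
    Cw[w] += n  # C(w) = sum over tags of C(t, w)

def p_tag_raw(t, w):
    """Equation 13: P(t | w) estimated as C(t, w) / C(w), with 0/0 defined as 0."""
    return C[(t, w)] / Cw[w] if Cw[w] else 0.0

print(p_tag_raw("modal", "can"))      # 0.75: sensible for a seen word
print(p_tag_raw("noun", "aardwolf"))  # 0.0 for EVERY tag of an unseen word
```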
Thus we add a second term to the equation, with weights attached to each term saying how heavily that term should be counted. That is, we are looking for an equation of the following form:

P(t^i | w^j) ≈ λ1(w^j) C(t^i, w^j)/C(w^j) + λ2(w^j) f(t^i, w^j).   (14)

Here f(t^i, w^j) stands in for our as-yet-undisclosed improvement. The two λs are the weights to be given to each term. Note that they can be different for different w^j, and thus we have made them functions of w^j. For any w^j, Equation 14 must sum to one over all t^i. We ensure this by requiring that the λs sum to one and that the terms they combine do so as well.

If we are primarily concerned about estimating P(t^i | w^j) for w^j which have not been encountered before, the λs can take a particularly simple form:

λ1(w^j) = { 1 if C(w^j) ≥ 1; 0 otherwise },   λ2(w^j) = 1 - λ1(w^j).   (15)

With these λs, the second term of Equation 14 should be the probability that a token of a word w^j which we have never seen before has the tag t^i. Obviously we cannot really collect statistics on something that has never occurred; however, when we were gathering our count data in the first place we often encountered words which up to that point had not been seen. We collect statistics on these situations to stand in for those which occur in the test data. Thus, let C_n(t^i) be the number of times a word which has never been seen with the tag t^i gets this tag, and let C_n(·) be the number of such occurrences in total. Then our improved probability estimation equation is this:

P(t^i | w^j) ≈ λ1(w^j) C(t^i, w^j)/C(w^j) + λ2(w^j) C_n(t^i)/C_n(·).   (16)

With this improved parameter estimation function we are now able to collect statistics from our corpus and test the model thereby derived on our test corpus. The results are quite impressive for so simple a model: 90.25% of the words in the test data are labeled correctly. (The data for all of the models are summarized at the end of the paper in Figure 2.)

The "Standard" Model

The model of Equations 12 and 16 does not take any context into account; it simply chooses the most likely tag for each word out of context. Next we develop a model which does take context into account. This time we start from Equation 8 and simplify it by making the following two Markov assumptions:

P(t_i | t_{1,i-1}, w_{1,i-1}) = P(t_i | t_{i-1})   (17)
P(w_i | t_{1,i}, w_{1,i-1}) = P(w_i | t_i)   (18)

That is, we assume that the current tag is independent of the previous words and dependent only on the previous tag. Similarly, we assume that the correct word is independent of everything except knowledge of its tag. With these assumptions we get the following equation:

T(w_{1,n}) = argmax_{t_{1,n}} ∏_{i=1}^{n} P(t_i | t_{i-1}) P(w_i | t_i)   (19)

This equation, or something like it, is at the basis of most of the tagging programs created over the last few years. One modification expands P(t_i | t_{i-1}) to take into consideration the last two tags [4,7]. Experimentation has shown that it offers a slight improvement, but not a great deal; we ignore it henceforth. Another modification conditions the tag probability on the tags following the word rather than those which precede it [4]. However, it is easy to show that this has no effect on results. A more important difference is that many do not use Equation 19 or the just-mentioned variants, but rather:

T(w_{1,n}) = argmax_{t_{1,n}} ∏_{i=1}^{n} P(t_i | t_{i-1}) P(t_i | w_i).   (20)

The difference is in the last term. This equation is found in [4,9] and is described in words in [5]. (However, while Church gives Equation 20 in [4], the results cited there were based upon Equation 19 (Church, personal communication).) Equation 20 seems plausible, except that it is virtually impossible to derive it from basic considerations (at least we have been unable to do so). Nevertheless, given the drastic Markov assumptions we made in the derivation of Equation 19, it is hard to be sure that its comparative theoretical purity translates into better performance. Indeed, the one paper we are acquainted with in which the comparison was made [1] found that the less pure Equation 20 gave the better performance. However, this was on a very small amount of training data, and thus the results may not be accurate.

To determine which, in fact, does work better we trained both on 90% of the Brown Corpus and tested on the remainder. We smoothed the probabilities using Equation 16. To use this on Equation 19 we made the following change in its form:

T(w_{1,n}) = argmax_{t_{1,n}} ∏_{i=1}^{n} P(t_i | t_{i-1}) P(t_i | w_i) P(w_i) / P(t_i)   (21)
           = argmax_{t_{1,n}} ∏_{i=1}^{n} P(t_i | t_{i-1}) P(t_i | w_i) / P(t_i)   (22)

It was also necessary to smooth P(t_i | t_{i-1}). In particular, we found in our test data consecutive words with unambiguous tags where the tags had not been seen consecutively in the training data. To overcome this problem in the simplest fashion, we smoothed the probability as follows:

P(t_i | t_{i-1}) ≈ (1 - ε) C(t_{i-1}, t_i)/C(t_{i-1}) + ε.   (23)

Here ε is a very small number, so that its contribution is swamped by the count data unless that contributes zero. The net effect is that when there is no data on tag context, the decision is made on the basis of P(w_i | t_i) in Equation 19 or P(t_i | w_i) in Equation 20.

The results were unequivocal. For Equation 19 we got 95.15% correct, while for the less pure Equation 20 the results were poorer, 94.09%. While this may not seem like a huge difference, a better way to think of it is that we got an 18% reduction in errors. Furthermore, given that the models have exactly the same complexity, there is no cost for this improvement.

The smoothing model of Equation 16 is very crude. This and the subsequent section improve upon it. One problem is that the model uses raw counts to estimate the probabilities for a word's tags once it has seen the word, even if only once. Obviously, if we have seen a word, say, 100 times, the counts probably give a good estimate, but for words we have seen only once they can be quite inaccurate. The improvement in this section is concerned with this problem.
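The bigram model of Equations 19 and 23 can be made concrete with a small dynamic-programming (Viterbi) tagger. This is only a sketch under toy assumptions: the corpus, tag names, and the crude relative-frequency estimate of P(w_i | t_i) are mine, not the paper's; a real implementation would plug in the smoothed estimators of Equations 16 and 26.

```python
from collections import defaultdict

def train_counts(tagged_sents):
    """Counts needed by Equations 19 and 23: C(t,w), C(t), C(prev,t), C(prev)."""
    c_tw, c_t = defaultdict(int), defaultdict(int)
    c_tt, c_ctx = defaultdict(int), defaultdict(int)
    for sent in tagged_sents:
        prev = "<s>"
        for w, t in sent:
            c_tw[(t, w)] += 1
            c_t[t] += 1
            c_tt[(prev, t)] += 1
            c_ctx[prev] += 1
            prev = t
    return c_tw, c_t, c_tt, c_ctx

def viterbi(words, c_tw, c_t, c_tt, c_ctx, eps=1e-6):
    """argmax over tag sequences of prod_i P(t_i|t_{i-1}) P(w_i|t_i) (Eq. 19),
    with the tag bigrams smoothed as in Equation 23."""
    tags = list(c_t)
    def p_tt(prev, t):                       # Equation 23
        return (1 - eps) * c_tt[(prev, t)] / max(c_ctx[prev], 1) + eps
    def p_wt(w, t):                          # crude stand-in for P(w|t)
        return c_tw[(t, w)] / c_t[t] or eps
    delta = {t: p_tt("<s>", t) * p_wt(words[0], t) for t in tags}
    backptrs = []
    for w in words[1:]:
        prev_delta, delta, bp = delta, {}, {}
        for t in tags:
            best = max(tags, key=lambda s: prev_delta[s] * p_tt(s, t))
            delta[t] = prev_delta[best] * p_tt(best, t) * p_wt(w, t)
            bp[t] = best
        backptrs.append(bp)
    t = max(delta, key=delta.get)
    path = [t]
    for bp in reversed(backptrs):
        t = bp[t]
        path.append(t)
    return path[::-1]
```

Because the ε floor in p_tt never lets a transition probability reach zero, the tagger still produces a best path for tag bigrams absent from training, which is exactly the situation Equation 23 is meant to handle.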
An Improved Smoothing Model

In Equation 16 we collected statistics on the first occurrences of words we saw in the training data and used these statistics to predict what would happen on the first occurrences of words we saw in the test data. To improve our model we try to estimate the probability that we will next see the tag t^i as the tag for w^j, despite the fact that t^i has never appeared as a tag for w^j before: P(t^i-new | w^j).

P(t^i-new | w^j) ≈ { 0 if C(t^i, w^j) ≥ 1; P(new tag | C(w^j)) P(t^i | new tag) otherwise }   (24)

The first line states that t^i cannot be new if it has already appeared as a tag of w^j. The second line approximates the probability by assuming independence of the "newness" and the fact that it is the t^i tag. It also assumes that the probability of newness is dependent only on how many times we have seen w^j before. Also, rather than collect P(new tag | C(w^j)) for all possible C(w^j), we have put counts into the following equivalence categories based upon how many times the word has been seen: 0, 1, 2, 3-4, 5-7, 8-10, 11-20, 21-30, 31-up. Let N(C(w^j)) denote the frequency class for w^j. Then

P(new tag | C(w^j)) ≈ P(new tag | N(C(w^j)))   (25)

We can now use P(t^i-new | w^j) to smooth P(t^i | w^j) in Equation 16, giving us:

P(t^i | w^j) ≈ λ1(w^j) C(t^i, w^j)/C(w^j) + λ2(w^j) P(t^i-new | w^j) / Σ_k P(t^k-new | w^j)   (26)

However, λ1 and λ2 cannot retain their definitions from Equation 15, as that assumed that any word we had seen would use the direct counts rather than the C_n's. One way to get the new λs is to use extra training data to train the HMM corresponding to Equations 22 and 26 to find a (locally) best set of λ-values, as done in [7]. However, it is possible to provide an argument for what their values ought to be. If we think about an HMM for producing the part of speech given the word, for each word there would be arcs leaving the state corresponding to each possible tag.
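The quantity P(new tag | N(C(w))) of Equation 25 can be gathered in a single pass over the training corpus, recording for each token whether its tag was new for its word at that point, bucketed by how often the word had been seen so far. A minimal sketch; the function names and the token-list input format are mine:

```python
from collections import defaultdict

BUCKETS = [(0, 0), (1, 1), (2, 2), (3, 4), (5, 7), (8, 10),
           (11, 20), (21, 30), (31, float("inf"))]

def freq_class(c):
    """Map a raw word count C(w) to its equivalence class N(C(w))."""
    for lo, hi in BUCKETS:
        if lo <= c <= hi:
            return (lo, hi)

def new_tag_stats(tokens):
    """One pass over (word, tag) tokens: estimate P(new tag | N(C(w)))."""
    seen_tags = defaultdict(set)
    seen_count = defaultdict(int)
    new, total = defaultdict(int), defaultdict(int)
    for w, t in tokens:
        cls = freq_class(seen_count[w])
        total[cls] += 1
        if t not in seen_tags[w]:
            new[cls] += 1
        seen_tags[w].add(t)
        seen_count[w] += 1
    return {cls: new[cls] / total[cls] for cls in total}
```

The replay trick mirrors the text's point: first occurrences inside the training data stand in for first occurrences in the test data, so no held-out corpus is needed to estimate newness.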
(In fact, there would be several arcs for each tag, each going to a different next state, but we ignore this.) This would appear as shown on the upper portion of Figure 1.

Figure 1: A Markov model for P(t^i | w^j)

We assume that for w^j we have seen v of the tags, t^1 to t^v, and the rest, t^{v+1} to t^T, have not been observed for w^j. The idea of Equation 26 is that the first term estimates the probabilities associated with the arcs t^1 to t^v, and the second term estimates the rest. To make the HMM look more like this equation we can transform it into the form on the lower portion of Figure 1, where we have introduced two ε (no-output) transitions with probabilities λ1(w^j) and λ2(w^j), respectively. From this HMM it is easier to see the significance of these two probabilities: λ1(w^j) is the probability that the next occurrence of w^j has as an associated tag a tag which has occurred with w^j before, and λ2(w^j) is the probability that it is a new tag for w^j. This latter term is the sum in the denominator of the second term of Equation 26.

Morphology

The second improvement to our smoothing function uses word-ending information to help determine correct tags. For example, if we have never seen the word "rakishly," then knowledge that "ly" typically ends an adverb will improve our accuracy on this word; similarly for "randomizing." We do not want to tackle the problem of determining the morphology of English words in this paper. Rather, we assume that we have available a program which assigns roots and suffixes (we do not deal with any other kind of morphological feature) to our corpus, and does so as well for those words in our test corpus which have appeared in the training corpus. For the words in the test corpus which have not appeared in the training corpus, the morphological analyzer produces all possible analyses for that word, and part of the problem we face is deciding between them.
We should note that the morphological analyzer we had was quite crude and prone to mistakes; a better one would no doubt improve our results.

To accommodate our morphological analysis we now consider probabilities for different root-suffix combinations. Following our earlier conventions, we have a set of roots {r^1, ..., r^p} and a set of suffixes {s^1, ..., s^o}. r_{1,n} and s_{1,n} are sequences of n roots and suffixes, with r_i and s_i being the ith one of each.

T(w_{1,n}) = argmax_{t_{1,n}} Σ_{r_{1,n}, s_{1,n}} P(t_{1,n}, r_{1,n}, s_{1,n} | w_{1,n})   (27)
           = argmax_{t_{1,n}} Σ_{r_{1,n}, s_{1,n}} P(t_{1,n}, r_{1,n}, s_{1,n}, w_{1,n}) / P(w_{1,n})   (28)
           = argmax_{t_{1,n}} Σ_{r_{1,n}, s_{1,n}} P(t_{1,n}, r_{1,n}, s_{1,n}, w_{1,n})   (29)
           = argmax_{t_{1,n}} Σ_{r_{1,n}, s_{1,n}} P(t_{1,n}, r_{1,n}, s_{1,n})   (30)
           = argmax_{t_{1,n}} Σ_{r_{1,n}, s_{1,n}} ∏_{i=1}^{n} P(t_i | t_{1,i-1}, r_{1,i-1}, s_{1,i-1}) P(r_i, s_i | t_{1,i}, r_{1,i-1}, s_{1,i-1})   (31)

We delete w_{1,n} in going from Equation 29 to 30 because roots plus suffixes determine the words. However, this means that in all of the equations after that point there is an implicit assumption that we are only considering roots and suffixes which combine to form the desired word.

Next we make some Markov assumptions:

P(t_i | t_{1,i-1}, r_{1,i-1}, s_{1,i-1}) = P(t_i | t_{i-1})   (32)
P(r_i, s_i | t_{1,i}, r_{1,i-1}, s_{1,i-1}) = P(r_i, s_i | t_i)   (33)
P(r_i, s_i | t_i) = P(r_i | t_i) P(s_i | t_i)   (34)

The first two are just the ones we made earlier, but now with the roots and suffixes broken out. Equation 34 is new. It can be interpreted as saying that knowing the root does not help in determining the suffix if we know the part of speech of the word. This is probably a reasonable assumption, particularly compared to the others we have made.

With these assumptions we can manipulate Equation 31 as follows:

T(w_{1,n}) = argmax_{t_{1,n}} Σ_{r_{1,n}, s_{1,n}} ∏_{i=1}^{n} P(t_i | t_{i-1}) P(s_i | t_i) P(r_i | t_i)   (35)
           = argmax_{t_{1,n}} Σ_{r_{1,n}, s_{1,n}} ∏_{i=1}^{n} P(t_i | t_{i-1}) P(s_i | t_i) P(t_i | r_i) P(r_i) / P(t_i)   (36)

Equation 36 is a version of Equation 19, but adapted to morphological analysis. It differs from the earlier equation in three ways. First, it includes a new term, P(s_i | t_i), for which we now need to gather statistics. However, since the numbers of tags and suffixes are small, this should not present any difficult sparse-data problems. Second, rather than needing to smooth P(t^i | w^j) as in Equation 19, we now need to smooth P(t^i | r^j). However, it seems reasonable to continue to use Equation 26, with r^j substituted for w^j. Finally, there is the term P(r_i), and this deserves some discussion. There was no term corresponding to P(r_i) in Equation 22, as the term which would have corresponded to it was P(w_i), and that was removed as it was the same for all tags. However, in Equation 36 we are summing over roots, so P(r_i) is not a constant. In particular, assuming that we would want our program to interpret some new word, e.g., "rakishly," as "rakish" + "ly" (or even better, "rake" + "ish" + "ly," if our morphological analyzer could handle it), it would be the P(r_i) term which would encourage such a preference in Equation 36. It would do so because the probability of the shorter root would be much higher than that of the longer ones.

To model P(r_i) we have adopted a spelling model along the lines of the one used for the spelling of unknown words in [3]. This combines a Poisson distribution over word lengths, with a maximum at 5, times a distribution over letters, for which we adopted a unigram model. Here |r^j| is the length of r^j and l_k is the kth letter of r^j:

P(r^j) ≈ (5^{|r^j|} e^{-5} / |r^j|!) ∏_{k=1}^{|r^j|} P(l_k)   (37)

Results

The results of our experiments are summarized in Figure 2. We trained our models on the Brown corpus with every tenth sentence removed (starting with sentence 1) and tested on these removed sentences. There were 114203 words in the test corpus. For the more basic methods we did experiments on both the full Brown corpus tag set (471 different tags) and a reduced set (186 tags). (Most of the tags in the full set are "complex" tags in that they consist of a basic tag plus one or more tag modifiers. For those familiar with the Brown Corpus, to get the reduced set we stripped off the modifiers "FW" (foreign word), "TL" (title), "NC" (cited word), and "HL" (headline).) For the more complex techniques we used only the reduced set, since the basic dynamic-programming algorithm for finding the best tags runs in time O(r^2), where r is the number of different tags.

Equations         % Correct (471 tags)   % Correct (186 tags)
12 and 16         90.25                  91.51
20, 16, and 23    94.09                  95.04
19, 16, and 23    95.15                  95.97
19, 26, and 23                           96.02
36, 26, and 23                           96.45

Figure 2: Results obtained from the various models

Using normal tests for statistical significance, we find that for the interesting cases of Figure 2 a difference of .1% is significant at the 95% level of confidence. Certain results are clear. One can get 90% of the tags correct by just picking the most likely tag for each word. Improving the model to include bigrams of tags increases the accuracy to the 95% level, with the more theoretically pure P(w_i | t_i) performing better than P(t_i | w_i), contrary to the results in [1]. Furthermore, the improvement is much larger than the .1% required for the 95% significance level. Improvement beyond this level is possible, but it gets much harder. In particular, the improvement from the more sophisticated smoothing equation, Equation 26, is minimal, only .05%. This is not statistically significant. However, there is reason to believe that this understates the usefulness of the equation; in particular, we believe that the very crude Equation 23 is causing extra errors when combined with the improved smoothing.
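Equation 37's preference for shorter roots can be checked directly: with any fixed letter distribution, each extra letter multiplies the estimate by a factor smaller than one. A sketch; the uniform letter unigrams in the usage note are my toy assumption, not an estimate from a corpus:

```python
import math

def root_prob(root, letter_prob, mean_len=5):
    """Equation 37: a Poisson over root length (mean 5) times letter unigrams."""
    n = len(root)
    poisson = (mean_len ** n) * math.exp(-mean_len) / math.factorial(n)
    p = poisson
    for ch in root:
        p *= letter_prob.get(ch, 0.0)
    return p
```

With uniform letter probabilities of 1/26, going from length n to n+1 multiplies the result by 5 / (26 (n+1)) < 1, so root_prob("rake", u) exceeds root_prob("rakish", u): exactly the pressure toward shorter roots described above.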
Equation 23, combined with the crudeness of our morphology component, also limited the improvement shown in the last line of Figure 2. Also, we should really treat endings as tag "transformers," something Equation 36 does not do. The combination of these three debilitating factors caused frequent errors in known words, and thus the figure of 96.45% was obtained when we treated known words as morphologically primitive. This improvement is statistically significant. We believe that fixing these problems would add another tenth of a percent or two, but better performance beyond this will require more lexical information, such as that used in [10].

However, the point of this paper was to clarify the basic equations behind tagging models, rather than to improve the models themselves. We hope this paper encourages tag modelers to think about the mathematics which underlies their models and to present their models in terms of the equations.

References

1. BOGGESS, L., AGARWAL, R. AND DAVIS, R. Disambiguation of prepositional phrases in automatically labelled technical text. In Proceedings of the Ninth National Conference on Artificial Intelligence. 1991, 155-159.
2. BRILL, E. A simple rule-based part of speech tagger. In Proceedings of the Third Conference on Applied Natural Language Processing. 1992.
3. BROWN, P. F., DELLA PIETRA, S. A., DELLA PIETRA, V. J., LAI, J. C. AND MERCER, R. L. An estimate of an upper bound for the entropy of English. In IBM Technical Report. 1991.
4. CHURCH, K. W. A stochastic parts program and noun phrase parser for unrestricted text. In Second Conference on Applied Natural Language Processing. 1988, 136-143.
5. DEROSE, S. J. Grammatical category disambiguation by statistical optimization. Computational Linguistics 14 (1988), 31-39.
6. FRANCIS, W. N. AND KUCERA, H. Frequency Analysis of English Usage: Lexicon and Grammar. Houghton Mifflin, Boston, 1982.
7. JELINEK, F. Markov source modeling of text generation. IBM T.J.
Watson Research Center, Continuous Speech Recognition Group.
8. KUPIEC, J. AND MAXWELL, J. Training stochastic grammars from unlabelled text corpora. In Workshop Notes, AAAI-92 Workshop on Statistically-Based NLP Techniques. 1992, 14-19.
9. DEMARCKEN, C. G. Parsing the LOB corpus. In Proceedings of the 1990 Conference of the Association for Computational Linguistics. 1990, 243-259.
10. ZERNIK, U. Shipping departments vs. shipping pacemakers: using thematic analysis to improve tagging accuracy. In Proceedings of the Tenth National Conference on Artificial Intelligence. 1992, 335-342.
Dekai Wu*
Department of Computer Science
The Hong Kong University of Science & Technology
Clear Water Bay, Hong Kong
dekai@cs.ust.hk

Abstract

We analyze the difficulties in applying Bayesian belief networks to language interpretation domains, which typically involve many unification hypotheses that posit variable bindings. As an alternative, we observe that the structure of the underlying hypothesis space permits an approximate encoding of the joint distribution based on marginal rather than conditional probabilities. This suggests an implicit binding approach that circumvents the problems with explicit unification hypotheses, while still allowing hypotheses with alternative unifications to interact probabilistically. The proposed method accepts arbitrary subsets of hypotheses and marginal probability constraints, is robust, and is readily incorporated into standard unification-based and frame-based models.

1 Introduction

The application of Bayesian belief networks (Pearl 1988) to natural language disambiguation problems has recently generated some interest (Goldman & Charniak 1990; Charniak & Goldman 1988, 1989; Burger & Connolly 1992). There is a natural appeal to using the mathematically consistent probability calculus to combine quantitative degrees of evidence for alternative interpretations, and even to help resolve parsing decisions.

However, to formulate disambiguation problems using belief networks requires an unusual form of hypothesis nodes. Natural language interpretation models (as well as many others) employ the unification operation to combine schemata; this is realized alternatively as slot-filling, role-binding, or attribute co-indexing in feature structures.

*Preparation of this paper was partially supported by the Natural Sciences and Engineering Research Council of Canada while the author was a postdoctoral fellow at the University of Toronto.
Much of this research was done at the Computer Science Division, University of California at Berkeley, and was sponsored in part by the Defense Advanced Research Projects Agency (DOD), monitored by the Space and Naval Warfare Systems Command under N00039-88-C-0292, the Office of Naval Research under N00014-89-J-3205, the Sloan Foundation under grant 86-10-3, and the National Science Foundation under CDA-8722788.

Specifically, in this paper we are concerned with the class of problems where the input context introduces a number of possible conceptual entities but the relationships between them must be inferred. This phenomenon is ubiquitous in language, for example in prepositional and adverbial attachment, adjectival modification, and nominal compounds. The process of resolving such an ambiguity corresponds to unifying two variables (or role bindings or slot fillers).

In extending the models to Bayesian belief networks, unification operations are translated to hypothesis nodes, for example (patient g3)=r2 in figure 1, that sit alongside "regular" hypotheses concerning the features of various conceptual entities. The incorporation of binding hypotheses introduces a modelling difficulty in the context of belief networks. The strength of the unification-based paradigm rests precisely in the relatively symmetric role binding, which is subject to no constraints other than those explicitly given by the linguist or knowledge engineer. However, we argue in section 2 that this same characteristic directly resists models based on the notion of conditional independence, in particular belief networks. In section 3 we re-analyze the structure of the underlying hypothesis space and its joint distribution. This formulation leads to an alternative approach to approximation, proposed in section 4. A natural language application dealing with nominal compound interpretation is outlined in section 5.
2 Unification Resists Conditional Independence

In conditional independence networks, the values of some hypotheses are permitted to influence others, but the paths of influence are restricted by the graph, thus providing computational leverage. In the extreme, a completely connected graph offers no computational shortcuts; instead, to improve performance a distribution should be graphed with the lowest possible connectivity. In general, conditional independence networks have been applied in highly structured domains where low-connectivity approximations can be accurate. The types of domains that invite unification-oriented representations, however, resist low-connectivity approximations, because binding hypotheses have a high inherent degree of interdependence.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Figure 1: Example belief net from Goldman & Charniak (1991), with nodes such as (get w3), (object-of w2 w3), and (rope w2).

Typically in such a domain, there will be some number n of "free" variables a, b, c, ... that are potentially unifiable with others. A unification hypothesis is of the form a=b, and there are m = n(n-1)/2 such hypotheses. A priori knowledge, like selectional restrictions, may help rule out some of these hypotheses, but many bindings will remain possible, and we assume here that all unification hypotheses have nonzero probability. A joint hypothesis is an assignment of truth values to each of the m unification hypotheses.¹ The number of legal joint hypotheses is less than 2^m because of the dependence between hypotheses. For example, if a=c and b=c are true, then a=b must also be true. In fact the number of legal joint hypotheses is equal to the number of possible partitionings of a set of n elements. Figure 2 shows the legal joint hypotheses when n = 4.
Hypotheses   Legal assignments
a=b          000000100100111
a=c          000001001001011
a=d          000010010001101
b=c          000100010010011
b=d          001000001010101
c=d          010000000111001

Figure 2: The legal joint hypotheses for n = 4. Each column shows a permissible truth value assignment.

¹We ignore all other types of hypotheses in this section's analysis.

Now consider the dependence relationships between unification hypotheses. The probabilities of a=c and b=c are not independent, since they may be affected by the value of a=b; if a≠b then all events where a=c and b=c are ruled out. However, it is possible for a=c and b=c to be conditionally independent given a=b, which can be modelled by

a=c --- a=b --- b=c

By symmetry, all three hypotheses must be connected. This extends to larger n, so if n = 4, then if a=d and b=d are conditionally independent, that too must be conditioned on a=b:

a=c --- a=b --- b=c
a=d --- a=b --- b=d

In general, any pair of unification hypotheses that involve a common variable must be connected. Thus for n variables, the total number of links is

L = n (n-1 choose 2) = n(n-1)(n-2)/2 = m(n-2)

which is O(n^3) or O(m^{3/2}). This is better than a completely connected network, which would be O(n^4) or O(m^2), but there are many loops nonetheless, so evaluation will be expensive. By symmetry, each of the m hypotheses is of degree 2(n-2), and any clustering of variables will be subject to this bound.

We conclude that in domains where unification hypotheses are relatively unconstrained, the connectivity of conditional independence networks is undesirably high. This means that it is difficult to find efficient conditional probability representations that accurately approximate the desired joint probability distributions. Therefore, in the next section we reconsider the event space that underlies the joint distribution.
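The claim that legal joint hypotheses correspond to set partitions is easy to verify mechanically: generate all partitions of {a, b, c, d}, turn each into a truth assignment over the six a=b hypotheses, and count. This sketch (the function names are mine) recovers the 15 columns of Figure 2:

```python
from itertools import combinations

def set_partitions(items):
    """Yield all partitions of a list into blocks (Bell-number many)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def legal_joint_hypotheses(variables):
    """Truth assignments over all a=b hypotheses that are mutually consistent:
    two variables are unified exactly when they share a partition block."""
    pairs = list(combinations(variables, 2))
    assignments = set()
    for part in set_partitions(list(variables)):
        block = {v: i for i, b in enumerate(part) for v in b}
        assignments.add(tuple(block[x] == block[y] for x, y in pairs))
    return pairs, assignments
```

For "abcd" this yields 6 hypotheses and 15 legal assignments, matching Figure 2, against 2^6 = 64 unconstrained truth assignments; the gap is precisely the transitivity dependence discussed in the text.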
3 The Structure of the Joint Hypothesis Space

Since conditional probabilities do not lend themselves well to representations involving unification hypotheses, we now examine the structure of the joint hypothesis space. Before, we considered the unification hypotheses in explicit form because we sought conditional independence relationships between them. Having abandoned that objective, here we instead consider the feature structures (or frames) that result from assigning truth values to the unification hypotheses. In other words, the unification hypotheses are left implicit, reflected by co-indexed variables (roles) in feature structures.

Figure 3 depicts the qualitative structure of the joint hypothesis space, which forms a semilattice hierarchy. We now take into consideration not only the implicit unification hypotheses, but also implicit hypotheses that specialize the features on the variables; for example, a variable of type a may be specialized to the subtype b. Each box denotes the feature structure that results from some combination of truth values over a subset of unification hypotheses and specialization hypotheses. Each small shaded box denotes a joint hypothesis specifying the truth values over all unification and specialization hypotheses. Thus the distinction between the shaded and non-shaded boxes is a kind of type-token distinction, where the shaded boxes are tokens. Notice furthermore that role specialization and unification are intertwined: a role of type z results when a type x role and a type y role are conjoined by unifying their fillers.

Figure 3: Simplified partial abstraction lattice for feature structures. The type b is a subtype of a; the role type z is a composite role equivalent to the conjunction xy. A dashed line indicates that the variables are "free" to be unified.

In principle, the joint distribution would be completely specified if we could enumerate the probabilities over the (shaded) tokens.
We saw in the previous section that conditional probabilities are not well suited for approximately summarizing distributions over this space, because there is no way to discard large numbers of binding dependencies in the general case. However, there is another straightforward way to store distributional information, namely to record marginal probabilities over the abstract (non-shaded) types, i.e., the sums of probabilities over all descendant leaves. To summarize the distribution approximately, a selected subset of the marginal probabilities can be stored. Theoretically, a set of marginal probabilities induces an equivalent set of conditional probabilities over the same lattice, though it may be an unreasonably large set. If there are any independence relationships to be exploited, equivalently a subset of marginal probabilities can be omitted and the maximum-entropy principle (Jaynes 1979) can be applied to reconstruct the joint distribution.

The advantages of this formulation are: (1) fewer parameters are required, since it does not encode redundant distributional information in multiple dependent conditional probabilities; (2) consistency is easier to maintain, because the interdependent unification hypotheses are not explicit; (3) it facilitates an alternative structural approximation method for computing a conditional distribution of interest, as discussed in the next section.

4 An Approximation Based on Marginal Constraints

By itself, the marginal probability formulation can reduce probability storage requirements but does not improve computation cost. Computing maximum-entropy distributions subject to large numbers of marginal constraints is infeasible in the general case. However, in many applications, including language interpretation, the input cues are sufficient to eliminate all but a relatively small number of hypotheses. Only the distribution over these hypotheses is of interest. Moreover, the input cues may suffice to preselect a subset of relevant marginal probability constraints.

The proposed method takes advantage of these factors by dynamically creating a secondary marginal probability formulation of the same form as that above, but with far fewer constraints and hypotheses, thereby rendering the entropy maximization feasible. In the secondary formulation, only details within the desired hypothesis and constraint space are preserved. Outside this space, the minimum possible number of "dummy" events are substituted for multiple hypotheses that are not of interest. It turns out that one dummy event is required for each marginal constraint.

Let E be the set of token feature structures and G the set of type feature structures, and let F = G ∪ E. Suppose H = {h_1, ..., h_i, ..., h_H} ⊆ E are the candidate hypotheses, and suppose M = {m_1, ..., m_j, ..., m_M} ⊆ G are the abstract class types that have been preselected as being relevant, with associated marginal probabilities P_mj = P(m_j). Denote by ⊑ the partial ordering induced on H ∪ M by the subsumption semilattice on f-structure space. Then we define the secondary formulation as follows. Let the set of dummy events be D = {d_1, ..., d_j, ..., d_M}, one for each marginal constraint. Define F̂ = H ∪ M ∪ D to be the approximate event space, and define Ĥ = H ∪ D to be the approximate hypothesis space. We construct the approximate ordering relation ⊑̂ over F̂ so that a ⊑̂ c whenever a ⊑ c, or c = m_j and a = d_j; that is, each dummy event d_j falls under its marginal class m_j.

Let P̂_mj be the marginal probability constraints on F̂. We use P_mj as estimators for P̂_mj. (Of course, since the event space has been distorted by the structural dummy-event approximation, actually P̂_mj ≠ P_mj.)

To estimate the distribution over the hypotheses of interest, along with the dummy events, we compute P̂_hi and P̂_dj such that

Σ_{q ∈ Ĥ} P̂_q = 1   (1)

while maximizing the entropy

- Σ_{q ∈ Ĥ} P̂_q log P̂_q   (2)

subject to the marginal constraints

Σ_{q ∈ Ĥ: m_j ⊑̂ q} P̂_q = P̂_mj   (3)

Technical details of the solution are given in Appendix A.
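The constrained maximization of (1)-(3) can be sketched with plain iterative proportional fitting, writing each event's probability as a product of one weight per marginal class that subsumes it (the product form derived in Appendix A). Everything here is a toy: the function name, the set-of-labels event encoding, and the update schedule are mine, not the paper's modified Cheeseman procedure.

```python
def max_entropy(events, constraints, iters=200):
    """Maximize entropy over `events` subject to marginal constraints.
    Each event is the set of constraint labels subsuming it; `constraints`
    maps labels to target marginals. P(q) = w_norm * prod of w[j] for j in q."""
    w = {j: 1.0 for j in constraints}
    w_norm = 1.0 / len(events)               # enforces the sum-to-one constraint (1)
    def p(q):
        prob = w_norm
        for j in q:
            prob *= w[j]
        return prob
    for _ in range(iters):
        for j, target in constraints.items():
            mass = sum(p(q) for q in events if j in q)   # left side of (3)
            if mass > 0:
                w[j] *= target / mass
        w_norm /= sum(p(q) for q in events)              # renormalize
    return [p(q) for q in events]
```

A dummy event for a marginal class would simply be one more member of `events` carrying that class's label, absorbing whatever probability mass the explicit hypotheses do not.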
Note that unlike methods for finding maximum a posteriori assignments (Charniak & Santos Jr. 1992), which return the probability of the most probable joint assignment, the objective here is to evaluate the conditional distribution over a freely chosen set of joint hypothesis assignments and marginal constraints.

One of the strengths of AME is robustness when arbitrarily chosen marginals are discarded. Arithmetic inconsistencies do not arise, because the dummy events absorb any discrepancies arising from the approximation. For example, if C through F are discarded from figure 4(a), then P(A) + P(B) < P(G), but the remaining probability weight is absorbed by G's dummy event in (b). The ability to handle arbitrary subpartitions of the knowledge base is important in practical applications, where many different heuristics may be used to preselect the constraints dynamically. In contrast, when there are dependent unification hypotheses in a belief network, discarding conditional probability matrices can easily lead to networks that have no solution.

5 A Nominal Compound Interpretation Example

In this section we summarize an example from the language interpretation domain that drove the development of AME, a more detailed discussion of which is found in the companion paper (Wu 1993a). Space does not permit a description of the semantics and lexical models; see Wu (1992, 1993b). Although our modelling objectives arise solely from disambiguation problems, we believe the foregoing discussion applies nonetheless to other structured domains involving highly interdependent variable bindings with uncertainty.

The example task here is to interpret the nominal compound coast road,² which in null context most likely means a road in a coastal area but, particularly in other contexts, can also mean other things, including a road leading to a coastal area, a coasting road amenable to coasting, and Highway 1.
As is typical with novel nominal compounds, interpretation requires a wide range of knowledge. Figure 5 shows the fairly standard feature-structure notation we use to encode such knowledge; the marginal probabilities in (a) and (b) are the primary representational extension.

During interpretation, a hypothesis network as in figure 6 is dynamically constructed. Each node corresponds to a marginal constraint from the knowledge base, of the form of figure 5(a)-(b). Ignoring the boldface marginals for now, the probabilities P(coast and road) and P(coast and coastal road) indicate that when thinking about roads, it is the subcategory of roads running along the coast that is frequently thought of. Similarly, P(coastal road) and P(Highway 1) model a non-West-Coast resident who does not frequently specialize coastal roads to Highway 1. Together, P(L:coast), P(C:coast:seacoast), and P(C:coast:coasting accomplishment) indicate that the noun coast more frequently designates a seacoast rather than an unpowered movement. Finally, P(C:NN:containment) indicates that the noun-noun construction signifies containment twice as often as P(C:NN:linear order locative).

Figure 6 summarizes the results of the baseline run and four variants, from a C implementation of AME. In the base run labelled "0:", the AME estimate of the conditional distribution assigns the highest probabilities to road in coastal area and road along coastline (features distinguishing these two hypotheses have been omitted). The next run, "1:", demonstrates what would happen if "coast" more often signified a coasting accomplishment rather than a seacoast: the coasting road hypothesis dominates instead. In "2:" the noun-noun construction is assumed to signify linear order locatives more frequently than containment. The marginals in "3:" effectively reduce the conditional probability of thinking of roads along the seacoast, given that one is thinking of roads in the context of seacoasts.
The West Coast resident is modelled in "4:" by an increase in the marginal P(Highway 1).

Figure 4: Robust handling of discarded marginal constraints. (a) Original KB fragment. (b) Dummy event (black) absorbing discrepancy caused by discarding marginals.

²From the Brown corpus (Kučera & Francis 1967). Our approach to nominal compounds is discussed in Wu (1990), which proposes the use of probability to address long-standing problems from the linguistics literature (e.g., Lees 1963; Downing 1977; Levi 1978; Warren 1978; McDonald 1982).

Figure 5: Feature-structures for (a) the noun coast signifying a seacoast, (b) a noun-noun construction signifying a containment schema, (c) an input form, and (d) a full interpretation hypothesis (the floor brackets indicate a token as opposed to a type).

0: 0.046524   0.37215   0.37215   0.00025822   0.074419   0.074419   0.060089
1: 0.015757   0.12605   0.12605   0.00061625   0.02521    0.02521    0.6811
2: 0.38339    0.15336   0.15336   0.0010636    0.030672   0.030672   0.24748
3: 0.40849    0.20422   0.20422   0.00056666   0.025527   0.025527   0.13145
4: 0.010205   0.081579  0.081579  0.00028371   0.40657    0.40657    0.01321

Figure 6: Estimated conditional distributions for five runs on coast road with varying marginal constraints. Dummy events have been omitted. Column headers are partially lost; the surviving fragments place road in coastal area and road along coastline among the second and third columns.
6 Conclusion

We have discussed the difficulties encountered in applying Bayesian belief networks to domains like language interpretation, which involve unification hypotheses over "free" variables. We observed that the structure of the underlying joint hypothesis space permits an alternative approximate encoding based on marginal rather than conditional probabilities. This implicit binding formulation facilitates a structural approximation method. For many applications, language interpretation in particular, the structural approximation is adequate and flexibility in handling unification hypotheses is quite important, whereas exact probability distribution computation is unnecessary. The method is robust and incorporates readily into unification- or frame-based models.

Acknowledgements

I am indebted to Robert Wilensky, Jerome Feldman, and the members of the BAIR and L0 groups for many valuable discussions, as well as Graeme Hirst, Geoff Hinton, and their respective groups.

Appendix: Details of the Entropy Maximization

To solve the constrained maximization problem in equations (1)-(3), we define a new energy function with Lagrange multipliers, J, to be maximized:

J = -\sum_{q \in \mathcal{H}} \bar{P}_q \log \bar{P}_q + \sum_{j=1}^{M} \lambda_j \Big( \bar{P}_{m_j} - \sum_{q:\, q \in \mathcal{H},\; m_j \subseteq q} \bar{P}_q \Big)

This method is a modified version of Cheeseman's (1987) method, which applied only to feature vectors. Observe that setting the gradients to zero gives the desired conditions:

\nabla_{\lambda} J = 0 \;\Rightarrow\; \partial J / \partial \lambda_j = 0,\; 1 \le j \le M \;\Rightarrow\; expresses all marginal constraints
\nabla_{P} J = 0 \;\Rightarrow\; \partial J / \partial \bar{P}_q = 0,\; \forall q \in \mathcal{H} \;\Rightarrow\; maximizes entropy

Since the partials with respect to \bar{P} are

\partial J / \partial \bar{P}_q = -1 - \log \bar{P}_q - \sum_{j:\, m_j \subseteq q} \lambda_j

then at \nabla_P J = 0 (absorbing the constant into the multipliers),

\log \bar{P}_q = -\sum_{j:\, m_j \subseteq q} \lambda_j

Defining w_j \overset{\mathrm{def}}{=} e^{-\lambda_j},

\bar{P}_q = \prod_{j:\, m_j \subseteq q} w_j

the original marginal constraints become

\bar{P}_{m_j} = \sum_{q:\, m_j \subseteq q} \prod_{k:\, m_k \subseteq q} w_k

which can be rewritten

\bar{P}_{m_j} - \sum_{q:\, m_j \subseteq q} \prod_{k:\, m_k \subseteq q} w_k = 0

The last expression is solved using a numerical algorithm of the following form:

1. Start with a constraint system X ← { } and an estimated w vector ⟨ ⟩ of length zero.

2. For each constraint equation,

(a) Add the equation to X and its corresponding w_i term to ⟨w_1, ..., w_{i-1}, w_i⟩.

(b) Repeat until ⟨w_1, ..., w_i⟩ settles, i.e., the change between iterations falls below some threshold:

1. For each equation in X constraining \bar{P}_{m_j}, solve for the corresponding w_j assuming all other w values have their current estimated values.

References

BURGER, JOHN D. & DENNIS CONNOLLY. 1992. Probabilistic resolution of anaphoric reference. In AAAI Fall Symposium on Probabilistic NLP, Cambridge, MA. Proceedings to appear as AAAI technical report.

CHARNIAK, EUGENE & ROBERT GOLDMAN. 1988. A logic for semantic interpretation. In Proceedings of the 26th Annual Conference of the Association for Computational Linguistics, 87-94.

CHARNIAK, EUGENE & ROBERT GOLDMAN. 1989. A semantics for probabilistic quantifier-free first-order languages, with particular application to story understanding. In Proceedings of IJCAI-89, Eleventh International Joint Conference on Artificial Intelligence, 1074-1079.

CHARNIAK, EUGENE & EUGENE SANTOS JR. 1992. Dynamic MAP calculations for abduction. In Proceedings of AAAI-92, Tenth National Conference on Artificial Intelligence, 552-557, San Jose, CA.

CHEESEMAN, PETER. 1987. A method of computing maximum entropy probability values for expert systems. In Maximum-entropy and Bayesian spectral analysis and estimation problems, ed. by Ray C. Smith & Gary J. Erickson, 229-240. Dordrecht, Holland: D. Reidel. Revised proceedings of the Third Maximum Entropy Workshop, Laramie, WY, 1983.

DOWNING, PAMELA. 1977. On the creation and use of English compound nouns. Language, 53(4):810-842.

GOLDMAN, ROBERT P. & EUGENE CHARNIAK. 1990. A probabilistic approach to text understanding. Technical Report CS-90-13, Brown Univ., Providence, RI.

JAYNES, E. T. 1979. Where do we stand on maximum entropy. In The maximum entropy formalism, ed. by R. D. Levine & M. Tribus. Cambridge, MA: MIT Press.

KUČERA, HENRY & W. NELSON FRANCIS. 1967.
Computational analysis of present-day American English. Providence, RI: Brown University Press.

LEES, ROBERT B. 1963. The grammar of English nominalizations. The Hague: Mouton.

LEVI, JUDITH N. 1978. The syntax and semantics of complex nominals. New York: Academic Press.

MCDONALD, DAVID B. 1982. Understanding noun compounds. Technical Report CMU-CS-82-102, Carnegie-Mellon Univ., Dept. of Comp. Sci., Pittsburgh, PA.

PEARL, JUDEA. 1988. Probabilistic reasoning in intelligent systems: Networks of plausible inference. San Mateo, CA: Morgan Kaufmann.

WARREN, BEATRICE. 1978. Semantic patterns of noun-noun compounds. Gothenburg, Sweden: Acta Universitatis Gothoburgensis.

WU, DEKAI. 1990. Probabilistic unification-based integration of syntactic and semantic preferences for nominal compounds. In Proceedings of the Thirteenth International Conference on Computational Linguistics, volume 2, 413-418, Helsinki.

WU, DEKAI. 1992. Automatic inference: A probabilistic basis for natural language interpretation. University of California at Berkeley dissertation. Available as UC Berkeley Computer Science Division Technical Report UCB/CSD 92/692.

WU, DEKAI. 1993a. Approximating maximum-entropy ratings for evidential parsing and semantic interpretation. In Proceedings of IJCAI-93, Thirteenth International Joint Conference on Artificial Intelligence, Chambéry, France. To appear.

WU, DEKAI. 1993b. An image-schematic system of thematic roles. In Proceedings of PACLING-93, First Conference of the Pacific Association for Computational Linguistics, Vancouver. To appear.
Claire Cardie
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
E-mail: cardie@cs.umass.edu

Abstract

This paper describes a case-based approach to knowledge acquisition for natural language systems that simultaneously learns part of speech, word sense, and concept activation knowledge for all open class words in a corpus. The parser begins with a lexicon of function words and creates a case base of context-sensitive word definitions during a human-supervised training phase. Then, given an unknown word and the context in which it occurs, the parser retrieves definitions from the case base to infer the word's syntactic and semantic features. By encoding context as part of a definition, the meaning of a word can change dynamically in response to surrounding phrases without the need for explicit lexical disambiguation heuristics. Moreover, the approach acquires all three classes of knowledge using the same case representation and requires relatively little training and no hand-coded knowledge acquisition heuristics. We evaluate it in experiments that explore two of many practical applications of the technique and conclude that the case-based method provides a promising approach to automated dictionary construction and knowledge acquisition for sentence analysis in limited domains. In addition, we present a novel case retrieval algorithm that uses decision trees to improve the performance of a k-nearest neighbor similarity metric.

Introduction

In recent years, there have been an increasing number of natural language systems that successfully perform domain-specific text summarization (see [MUC-3 Proceedings 1991; MUC-4 Proceedings 1992]). However, many of the best-performing systems rely on knowledge-based parsing techniques that are extremely tedious and time-consuming to port to new domains.
We estimate, for example, that the domain-dependent knowledge engineering effort for the UMass/MUC-3¹ system spanned 1500 person-hours [Lehnert et al. 1991b]. Although the exact type and form of the domain-specific knowledge required by a parser varies from system to system, all knowledge-based language processing systems rely on at least the following information: for each word encountered in a text, the system must (1) know which parts of speech, word senses, and concepts are plausible in the given domain and (2) determine which part of speech, word sense, and concepts apply, given the particular context in which the word occurs.

¹The domain for the MUC-3 and MUC-4 performance evaluations was Latin American terrorism. The general task for each system was to summarize all terrorist events mentioned in a set of 100 previously unseen texts.

Consider, for example, the following sentences from the MUC domain of Latin American terrorism:

1. The terrorists killed General Bustillo.
2. The general concern was that children might be killed.
3. In general, terrorist activity is confined to the cities.

It is clear that in this domain the word "general" has at least two plausible parts of speech (noun and adjective) and two word senses (a military officer and a universal entity). A sentence analyzer has to know that these options exist and then choose the noun/military officer form of "general" for sentence 1, the adjective/universal entity form in 2, and the noun/universal entity form in 3.

In addition to part of speech and word sense ambiguity, these sentences also illustrate a form of concept ambiguity with respect to the domain of terrorism. Sentence 1, for example, clearly describes a terrorist act - the word "killed" implies that a murder took place and the perpetrators of the crime were "terrorists." This is not the case for sentence 2 - the verb "killed" appears, but no murder has yet occurred and there is no implication of terrorist activity.
This distinction is important in the MUC domain where the goal is to extract from texts only information concerning 8 classes of terrorist events including murders, bombings, attacks, and kidnappings. All other information should be effectively ignored. To be successful in this selective concept extraction task [Lehnert et al. 1991a], a sentence analyzer not only needs access to word-concept pairings (e.g., the word "killed" is linked to the "terrorist murder" concept), but must also accurately distinguish legitimate concept activation contexts from bogus ones (e.g., the phrase "terrorists killed" implies that a "terrorist murder" occurred, but "children might be killed" probably doesn't).

This paper describes a case-based method for knowledge acquisition that begins with a lexicon of only closed class words and learns the part of speech, general and specific word senses, and concept activation information for all open class words in a corpus.² We first create a case base of context-sensitive word definitions during a human-supervised training phase. After training, given an open class word and the context in which it occurs, the parser retrieves the most similar cases from the case base and then uses them to infer syntactic and semantic information for the open class word. No explicit lexical disambiguation heuristics are used, but because context is encoded as part of each definition, the same word may be assigned a different part of speech, word sense, or concept activation in different contexts.

²Closed class words are function words like prepositions, auxiliaries, and connectives, whose meanings vary little from one domain to another. All other words (e.g., nouns, verbs, adjectives) are open class words.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

The paper also describes the results of two experiments that explore different, but related applications of this knowledge acquisition technique. In the first application, we assume the existence of a nearly complete domain-specific dictionary and use the case base to infer the features of unknown words. In the second, more ambitious application, we assume only a small dictionary of function words and use the case base to determine the definition of all open class words. Although these tasks have been addressed separately in related research, our approach is the first to simultaneously accommodate both using a single mechanism.

Moreover, previous approaches to automated lexical acquisition can be classified along three dimensions: (1) the type of knowledge acquired by the approach, (2) the amount of training data required by the approach, and (3) the amount of knowledge required by the approach. [Brent 1990; Grefenstette 1992; Resnik 1992; and Zernik 1991], for example, present systems that learn either syntactic or limited semantic knowledge but not both. Statistically-based methods that acquire (usually syntactic) lexical knowledge have been successful (e.g., [Brent 1991; Church & Hanks 1990; Hindle 1990; Resnik 1992; Yarowsky 1992; and Zernik 1991]), but these require the existence of very large, often hand-tagged corpora. Finally, there exist knowledge-intensive methods that acquire syntactic and/or semantic lexical knowledge, but rely heavily on hand-coded world knowledge (e.g., [Berwick 1983; Granger 1977; Hastings et al. 1991; Lytinen & Roberts 1989; and Selfridge 1986]) or hand-coded heuristics that describe how and when to acquire new word definitions (e.g., [Jacobs & Zernik 1988 and Wilensky 1991]).

Our approach to knowledge acquisition for natural language systems differs from existing work in its:

o unified approach to learning lexical knowledge. The same case-based method and case representation are used to simultaneously learn both syntactic and semantic information for unknown words.

o encoding of context as part of a word definition. This allows the definition of a word to change dynamically in response to surrounding phrases and obviates the need for explicit, hand-coded lexical disambiguation heuristics.

o need for relatively little training. In the experiments described below, we obtained promising results after training on only 108 sentences. This implies that the method may work well for small corpora where statistical approaches fail due to lack of data.

o lack of hand-coded heuristics to drive the acquisition process. These are implicitly encoded in the case base.

o leveraging of two existing machine learning paradigms. For case retrieval, we use a decision tree algorithm to improve the performance of a simple k-nearest neighbor similarity metric.

In the remainder of the paper we describe the details of the approach including the case representation, case base construction, and the hybrid approach to case retrieval. We also discuss the results of the two experiments mentioned briefly above.

Case Representation

As discussed in the last section, our goal is to learn part of speech, word sense, and concept activation knowledge for any open class word in a corpus by drawing from a case base of domain-specific, context-sensitive word definitions. However, the case representation relies on three predefined taxonomies, one for each class of knowledge that we're trying to learn. This section, therefore, first briefly describes the taxonomies and then shows how they are used in conjunction with parser-generated knowledge to construct the word definition cases.

The Taxonomies

To start, we set up a taxonomy of allowable word senses. Naturally, these reflect the goals of a particular domain. For the remainder of the paper, we will use the TIPSTER JV corpus as our sample domain. This corpus currently contains over 1300 texts that recount world-wide activity in the area of joint ventures/tie-ups between businesses. A portion of the word sense taxonomy created for the TIPSTER JV domain is shown in Figure 1.
The complete taxonomy includes 14 general word senses and 42 specific word senses. They are used to describe all non-verb open class words.

Figure 1: Word Sense Taxonomy (partial). [Diagram omitted; general word senses include jv-entity (party involved in a tie-up), facility, location, and entity, with specific senses such as generic-company-name, government, person, industry, research, production, sales, country, and city.]

Trainable Natural Language Systems 799

Figure 2: Taxonomy of Concept Types (partial). [Diagram omitted; concept types include total-capitalization (total cash capitalization), a concept indicating a share in the tie-up, and one indicating the type of industry performed within its scope.]

Next, we define a taxonomy of 11 domain-specific concept types which represent a subset of the relevant information to be included in the summary of each joint venture text (see Figure 2). Finally, we use a taxonomy of 18 parts of speech (not shown). The taxonomy specifies 7 parts of speech generally associated with open class words and reserves the remaining 11 parts of speech for closed class words. Although the word sense and concept taxonomies are clearly domain-specific, the part of speech taxonomy is parser-dependent rather than domain-dependent. We emphasize, however, that our approach depends not on the specifics of any of the taxonomies, only on their existence.

Representation of Cases

Each case in the case base represents the definition of a single open class word as well as the context in which it occurs in the corpus.
It is a list of 39 attribute-value pairs that can be grouped into three sets of features:

o word definition features (6) that represent semantic and syntactic knowledge associated with the open class word in the current context

o local context features (20) that represent semantic and syntactic knowledge for the two words preceding and the two words following the current word

o global context features (13) that represent the current state of the parser

Figure 3 shows the case for the word "venture" in a sentence taken directly from the TIPSTER JV corpus. Examine first the word definition features. The open class word defined by this case is "venture" and its part of speech in the current context is a noun modifier (nm).³ The gen-ws and spec-ws features refer to the word's general and specific word senses. In this example, "venture" has been assigned the most general word sense, entity, and has no specific word senses. The concept feature indicates that "venture" activates the domain-specific tie-up concept in this context. There is also a morphol feature associated with the current word that indicates its class of suffix. The nil value used here means that no morphology information was derived for "venture."

³The noun modifier (nm) category covers both adjectives and nouns that act as modifiers. We reserve the noun category for head nouns only.

Next, examine the local context features. For each of the two words that precede and follow the current open class word (referred to in Figure 3 as prev1, prev2, fol1, and fol2), we draw from the taxonomies to specify its part of speech, word senses, and activated concepts. The word immediately following "venture," for example, is the noun "firm." It has been assigned the jv-entity general word sense because it refers to a business, but has no specific word senses and activates no domain-specific concept in this context.
Finally, examine the global context features that encode information about the state of the parser at the word "venture." When the parser reaches the word "venture," it has recognized two major constituents - the subject and verb phrase. Neither activates any domain-specific concepts, but the subject does have general and specific word senses. These are acquired by taking the union of the senses of each word in the noun phrase. (Verbs are currently assigned no general or specific word senses.) Because the direct object has not yet been recognized, all of its corresponding features in the case are empty. In addition to specifying information about each of the main constituents, the global context features also include syntactic and semantic knowledge for the most recent low-level constituent (last constit). A low-level constituent can be either a noun phrase, verb, or prepositional phrase and sometimes coincides with one of the major constituents - the subject, verb phrase, or direct object. This is the case in Figure 3 where the low-level constituent preceding "venture" is just the verb.

Case Base Construction

Using the case representation described in the last section, we create a case base of context-dependent word definitions from a small subset of the sentences in the TIPSTER JV corpus. Because the goal of the approach is to learn syntactic and semantic information for only open class words, we assume the existence of a function word lexicon. This lexicon maintains the part of speech and word senses (if any apply) for 129 function words. None of the function words has any associated domain-specific concepts. The semi-automated training phase alternately consults a human supervisor and a parser (i.e., the CIRCUS parser [Lehnert 1990]) to create a case for each open class word in the training sentences.
More specifically, whenever an open class word is encountered, CIRCUS creates a case for the word, automatically filling in the global context features, the word and morphol features for the unknown word, and the local context features for the preceding two words (i.e., the prev1 and prev2 features). Local context features for the following two words (i.e., fol1 and fol2) will be added to the case after CIRCUS reaches them in its left-to-right traversal of the sentence. The user is consulted via a menu-driven interface only to specify the current word's part of speech, word senses, and concept activation information. These values are stored in the p-o-s, gen-ws, spec-ws, and concept word definition features and are used by the parser to process the current word. When CIRCUS finishes its analysis of the training sentences, it has generated one case for every occurrence of an open class word.

Figure 3: Case for "venture". [Feature-structure diagram omitted; it lists the word definition, local context, and global context features.]

Case Retrieval

Once the case base has been constructed, we can use it to determine the definition of new words in the corpus. Assume, for example, that we want to know the part of speech, word senses, and activated concepts for "Toyo's" in the sentence:

Yasui said this is Toyo's and JAL's third hotel joint venture.

First, CIRCUS parses the sentence and creates a probe case for "Toyo's" filling in the word and morphol features of the case as well as its global and local context features using the method described in the last section.⁴ The only difference between a test case and a training case is the gen-ws, spec-ws, p-o-s, and concept features for the unknown word. During training, the human supervisor specifies values for these missing features, but during testing they are omitted from the case entirely.
It is the job of the case retrieval algorithm to find the training cases that are most similar to the probe and use them to predict values for the missing features of the unknown word. We use the following algorithm for this task:

1. Compare the probe to each case in the case base, counting the number of features that match (i.e., match = 1, mismatch = 0). Do not include the missing features in the comparison. Only give partial credit (.5) for matches on nil's.

2. Keep the 10 highest-scoring cases.

3. Of these, return the case(s) whose word matches the unknown word, if any exist. Otherwise, return all 10 cases.⁵

4. Let the retrieved cases vote on the values for the probe's missing features.

⁴There is a bootstrapping problem in that the fol1 and fol2 features are needed to specify the probe case for "Toyo's." This problem will be addressed in the second experiment. For now, assume that the parser has access to all fol1 and fol2 features at the position of the unknown word.

The case retrieval algorithm is essentially a k-nearest neighbors matching algorithm (k = 10) with a bias toward cases whose word matches the unknown word. An interesting feature of the algorithm is that it allows a word to take on a meaning different from any it received during the training phase. However, one problem with the retrieval mechanism is that it assumes that all features are equally important for learning part of speech, word sense, and concept activation knowledge. Intuitively, it seems that accurate prediction of each class of missing information may rely on very different subsets of the feature set. Unfortunately, it is difficult to know which combinations of features will best predict each class of knowledge without trying all of them.

There are machine learning algorithms, like decision tree algorithms (see [Quinlan 1986]), however, that can be used to perform the feature specification task.
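The retrieve-then-vote procedure can be sketched as follows; the case fields and feature names are illustrative, and the paper's tie handling on the k-th score is omitted for brevity:

```python
# Sketch of k-nearest-neighbor case retrieval with a word-match bias and
# per-feature voting. Cases are flat dicts; feature names are illustrative.
from collections import Counter

def similarity(probe, case):
    """Count matching features; matches on nil (None) earn half credit."""
    score = 0.0
    for feat, value in probe.items():
        if case.get(feat) == value:
            score += 0.5 if value is None else 1.0
    return score

def retrieve(probe, case_base, missing, k=10):
    ranked = sorted(case_base, key=lambda c: similarity(probe, c), reverse=True)
    top = ranked[:k]
    # bias: prefer retrieved cases that define the same word as the probe
    same_word = [c for c in top if c.get("word") == probe.get("word")]
    voters = same_word or top
    # let the retrieved cases vote on each missing feature
    return {f: Counter(c[f] for c in voters).most_common(1)[0][0]
            for f in missing}

case_base = [
    {"word": "venture", "prev1-p-o-s": "adj", "p-o-s": "nm",   "gen-ws": "entity"},
    {"word": "venture", "prev1-p-o-s": "adj", "p-o-s": "nm",   "gen-ws": "entity"},
    {"word": "venture", "prev1-p-o-s": "det", "p-o-s": "noun", "gen-ws": "entity"},
    {"word": "firm",    "prev1-p-o-s": "det", "p-o-s": "noun", "gen-ws": "jv-entity"},
]
probe = {"word": "venture", "prev1-p-o-s": "adj"}  # missing features omitted
pred = retrieve(probe, case_base, missing=("p-o-s", "gen-ws"))
```

Because the missing features are simply absent from the probe dict, they never enter the similarity count, mirroring step 1 of the algorithm.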
Very briefly, decision tree algorithms learn to classify objects into one of n classes by finding the features that are most important for the classification and creating a tree that branches on each of them until a classification can be made. We use Quinlan's C4.5 decision tree system [Quinlan 1992] to select the features to be included for k-nearest neighbor case retrieval:

1. Given the cases from the training sentences as input, let C4.5 create a decision tree for each missing feature.⁶

2. Note the features that occurred in each tree. This essentially produces, for each of the 4 missing attributes, a list of all features that C4.5 found useful for predicting its value.

3. Instead of invoking the case retrieval algorithm once for each test case, run it 4 times, once for each missing attribute to be predicted. In the retrieval for attribute X, however, include only the features C4.5 found to be important for predicting X in the k-nearest neighbors calculations.

⁵More than 10 cases will be returned if there are ties.

⁶We omit the p-o-s, gen-ws, spec-ws, and concept word definition features from training cases because those are the features whose values the decision trees are trying to predict. In addition, we omit the word, prev1-word, prev2-word, fol1-word, and fol2-word features because of their large branching factor. These "word" features are always included in the k-nearest neighbors calculations, however.

Trainable Natural Language Systems 801

By using C4.5 for feature specification, we automatically tune the case retrieval algorithm for independent prediction of part of speech, word senses, and concept activation.⁷

Experiment 1

In this section we describe an application that uses the case-based approach described above to determine the definition of unknown words given a nearly complete domain-specific dictionary. We assume the existence of the function word lexicon briefly described above (129 entries) and then create a case base of context-sensitive word definitions for all open class words in 120 sentences from the TIPSTER JV corpus.
In each of 10 experiments, we remove from the case base (of 2056 instances) all cases associated with 12 randomly chosen sentences and use these as a test set.⁸ For each test case, we then invoke the case retrieval algorithm to predict the part of speech, general and specific word senses, and concept activation information of its unknown word while leaving the rest of the case intact. This experimental design simulates a nearly complete dictionary in that it assumes perfect knowledge of the global and local context of the unknown word.

Figure 4 shows the average percentage correct for prediction of each feature across the 10 runs and compares them to two baselines.⁹

Missing Feature | Case-Based Approach | Random Selection | Default
p-o-s           | 93.0%               | 34.3%            | 81.5%
gen-ws          | 78.0%               | 17.0%            | 25.6%
spec-ws         | 80.4%               | 37.3%            | 58.1%
concept         | 95.1%               | 84.2%            | 91.7%

Figure 4: Experiment 1 Results (% correct for prediction of each feature)

The first baseline indicates the expected accuracy of a system that randomly guesses a legal value for each missing feature based on the distribution of values across the test set. The second baseline shows the performance of a system that always chooses the most frequent value as a default.

⁷Space limitations preclude the inclusion of experiments that compare the original case retrieval algorithm with the modified version. Those results are discussed in [Cardie 1993], however, which focuses on the contributions of this research to machine learning.

⁸In each experiment, a different set of 12 sentences is chosen. This amounts to a 10-fold cross validation testing scheme.

⁹Note that all results indicate performance for only the open class words. When function words are included, all percentages increase. For part of speech prediction, for example, the case-based results increase from 93.0% to 96.4%.
The default for the concept activation feature (nil) achieves quite good results, for example. (This is because relatively few words actually activate concepts in this domain.) Chi-square significance tests on the associated frequencies show that the case-based approach does significantly better than both of the baselines (p = .01).

Experiment 2

In the second application, we assume only a very sparse dictionary (129 function words) and use the case-based approach to acquire definitions of all open class words. We use the same experimental design as experiment 1 - we create a case base from 120 TIPSTER JV sentences (2056 cases) and use 10-fold cross validation. During testing, however, we now make no assumptions about the availability of definitions for words surrounding the unknown word. CIRCUS parses each test sentence and creates a test case each time an open class word is encountered, filling in the global context features, the word and morphol features for the unknown word, and the local context features for the preceding two words. If the following two words are both function words, then fol1 and fol2 features can also easily be specified. In most cases, however, one or both of fol1 and fol2 are open class words for which the system has no definition. In these cases, the parser makes an educated guess based on the training instances:

1. If the word did not appear during training, fill in the word features, but use nil as the value for the remaining fol1 and fol2 attributes.

2. If the word appeared during training, let each fol1 and fol2 feature be the union of the values that occurred in the training phase definitions.

We also relax the k-nearest neighbors matching algorithm and allow a non-empty intersection on any fol1 or fol2 feature to count as a full match. (Matches on nil still receive only half credit.)
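The relaxed matcher for Experiment 2 can be sketched as follows; feature names are illustrative, and a probe's fol1/fol2 feature may hold a set of values unioned from training:

```python
# Sketch of the Experiment 2 relaxed feature matcher: a probe feature that
# holds the union of training-phase values matches a stored case's value
# whenever their intersection is non-empty; nil matches keep half credit.
# Feature names below are illustrative.

def relaxed_match(probe_value, case_value):
    if probe_value is None or case_value is None:
        # nil matches nil for half credit; nil vs. anything else is a mismatch
        return 0.5 if probe_value == case_value else 0.0
    p = probe_value if isinstance(probe_value, set) else {probe_value}
    c = case_value if isinstance(case_value, set) else {case_value}
    return 1.0 if p & c else 0.0  # any overlap counts as a full match

def relaxed_similarity(probe, case):
    return sum(relaxed_match(v, case.get(f)) for f, v in probe.items())

# e.g., the following word was seen in training as both a noun and a noun
# modifier, so the probe carries the union of those values:
probe = {"fol1-p-o-s": {"noun", "nm"}, "fol2-p-o-s": None}
case = {"fol1-p-o-s": "noun", "fol2-p-o-s": None}
score = relaxed_similarity(probe, case)  # 1.0 for the overlap + 0.5 for nil
```

Treating single values as singleton sets lets the same comparison cover both the ordinary and the union-valued features.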
Results for experiment 2 are shown in Figure 5 along with the same baseline comparisons from experiment 1.

Missing Feature | Case-Based Approach | Random Selection | Default
p-o-s           | 91.0%               | 34.3%            | 81.5%
gen-ws          | 65.3%               | 17.0%            | 25.6%
spec-ws         | 74.0%               | 37.3%            | 58.1%
concept         | 94.3%               | 84.2%            | 91.7%

Figure 5: Experiment 2 Results (% correct for prediction of each feature)

Not surprisingly, all of the results have dropped somewhat; however, chi-square analysis still shows that the performance of the case-based approach is significantly better than the baselines (p = .01).

Conclusions

We have presented a new, case-based approach to the acquisition of lexical knowledge that simultaneously learns 3 classes of knowledge using the same case representation and requires no hand-coded acquisition heuristics and relatively little training. We create a case base of context-sensitive word definitions and use it to learn part of speech, word sense, and concept activation knowledge for unknown words. The case-based technique employs a decision tree algorithm to specify the features relevant for simple k-nearest neighbor case retrieval and allows the definition of a word to change in response to new contexts without the use of lexical disambiguation heuristics. We have tested our approach in two practical applications and found it to perform significantly better than baselines that randomly guess or choose default values for the features of the unknown word. Given results in previous work (see [Cardie 1992]), however, we believe performance can be much improved through the use of case adaptation heuristics that exploit knowledge implicit in the taxonomies that is unavailable to the learning algorithms. In addition, although this paper discusses only two applications of the approach, many more exist.
Explicit domain-specific lexicons can be constructed, for example, by saving the definitions acquired during the testing phase of the experiments discussed above. Finally, we have demonstrated that the case-based technique described here is a promising approach to dictionary construction and knowledge acquisition for sentence analysis in limited domains.

Acknowledgments

I wish to thank Professor J. Ross Quinlan for supplying the C4.5 decision tree system. This research was supported by the Office of Naval Research Contract N00014-92-J-1427 and NSF Grant No. EEC-9209623, State/Industry/University Cooperative Research on Intelligent Information Retrieval.

References

Berwick, R. 1983. Learning word meanings from examples. Proceedings, Eighth International Joint Conference on Artificial Intelligence. Karlsruhe, Germany, pp. 459-461.

Brent, M. 1991. Automatic acquisition of subcategorization frames from untagged text. Proceedings, 29th Annual Meeting of the Association for Computational Linguistics. University of California, Berkeley, Association for Computational Linguistics, pp. 209-214.

Brent, M. 1990. Semantic classification of verbs from their syntactic contexts: automated lexicography with implications for child language acquisition. Proceedings, Twelfth Annual Conference of the Cognitive Science Society. Cambridge, MA, The Cognitive Science Society, pp. 428-437.

Cardie, C. 1993. Using Decision Trees to Improve Case-Based Learning. To appear in, P. Utgoff (Ed.), Proceedings, Tenth International Conference on Machine Learning. University of Massachusetts, Amherst, MA.

Cardie, C. 1992. Learning to Disambiguate Relative Pronouns. Proceedings, Tenth National Conference on Artificial Intelligence. San Jose, CA, AAAI Press/MIT Press, pp. 38-43.

Church, K., & Hanks, P. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16.

Granger, R. 1977. FOUL-UP: A program that figures out meanings of words from context.
Proceedings, Fifth International Joint Conference on Artificial Intelligence. Morgan Kaufmann, pp. 172-178.

Grefenstette, G. 1992. SEXTANT: Exploring unexplored contexts for semantic extraction from syntactic analysis. Proceedings, 30th Annual Meeting of the Association for Computational Linguistics. University of Delaware, Newark, DE, Association for Computational Linguistics, pp. 324-326.

Hastings, P., Lytinen, S., & Lindsay, R. 1991. Learning Words from Context. Proceedings, Eighth International Conference on Machine Learning. Northwestern University, Chicago, IL.

Hindle, D. 1990. Noun classification from predicate-argument structures. Proceedings, 28th Annual Meeting of the Association for Computational Linguistics. University of Pittsburgh, Association for Computational Linguistics, pp. 268-275.

Jacobs, P., & Zernik, U. 1988. Acquiring Lexical Knowledge from Text: A Case Study. Proceedings, Seventh National Conference on Artificial Intelligence. St. Paul, MN, Morgan Kaufmann, pp. 739-744.

Lehnert, W. 1990. Symbolic/Subsymbolic Sentence Analysis: Exploiting the Best of Two Worlds. In J. Barnden, & J. Pollack (Eds.), Advances in Connectionist and Neural Computation Theory. Norwood, NJ, Ablex Publishers, pp. 135-164.

Lehnert, W., Cardie, C., Fisher, D., Riloff, E., & Williams, R. 1991a. University of Massachusetts: Description of the CIRCUS System as Used for MUC-3. Proceedings, Third Message Understanding Conference (MUC-3). San Diego, CA, Morgan Kaufmann, pp. 223-233.

Lehnert, W., Cardie, C., Fisher, D., Riloff, E., & Williams, R. 1991b. University of Massachusetts: MUC-3 Test Results and Analysis. Proceedings, Third Message Understanding Conference (MUC-3). San Diego, CA, Morgan Kaufmann, pp. 116-119.

Lytinen, S., & Roberts, S. 1989. Lexical Acquisition as a By-Product of Natural Language Processing. Proceedings, IJCAI-89 Workshop on Lexical Acquisition.

Proceedings, Fourth Message Understanding Conference (MUC-4). 1992. McLean, VA, Morgan Kaufmann.
Proceedings, Third Message Understanding Conference (MUC-3). 1991. San Diego, CA, Morgan Kaufmann.

Quinlan, J. R. 1992. C4.5: Programs for Machine Learning. Morgan Kaufmann.

Quinlan, J. R. 1986. Induction of decision trees. Machine Learning, 1, pp. 81-106.

Resnik, P. 1992. A class-based approach to lexical discovery. Proceedings, 30th Annual Meeting of the Association for Computational Linguistics. University of Delaware, Newark, DE, Association for Computational Linguistics, pp. 327-329.

Selfridge, M. 1986. A computer model of child language learning. Artificial Intelligence, 29, pp. 171-216.

Wilensky, R. 1991. Extending the Lexicon by Exploiting Subregularities. Tech. Report No. UCB/CSD 91/618. Computer Science Division (EECS), University of California, Berkeley.

Yarowsky, D. 1992. Word-Sense Disambiguation Using Statistical Models of Roget's Categories Trained on Large Corpora. Proceedings, COLING-92.

Zernik, U. 1991. Train1 vs. Train2: Tagging Word Senses in Corpus. In U. Zernik (Ed.), Lexical Acquisition: Exploiting On-Line Resources to Build a Lexicon. Hillsdale, NJ, Lawrence Erlbaum Associates, pp. 91-112.

Trainable Natural Language Systems 803
Roland Zito-Wolf and Richard Alterman
Computer Science Department/Center for Complex Systems
Brandeis University, Waltham, MA 02254
rjz@cs.brandeis.edu, alterman@cs.brandeis.edu

Abstract

Case-based reasoning refers to the class of memory-based problem solving methods which emphasize the adaptation of recalled solutions (explanations, diagnoses, plans) over the generation of solutions from first principles. CBR has become a popular methodology, resulting in a proliferation of case organization and representation proposals. The goal of this paper is to sort through some of these proposals. Using the formal models of "procedure" and "case-based reasoning" introduced in Zito-Wolf and Alterman (1992), we compare three current proposals for the organization of procedural case-bases: individual cases, microcases, and multicases. We give a worst-case analysis that shows the advantages of the multicase in terms of case storage and retrieval costs. The model predicts that multicases reduce case storage and retrieval costs as compared to the other two models. We then provide some empirical evidence from an implemented system that suggests that the trends observed in the formal model are also observable in case bases of practical size.

1 Introduction

In recent years, Artificial Intelligence researchers have become increasingly interested in techniques for reasoning directly from examples rather than from the abstract knowledge one might distill from them. Case-based reasoning (CBR) refers to the class of memory-based problem-solving methods which emphasize the adaptation of recalled solutions (explanations, diagnoses, plans) over the generation of solutions from first principles (such as a domain theory). People rely heavily on such techniques both in expert domains, such as medical diagnosis and legal reasoning, and in coping with the more mundane problems that arise in everyday life (Kolodner & Simpson, 1989).
The variety of applications of CBR has resulted in a proliferation of case organization and representation proposals. Until now, the evaluation of proposals has been informal, making comparisons difficult. The goal of this paper is to put some of these issues on a firmer foundation.

Because it is difficult to discuss these issues in any detail independent of specific tasks, this paper will focus on the organization of procedural knowledge for planning tasks. Because procedures are typically executed many times, providing large numbers of related yet distinct cases, and they are complex, with many interrelated components (i.e., steps), procedural domains are a good test domain for examining these issues. It is also an area where perhaps the largest number of different proposals have been made, which we interpret as reflecting the difficulty of the representational problem.

We will present a formal model which will allow for the comparison of case-base organization proposals. We discuss three existing proposals, the third of which, the multicase, combines the benefits of the other two. We give a worst-case analysis that shows the advantages of the multicase in terms of case storage and retrieval costs. We also provide some empirical evidence that suggests that the trends observed in the formal model are also observable in case bases of practical size. For an extended analysis that also discusses additional factors (e.g., adaptation costs) see Zito-Wolf (1993).

2 Representing Procedures

Let a problem as presented to the system consist of a situation (a world state) plus a goal to be achieved. The solution to a problem will be a procedure, that is, a sequence of steps that achieves the goal in that situation. Assume we are given a set of examples of some procedure. What shall we call a case? In this paper we will use the terms example or episode for an instance of a problem and a solution procedure.
The term case will be reserved for the unit of storage and retrieval from memory. Many CBR systems, especially early ones (e.g., CHEF, CYRUS), have equated the two; however, they are logically distinct, and it is useful to distinguish them.

Case-Based Reasoning 73
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Storage Requirements Although memory is becoming increasingly plentiful, at any given time memory is a finite resource which needs to be traded off against possible uses. Consequently, one significant issue is the amount of memory required to store all the cases. As the quality of solution retrieved by a case-based reasoner is expected to be monotonically increasing with the number of distinct, relevant cases available to it, one wants to accommodate as many cases as possible. On the other hand, the number of potential cases can be very large - it is exponential in the complexity (number of features) of the cases.

Retrieval Cost Retrieval cost is a function of the number of cases examined and the effort required to determine their relevance to the current situation. Both the number of cases to examine and the cost of deciding among them can be large in practical case bases. When the number of cases becomes large, retrieval normally relies on indexes. An index is an auxiliary data structure that provides a direct mapping from each specific feature of interest to cases having that feature. Complete indexing may not be practical for all features, however; for example, the type of data involved may not admit of simple indexes, or the feature of interest may be computed only at execution time. Hence, we will look at both indexed and unindexed retrieval.

3 Modelling Procedure Organizations

In this section we will compare the behavior of three representations for procedural case-bases, based on the formal models of "procedure" and "case-based reasoning" introduced in Zito-Wolf and Alterman (1992).
The results of this section are summarized in Table 1. We assume the procedural knowledge to be captured has the form of a complete binary decision tree T of uniform depth n. Each node i in T contains a step to be performed plus a decision selecting the next node to be executed. T therefore contains |T| = 2^n - 1 steps and 2^(n-1) - 1 decisions (those in the leaves are ignored). Each procedure execution episode will consist of n - 1 decisions selecting n steps along some path in T from the root to a leaf node. Let the input to the decision at a node i be the set of binary features F_i, so that F = U_{i in T} F_i is the set of all features referenced by the procedure. We assume there exists some upper bound f = max_{i in T} |F_i| on the number of features tested by any specific decision, and that f is small compared to F.[1] To estimate F we choose (n - 1)f. This corresponds (for example) to a procedure composed of n - 1 distinct decisions occurring in a fixed order.

Storage Case-based reasoning for procedure generation is the example-based selection of a sequence of steps to achieve a given goal. Each occasion for selection is a problem P_i, the process of searching through the case-base to solve a problem is a retrieval, and the number of steps determined by each problem is the problem size S_P. The solution to each problem will be encoded in memory as some set of cases C_P. Each case pairs a problem solution with a conjunction of features for which it applies. Since it has been stipulated that a given decision references at most f features, at most 2^f cases will be required to represent a decision, one for each possible conjunction of the features and their negations. The union of all the C_P is the case-base C.

[1] For simplicity, in the remainder of the paper we will write X for |X| where there is no likelihood of confusion.
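The counts in this tree model can be sanity-checked numerically. A minimal sketch (the function name is ours, not the paper's):

```python
# Counts for the procedure model above: a complete binary decision tree
# of uniform depth n has 2^n - 1 step nodes and 2^(n-1) - 1 internal
# decisions, and F is estimated as (n - 1) * f.

def model_counts(n, f):
    """Return (steps, decisions, estimated total features) for depth n,
    with at most f features tested per decision."""
    steps = 2 ** n - 1             # |T|, one step per node
    decisions = 2 ** (n - 1) - 1   # decisions at the leaves are ignored
    features = (n - 1) * f         # the estimate F = (n - 1) f used above
    return steps, decisions, features
```

For example, a tree of depth 3 with 2 features per decision has 7 steps, 3 decisions, and an estimated 4 total features.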
The case-base size S(C) will be measured as the number of step instances in the case base, that is, the product of the number of cases |C| and the problem size. The set of examples from which a given case base is derived will be denoted E.

Retrieval and Indexing Consider a linear search model of case retrieval, in which the retrieval effort per problem R_P is proportional to the number of feature tests made. R_P is the product of the number of cases to be searched through and the number of features to be tested per case. If P is the number of problems per episode, then the total retrieval effort per episode R = R_P * P. Because case retrieval via linear search involves effort exponential in the procedure size (n), most CBR systems use some form of indexing for faster retrieval.[2] We will model an index as a boolean discrimination network which tests just enough features to discriminate all the cases. Assuming that the index is well-constructed, the decision cost per problem R_XP is proportional to the depth d of the index, that is, log base 2 of the number of cases entering into a given decision, and the per-episode cost R_X = R_XP * P. The size of an index (in nodes) is 2^d - 1, of the same order as the number of cases indexed.

3.1 Individual Cases

The first representation we will consider is the storage of individual cases (CHEF, Hammond 1990; COOKIE, McCartney 1990). In this method the unit of retrieval from memory, the case, is taken to be the same as the unit of knowledge presentation, the episode. Procedure execution over such a case base consists of a single up-front decision among alternative cases (Figure 1a). That is, case retrieval returns a single complete example episode for the target task, which (usually after some tweaking) is interpreted as a procedure for the desired task. Also in this class are MOP-based systems (Kolodner, 1983; Lebowitz, 1983; Turner, 1989).
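The linear-search and index cost model just introduced can be sketched in a few lines. The helper names are ours; as in the text, index depth is taken to be about log base 2 of the number of cases entering a decision:

```python
import math

# Sketch of the retrieval-cost model above: linear search costs
# (cases searched) x (features tested per case) per problem, while a
# well-constructed index costs only its depth per problem.

def linear_retrieval_effort(num_cases, features_per_case, problems_per_episode=1):
    """R = R_P * P, where R_P = cases searched x features tested per case."""
    r_p = num_cases * features_per_case
    return r_p * problems_per_episode

def indexed_decision_cost(num_cases, problems_per_episode=1):
    """R_X = R_XP * P, where R_XP is the index depth d ~ log2(cases)."""
    d = math.ceil(math.log2(num_cases))
    return d * problems_per_episode

def index_size(num_cases):
    """A boolean discrimination network of depth d has 2^d - 1 nodes."""
    d = math.ceil(math.log2(num_cases))
    return 2 ** d - 1
```

For 8 cases with 3 features each, one linear-search retrieval costs 24 feature tests, while an indexed retrieval costs only 3 decisions through an index of 7 nodes.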
[2] The term "indexing" is used in the CBR literature in at least three distinct senses: to refer to performance methods that accelerate access to subsets of the case base; to refer to organizing methods that group cases observed to have similar features, typically in the service of generalization (cf. CYRUS and IPP); and to refer to the process of encoding knowledge by adding features to a case-base to define sets of cases with related content. Our analysis focuses on the first of these meanings.

74 Zito-Wolf

MOP indexes, though more complex than our model index, serve the same function. The key similarity is that cases are stored and accessed as wholes; for the purpose of this paper the indexing differences can be ignored.

Storage Each episode of (i.e., path through) T is a case, so the problem size S_P = n. The entire mapping from features to procedure is performed in one retrieval (P = 1) with 2^(n-1) potential outcomes. The number of potential cases can be estimated from the total number of features referenced, yielding C = 2^((n-1)f), S(C) = nC, and R_P = FC.

Retrieval Cost Unindexed retrieval cost is determined by the number of feature comparisons made, which is the size of the case base times the number of features per case. Only one retrieval is required, so that R = R_P * P = F * 2^F. Using an index, the decision cost is the log base 2 of the number of cases entering into a given decision, or O(F), hence the popularity of indexing. A simple index requires on the order of as many nodes as there are cases indexed, so its storage can be ignored.[3] Most notable here is the rapid growth in possible cases as either the number of features or procedure steps increases. This is due to the fact that the cases are significantly redundant - each case instantiates a complete path through the tree.
To represent procedural knowledge in an individual-case-based system, for, say, one's knowledge of procedures for telephoning, one would have to store a case for every possible event sequence and every situation type that could be encountered in executing one's phone-call procedure, or at least a significant number of them. Note that as the case-base fills up with variant episodes, retrieval also becomes more expensive.

3.2 Microcases

The second class of procedure representations is the microcase. In this method, each example presented to the system is converted into many cases, typically one for each step of the episode. All the cases so created go into a common case-base. At execution time, procedures are not so much retrieved as incrementally reconstructed, steps being selected sequentially by separate retrievals over the case-base (Figure 1b). Microcases avoid the redundant storage and difficulty with transfer that we encountered with individual cases. Micro-case-based systems have been applied to planning (Langley and Allen, 1990), parsing (Goodman, 1991), and word pronunciation (NetTalk, Stanfill and Waltz 1986).

Storage To encode procedure T using microcases we make each selection of a step a separate problem. Then S_P = 1, and C_P = 2^f cases per problem. There are P = n problems per episode, but 2^n - 1 problems to be encoded to represent the entire procedure, giving C = 2^f(2^n - 1).

Retrieval Cost The price of the microcase's additional flexibility and reduced storage is increased retrieval costs: a case retrieval occurs at every procedure step. Each such decision has to select among a large number of options, namely, all possible steps. Unindexed retrieval effort per problem is R_P = (F + n) * 2^(n+f). Note that n additional features are added to distinguish the 2^n - 1 potential "current positions" within the represented procedure; that is, the structure of the procedure T needs to be encoded implicitly as extra features referenced by the cases.
Total retrieval effort R is n times the per-problem figure, or O(n^2 * 2^(n+f)). Indexed decision cost is the log base 2 of the number of cases entering into a given decision, or O(n + f). There is a single index of size |C|, or O(2^(n+f)).

3.3 The Multicase

We have proposed a third organization, the multicase (Zito-Wolf and Alterman 1992). By a multicase we mean a structure which merges many individual episodes but retains a representation of the overall structure of those episodes. Episodes are merged through the introduction of conditionals, so that the details of the individual episodes can be retained (Figure 1c). Each example is represented by some specific path through the procedure graph. Episodic memory is organized around the underlying procedure, which serves to partition the procedure into many individual decisions. This organization efficiently accommodates variation among episodes, and is moreover a convenient vehicle for organizing related knowledge, such as unexpected events or episodes where the plan failed.

The key difference between individual cases and multicases is that individual cases store episodes in memory as separate structures linked by indexes or abstraction hierarchies, whereas multicases index them at a finer-grained level by segmenting them and distributing them across the partonomic (i.e., step) structure of the procedure. The key difference between the multicase and microcase is that for multicases the decision overhead is reduced by partitioning the pool of cases according to the structure of the procedure, whereas for microcases all the cases go into a single pool, so that each decision must decide among a much larger range of options.

The historical antecedents of all of the case models discussed here are the ideas of scripts (Schank & Abelson, 1977), MOPs, and their elements, scenes and tracks (Schank, 1982).
The issue is how to organize an agent's episodic memory using ideas like this in the most effective manner. This is the problem the multicase addresses.

Figure 1: Three Methods of Representing Episodes: (a) Individual Cases, (b) Microcases, (c) Multicase

Storage The case base is partitioned by the type of decision (e.g., next-step, role-choice); the structure of the procedure is expressed explicitly in the structure of the multicase. We have S_P = 1, |P| = n - 1 problems per episode, with |C_P| = 2^f and 2^(n-1) - 1 problems overall, for a total of |C| = O(2^(n+f-1)).

Retrieval Cost Because the multicase focuses on only one decision at a time, the number of cases that must be searched through and the number of features needing to be consulted at any given decision point are greatly reduced. For the multicase, only f features need be consulted per decision, so only 2^f cases need be examined; the rest of the cases are only relevant to other decisions. Thus R_P = f * 2^f and R = O(nf * 2^f). Indexed decision cost is the log base 2 of the number of cases entering into a given decision, or O(f). For multicases there may be as many as 2^(n-1) indexes, one for each decision; the total index size is 2^(n+f-1).

4 Empirical Demonstration

Our source data derives from a sample of runs of the FLOABN system on problems involving the operation of household and office devices such as telephones, copiers, and vending machines (Alterman, Zito-Wolf, and Carpenter 1990; Alterman, Carpenter, and Zito-Wolf 1990). FLOABN was provided initially with a skeleton multicase for a simple procedure for the usage of each class of device. It was then presented with a sequence of 50 problem situations involving these procedures; for example, phone calls varied in destinations, locations, call types, and phone features.
There were on average 25 steps per episode, yielding in excess of 1200 cases. For each episode we collected over 60 items of data about the evolution of memory and procedure performance. Each run of the example sequence required approximately 8 hours on an 8 Mbyte Macintosh IIx under Allegro Common Lisp.

4.1 Comparing Representation Methods

Our methodology for comparing the three case-organization models will be to use our formal model of procedures to define mappings between the multicase model and the individual-case and microcase models. We will gather data from runs of FLOABN to estimate practical values for the relevant parameters of the model: the total number of different features that were observed (F), the average number of features referenced in making a decision (f), the average number of decisions needing to be made per episode, and the number of cases C. We then run this data through the model to derive costs for the three methods.

It would have been preferable in some sense to compare implementations of the three methods directly rather than comparing projections based on a single implementation. Problems emerged from such an attempt. Several operations that were facilitated by the multicase - for example, instruction-processing and plan-modification operations - were hard to do, and in some cases even to define, on other representations. This is because these functions required an evolving plan schema representation, which the multicase provides and the other organizations do not.

4.2 Storage Costs

Figure 2 compares memory requirements for case storage. The greatest storage requirements are for the individual case method. Individual cases save much redundant information with each nominally different case. In contrast, case memory growth for the microcase and multicase methods tails off as the memory becomes familiar with the range of variation of its procedures. The multicase method uses less storage than the microcase method.
This is because the multicase partitions its space of decisions much more finely, and consequently, many more decisions have only one option and hence do not require that any cases be stored for them. Figure 3 compares the number of decisions with > 1 option for each method. The graph for microcases represents something of an upper bound, since microcases strive to have as many options as possible at each decision. Comparison with the graph for multicases shows that in our example situations, anywhere from 50-75% of these decisions are unnecessary. This suggests that microcases overemphasize transfer at the cost of greatly increasing the amount of knowledge required to "learn" the procedure.

Table 1: Storage and Retrieval Cost Summary

Figure 2: Memory Usage Comparisons

Figure 3: Cumulative Decisions per Episode

4.3 Decision Cost

Now we wish to use our empirical data to compare decision costs. Recall that our model of unindexed decision cost R_P was the product of features tested and cases examined per problem, whereas for indexed retrieval, we took R_XP to be the depth of the smallest index needed to discriminate the cases under consideration. The required quantities were determined from the example sequence. Figure 4 shows the number of cases available per decision for the three methods. The number of features referenced per case was measured at the end of the problem sequence to be f ≈ 7 per case and total F ≈ 16. Lastly, the number of retrievals per episode was presented in the previous section (Figure 3).

Individual cases introduce a difficulty here. It is unreasonable to assume that only one up-front case retrieval is needed per episode, because it ignores the cost of the runtime decision-making which the other methods are being "charged" for.
Thus, since each such unanticipated circumstance incurs at least one additional retrieval, we have added to the count of retrievals per episode in Figure 3 one retrieval per unanticipated decision event or relevant situation feature.

Figure 4: Choices Per Decision

Figure 5 graphs indexed retrieval effort for the three methods. The first result we observe is that microcases have much larger retrieval effort - an order of magnitude larger than multicases even in the fully indexed case. This difference is the product of the two differences shown above: the number of cases requiring a decision, and the number of cases examined per decision. Secondly, we observe that multicases require effort less than (but roughly comparable to) that for individual cases. The individual-case method's advantage of making fewer decisions per episode is more than offset by the extra costs of handling more contingencies and sorting through more cases per decision. For unindexed retrieval (not shown), the multicase has about 1% of the cost of individual cases, while the microcase remains the most costly of the three.

Figure 5: Indexed Retrieval Effort Comparisons

5 Concluding Remarks

The paper has presented a framework for comparing case-based procedure models over issues concerning case representation and organization. The framework is a formal model of procedures and their representation that enables us to characterize analytically several important properties of each method. In this paper we focused on storage cost and retrieval effort. Three current proposals were analyzed within this framework: microcases, individual cases, and multicase.
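As a concrete check on the Section 3 analysis, the storage and indexed-retrieval expressions for the three organizations can be evaluated for small n and f. The sketch below treats the asymptotic bounds as exact counts, an assumption made purely for illustration:

```python
# Per-organization (cases, step-instance storage, indexed retrieval cost
# per episode) from the Section 3 expressions, taken as exact counts.

def individual_cases(n, f):
    F = (n - 1) * f
    cases = 2 ** F                # C = 2^((n-1)f)
    storage = n * cases           # S(C) = nC; each case stores a full path
    indexed_cost = F              # one up-front decision of depth O(F)
    return cases, storage, indexed_cost

def microcases(n, f):
    cases = (2 ** n - 1) * 2 ** f     # C = 2^f (2^n - 1)
    storage = cases                   # one step per case
    indexed_cost = n * (n + f)        # n decisions of depth O(n + f)
    return cases, storage, indexed_cost

def multicase(n, f):
    cases = (2 ** (n - 1) - 1) * 2 ** f   # 2^f cases per decision
    storage = cases
    indexed_cost = (n - 1) * f            # n - 1 decisions of depth O(f)
    return cases, storage, indexed_cost
```

For n = 4 and f = 2 this reproduces the qualitative ordering reported here: multicase storage (28 step instances) < microcase (60) < individual cases (256), and per-episode indexed retrieval for the multicase (6) is comparable to individual cases (6), while microcases (24) are about a factor of n worse.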
The formal analysis addressed the large scale behavior of the different models, a kind of worst-case analysis. Results of this analysis can be summarized: the multicase has the least storage requirements and individual cases are worse by an exponential factor. For retrieval cost, multicase and individual cases have comparable costs, with microcases a factor of n worse (where n is the depth of the procedure).

To evaluate the behavior in a more average case, empirical data was collected from a run of FLOABN in which over a thousand cases were collected. The results of this data can be summarized: the empirical data qualitatively confirms the formal analysis with regard to storage and retrieval costs. We note, however, that the difference in indexed decision cost between individual cases and multicases, though of the expected polarity, was not as significant as the formal model leads us to expect. This is because the number of cases was not yet large enough for the proliferation of cases to dominate retrieval cost.

Both our formal analysis and our empirical data emphasize that representational choices in CBR systems have significant effects on system performance. We feel that this paper is the first step of a larger project in the exploration and characterization of efficient and useful case-base organizations, and an essential step in the development of truly large-scale CBR systems.

References

Richard Alterman, Tamitha Carpenter, and Roland Zito-Wolf. An architecture for understanding in planning, action, and learning. SIGART Bulletin, 2(4):14-19, 1991. Special Issue on Integrated Cognitive Architectures.

Richard Alterman, Roland Zito-Wolf, and Tamitha Carpenter. Interaction, comprehension, and instruction usage. Journal of the Learning Sciences, 1(4), 1991.

Marc Goodman. A case-based, inductive architecture for natural language processing.
In AAAI Spring Symposium on Machine Learning of Natural Language and Ontology, 1991.

Kristian J. Hammond. Case-based planning: A framework for planning from experience. Cognitive Science, 14:385-443, 1990.

Janet L. Kolodner. Reconstructive memory: A computer model. Cognitive Science, 7:281-328, 1983.

Janet L. Kolodner and Robert L. Simpson. The MEDIATOR: Analysis of an early case-based problem solver. Cognitive Science, 13:507-549, 1989.

Michael Lebowitz. Generalization from natural language text. Cognitive Science, 7:1-40, 1983.

Robert McCartney. Reasoning directly from cases in a case-based planner. In Proceedings of the Twelfth Annual Conference of the Cognitive Science Society, pages 101-108, Hillsdale, NJ, 1990. Lawrence Erlbaum Associates.

R. Schank and R. Abelson. Scripts, Plans, Goals, and Understanding. Lawrence Erlbaum Associates, Hillsdale, NJ, 1977.

Roger Schank. Dynamic Memory: A Theory of Reminding and Learning in Computers and People. Cambridge University Press, Cambridge, 1982.

Craig Stanfill and David Waltz. Toward memory-based reasoning. Communications of the ACM, 29(12):1213-1239, 1986.

Roy M. Turner. A schema-based model of adaptive problem solving. Technical Report GIT-ICS-89/42, Georgia Institute of Technology, 1989.

Roland Zito-Wolf. Case-based representations for procedural knowledge. Unpublished doctoral dissertation, 1993.

Roland Zito-Wolf and Richard Alterman. Multicases: A case-based representation for procedural knowledge. In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society, pages 331-336. Lawrence Erlbaum Associates, 1992.
KITSS: A Knowledge-Based Translation System for Scenarios

Van E. Kelly and Mark A. Jones
AT&T Bell Laboratories
Murray Hill, NJ 07974-0636
vek@research.att.com, jones@research.att.com

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Abstract

Machine-assisted language translation systems for technical documents guide humans through a process of selecting and composing variant partial translations. The constrained nature of technical sublanguages makes language processing aids cost-effective to build and use. Analogously, we have developed KITSS, a knowledge-based translation system for converting informal English scenarios of the desired behavior of complex reactive systems into formal, executable test scripts. A trainable parser and reference resolver capture domain-specific linguistic knowledge. A logic analyzer establishes coherence in the translation process in a role comparable to a "story understander". It checks the consistency of each step of a translated test script using a theorem prover, a planner, and logic-encoded background knowledge about the system under test. This helps correct common but serious specification errors, including underspecificity, omitted steps, and even some outright mis-statements. To evaluate how well such technology can scale, we have exercised our technology progressively on a graduated corpus of 100 behavior scenarios spanning 7 advanced calling features for a private telephone switch (PBX), successfully translating 70% into test scripts without any manual post-hoc editing. Our experience with KITSS has enabled us to identify many of the tradeoffs in accommodating informality in specification versus demanding formality from a human agent.

Action: (1.1) Place calls to stations B3 and D1 and make them busy.
        (1.2) Station B1 calls station B2.
        (1.3) B2 does not answer.
Verify: (1.4) Ringback to station B1 is replaced with Call Coverage tone.
Action: (1.5) Wait for CR1 timeout at B1.
Verify: (1.6) Station B1 receives ringback again.
        (1.7) The call keeps ringing at station B2 (the call will not go to coverage since all the covering users are busy).

Figure 1: Sample Behavior Scenario

Introduction

Problem Statement

Engineers who specify complex reactive systems, such as telephone switches, are often unfamiliar with expressive specification formalisms like temporal logics. Instead, they state functional requirements in thousands of stylized English prose scenarios describing linear traces of specific desired behaviors, such as Figure 1. These scenarios form an informal contract with other engineers who will eventually validate the final product. For an initial release of new functionality, it makes economic sense to execute each required scenario manually once or twice, but for later repetitive regression testing, a manual approach is not cost-effective. Informal scenarios must be converted into formal, executable test scripts and maintained in this form. This conversion, in practice, is a laborious and costly process for test engineers, averaging several hours per scenario initially, plus revisions for subsequent product releases. When product features and versions proliferate, the cost of generating and maintaining a library of thousands of scripts only gets worse.

Solution Approach

Because the language used in informal scenarios is not really standard English but an extremely stylized technical subdialect, it seemed promising to try automated natural-language processing to speed the conversion and maintenance processes. In translation of specialized technical documents between two natural languages, say from French to Swedish, customizable interactive translator systems have recently logged clear productivity benefits over manual approaches, especially for restricted technical dialects. These tools do not replace human translators, but recast their role as primarily one of selecting and composing partial translations suggested by the tool. We adopted this approach to structuring human-computer cooperation as one of the major paradigms for our own solution approach. As in most machine translation applications, we began with a large extant corpus which had to be substantially parsed "as we found it" in order to cost-justify our approach. It was not sufficient merely to design a suitably habitable dialect for the expression of future test scenarios.

In machine translation from one natural language to another, a direct translation of individual sentences, or even just of individual phrases and clauses, usually provides at least a marginally intelligible result, but this is not so for our task. First of all, a wide abstraction-level gap must be bridged between the goal-oriented intensionality of scenarios and the procedural extensionality required in executable scripts. Oblique descriptions of actions and objects in scenarios must be identified with concrete operations and devices in a telephone switch test lab. For example, an action such as "Place a call to station B1" involves choosing a station to place the call, going off-hook, mapping "station B1" to its extension, and choosing a method for dialing (dialing digits, speed dialing, last-number redial, etc.). The problems of "extensionalizing" nominal descriptions have been partially addressed by natural language database front-ends, where noun phrases are resolved using known objects and attributes in a database schema (e.g., [Ballard and Stumberger 1986]), although such front-ends do not generally jump nearly as wide an abstraction gap as KITSS must. Nor have database front-ends had to deal with narrative text forms, nor have they faced head-on the history-sensitive scoping of references within narrative discourse structures.

Beyond the problems of bridging abstraction gaps at the level of individual phrases and statements, there are also "discourse-level" semantic hazards in informal scenarios:

- Entire steps can be left out; important timeouts and other "null actions" go unmentioned (e.g., Figure 1, sentences 1.3, 1.7).
- Actions may create "hidden" linguistic referents, such as when explicitly dialing a string of digits implicitly creates a new telephone call.
- Actions and observations can be underspecified, such as in sentence 1.1, where there are several non-equivalent ways to force phones to become "busy".
- Essential initialization steps and boundary conditions remain implicit, as in sentence 1.7, where the "covering users" (i.e., telephone call screeners, probably secretaries) of B2 are mentioned but there were no explicit administrative steps to make B3 and D1 function in that capacity.

Sometimes, there are also just plain mistakes. A human concentrating on the purely linguistic aspects of the translation task is likely to miss at least some of these problems. An automated capability within the scenario translation process must flag such problems and guide humans in correcting them; a story-understanding functionality is needed in addition to linguistic expertise. Although automated story understanding is very difficult in general, a telephony testing domain is, fortunately, much simpler to axiomatize than the "real" world.

An Implementation

KITSS (a.k.a. the Knowledge-based Interactive Test Scripting System) is our prototype system for formalizing and analyzing scenarios of telephone switch behavior for medium-sized private-network switches (PBX's). It was deliberately designed to integrate smoothly with an existing test process, partly automating one manual task without otherwise impacting a test engineer's job. Its translation subsystem guides the conversion of each sentence of an English scenario into an equivalent statement in WIL, a logic-based interlingua previously used in the WATSON system ([Kelly and Nonnenmann 1991]). The new natural-language technology we developed for this task includes an extremely high-performance adaptive statistical chart-parser [Jones and Eisner 1992] and a rule-based phrase converter which performs nominal reference resolution and case-frame normalization. English phrases converted into WIL can be paraphrased back into a stylized pseudo-English. Users can also directly input the pseudo-English forms as an alternative to the full English interface. No human ability to read or write WIL directly is required.

Scenario understanding and analysis is provided by a heavily instrumented "interpreter" which analyzes the translated WIL statements using a coarse black-box simulation of a telephone switch, prototyped using a theorem prover, a telephony domain theory expressed in logic, and a library of stereotypical plans for telephone usage. Whenever a scenario leads to an impossible state, such as trying to answer a telephone call which is not locally present, the interpreter summarizes the anomaly and, where possible, suggests plausible corrections (such as answering the call from another station where it is present). Plausible elaborations of underspecified activities during the scenario, such as making a station become busy, are calculated using a planner and the plan library. Implicit initialization steps and test lab configuration conditions for test execution are also made explicit. As during translation, the analyzer keeps the human user "in the loop" for all modifications and elaborations of the WIL script, and all user interactions use pseudo-English paraphrases of WIL constructs.

A more extensive description of our system architecture, knowledge bases, user interface, and early experimental results was published in [Nonnenmann and Eddy 1992]. That paper also discusses our approach to solving the script maintenance problem for scenarios that have previously been analyzed. In the current paper we focus on the natural language translation technology, its interface with the scenario analyzer, and the lessons learned during the latter half of the project about building and maintaining a large hybrid reasoning system. In the remainder of this paper, we first describe each of the two major functions of KITSS: translating English into WIL and analyzing the translated scenario for anomalies.

Translating English into WIL

The translation of English scenarios into WIL occurs in two stages. Initially, a statistical, trainable chart parser converts English sentences into a case frame structure similar to that of a Lexical-Functional Grammar (LFG) [Kaplan and Bresnan 1982]. The case frame is then translated by a rule-driven semantic resolver into WIL. The semantic resolver performs several tasks during its translation step, including canonicalizing the case frame, handling conjunctions, and resolving definite and indefinite noun phrase references. In many cases, there may be more than one possible WIL reading of an English sentence. This may be due to the inherent ambiguity of natural languages, or to imprecision in the parsing and reference resolution steps. The statistical chart parser computes probabilities which aid in ranking the alternatives that arise as a result of grammatical (but not referential) ambiguity.
The translator, following the highly interactive design philosophy of KITSS, also includes a simple rule-driven facility which "paraphrases" WIL back into English. The choices are ranked by the probabilities from the parser. The human may select one of these translations, or else create a novel one by cut-and-pasting WIL fragments from several of the "near-misses".

Note that we are not claiming to have successfully automated the translation of English into WIL, even for the restricted English subset we use. The conditions of use that make our translation process practical are the very same ones that differentiate past research failures in automatic language translation from recent commercial successes in machine-assisted translation:

- It is not necessary to produce an automatic, correct translation of every sentence.
- Human users get machine help for salvaging something useful out of incorrect "near-miss" translations (e.g., cut-and-paste), and are not unduly penalized for an occasional end-run around the translator and writing directly in the target language (i.e., WIL or its pseudo-English paraphrases).
- The size of each corpus to be translated is large enough to amortize customizing the translator for a particular writing style, vocabulary, and topic of discourse.

Statistical Chart-Parsing

Natural language parsers have traditionally required elaborate hand-crafted grammars which subtly depend on the chosen parsing algorithm. This has made them complex to develop and maintain, even for computational linguists. We needed a parser that could be maintained by our prospective end-users: engineers, not linguists. Instead of a large, pre-specified covering grammar, the KITSS grammar has grown incrementally from training on example sentences from our domain corpus of scenarios.
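Growing a grammar incrementally from annotated training trees, rather than specifying it up front, can be illustrated with a small sketch (a hypothetical illustration in the spirit of the paper, not KITSS code): each tree contributes its context-free rules, and the accumulated counts later support statistical ranking of alternatives.

```python
from collections import Counter

# A training tree is a nested tuple: (label, child, ...); leaves are strings.
# The tree below is an invented, simplified example in the paper's domain.
tree = ("S",
        ("NP", ("NNS", "stations")),
        ("VP", ("VB", "ring")))

def extract_rules(node, counts):
    """Read the context-free rule off each tree node and count its uses."""
    if isinstance(node, str):      # lexical leaf
        return
    label, *children = node
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    counts[(label, rhs)] += 1
    for child in children:
        extract_rules(child, counts)

counts = Counter()
extract_rules(tree, counts)
print(counts[("S", ("NP", "VP"))])   # → 1
```

Feeding every corrected parse tree through such a routine is one simple way a grammar can "grow" with its corpus instead of being hand-crafted in advance.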
The statistical chart-parsing technology used in KITSS was inspired by the recent success of statistical methods in several areas of natural language processing, including part-of-speech tagging, bilingual corpora alignment, and OCR postprocessing. Statistical knowledge can be used very effectively to limit search and to order alternatives. To "train" the parser on a sample set of sentences, a human feeds the parser a parse tree of the sentence (if the parser cannot already parse it correctly). Parse trees, for KITSS, are basically annotated versions of familiar elementary-school "sentence diagrams". The part-of-speech categories in KITSS were taken from the Brown Corpus [Francis and Kucera 1982]. The non-terminal categories are the standard ones in a simple linguistic theory, such as S (sentence), NP (noun phrase), etc. The parser assists in the bracketing process by providing analyses for major sentence constituents that it can find. In addition, there is a facility for guessing parts-of-speech for new vocabulary.

Each node in the parse tree is implicitly identified with some context-free rule. Each non-terminal category also has a predefined semantic type (slot, filler, slot-filler pair) that is used to construct a case-frame representation. Relations such as prepositions (IN) are of type slot. Entities such as noun phrases (NP) or verb phrases (VP) are of type filler. Modifiers such as adjective phrases or prepositional phrases are of type slot-filler pair. To construct the case frame, the parser associates one or more semantic templates with each syntactic rule. The templates play a similar role (without unification) to the functional equations in LFG. For example, the template (@1 :OBJECT ?2) is associated with the rule VP -> V NP. The variable @1 is bound to the semantic interpretation of the first right-hand-side constituent (V).
The template specifies that this interpretation is to be spliced in and followed by the slot :OBJECT and its filler from the interpretation of the NP. A semantic template only needs to be explicitly defined for a rule if it cannot be "guessed" by the parser from the semantic types of the non-terminals and general linguistic knowledge about the heads of syntactic phrases (e.g., from X-bar theory [Jackendoff 1977]). For example, for the syntactic rule VP -> VP PP, the parser will supply a default template of the form (@1 ?2). In most cases, users need not explicitly specify these semantic templates.

The case frame representation includes predicate-argument information and syntactic features such as tense. It is often much "flatter" than a parse tree. Figures 2 and 3 give the parse tree and case frame for the example sentence (1.1), "Place calls to stations B3 and D1 and make them busy".

(S (VP (VP (VP (VB "Place")
               (NP (NNS "calls")))
           (PP (IN "to")
               (NP (NNS "stations")
                   (NPR (NPR "B3") (CC "and") (NPR "D1")))))
       (CC "and")
       (VP (VB "make")
           (SBE (NP (PPO "them")) (JJ "busy")))))

Figure 2: Parse Tree

(AND :CONJ1 (PLACE :OBJECT (CALL :NUMBER PLUR)
                   :TO (STATION :NUMBER PLUR
                                (AND :CONJ1 (:NAME "B3")
                                     :CONJ2 (:NAME "D1"))))
     :CONJ2 (MAKE :PRED (BE :ACTOR (THEM) :ADJ-MOD BUSY)))

Figure 3: Case Frame

Rule-Based Phrase Conversion

The second phase of the translator converts the parser's case frames into WIL logic statements describing precise facts about events in a hypothetical telephone test lab. In contrast to the large number of things that might be said in a document about testing (instructions to testers, running commentary, explanatory headings), there is but a small number of facts specifically relevant to executing a particular test.
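Stepping back to the semantic-template mechanism of the parsing stage, a toy interpreter makes the @/? notation concrete (our own simplification with invented frame shapes, not KITSS code): `@n` splices in the interpretation of the n-th right-hand-side constituent, while `?n` embeds it as a filler.

```python
def interpret(node, templates):
    """Build a case frame bottom-up by applying the semantic template
    associated with each node's context-free rule."""
    if isinstance(node, str):                  # lexical leaf
        return node.upper()
    label, *children = node
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    subs = [interpret(c, templates) for c in children]
    # Default template (@1 ?2 ... ?n): splice the head, append the rest.
    default = ["@1"] + [f"?{i}" for i in range(2, len(subs) + 1)]
    template = templates.get((label, rhs), default)
    frame = []
    for item in template:
        if item.startswith("@"):               # splice constituent's frame
            sub = subs[int(item[1:]) - 1]
            frame.extend(sub if isinstance(sub, list) else [sub])
        elif item.startswith("?"):             # embed constituent as filler
            frame.append(subs[int(item[1:]) - 1])
        else:                                  # literal slot name like :OBJECT
            frame.append(item)
    return frame

# The paper's example rule VP -> V NP with template (@1 :OBJECT ?2):
templates = {("VP", ("V", "NP")): ["@1", ":OBJECT", "?2"]}
tree = ("VP", ("V", "place"), ("NP", ("NNS", "calls")))
print(interpret(tree, templates))   # → ['PLACE', ':OBJECT', ['CALLS']]
```

The output mirrors the flat (PLACE :OBJECT ...) shape of Figure 3, with the verb's interpretation spliced in and the object attached as a filler.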
The tasks of the phrase converter are:

- identifying references to relevant test lab actions or observations in a sentence (normalizing verb concepts),
- determining which pieces of lab apparatus are to participate in the action or observed event (resolving nominal references),
- identifying simple plan-like discourse structures in the text (e.g., sequence and purpose),
- paraphrasing its fully normalized and resolved understanding of each sentence back into an English-like form, and
- accepting corrections from the user in the form of edited versions of near-miss paraphrases.

With various degrees of success, the phrase converter also applies a set of rewrite rules to handle a number of natural language phenomena, including syntactic transformations (e.g., active-passive), semantic transformations (e.g., "button at station X" and "button associated with station X" are equivalent), and collective/distributive readings of conjunctions. These rewrite rules include difficult situations such as the first sentence in Figure 1, where the conjunction "and" should be interpreted as "and thus" rather than "and then". After determining a canonical representation, the nominal descriptions are further resolved into entities in the domain model. Finally, the resulting form is converted into WIL.

For example, in our sample sentence 1.1, the phrase converter identifies the referent of "them" in the second clause, namely "B3 and D1", associates the verbs with specific WIL action predicates (place-call and busy-out-station), and links the referents of the subjects and objects of each clause with the required parameters of each action. Next, it guesses that "and", in this context, does not mean sequence ("and then") but rather purpose ("and thus"), based on the underspecificity of the first clause (calls from whom? how many?). Finally, it reads "make B3 and D1 busy" distributively as "make B3 busy and make D1 busy".
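The distributive rewrite in that last step amounts to copying the action once per conjunct of its object. A minimal sketch (frame shapes simplified from Figure 3, and the rule itself is our illustration, not an actual KITSS rewrite rule):

```python
def distribute(frame):
    """Rewrite (ACTION ... :OBJECT (AND x y)) into one copy of the action
    per conjunct: 'make B3 and D1 busy' becomes 'make B3 busy' and
    'make D1 busy'."""
    action = frame[0]
    slots = dict(zip(frame[1::2], frame[2::2]))
    obj = slots.get(":OBJECT")
    if isinstance(obj, tuple) and obj[0] == "AND":
        copies = []
        for conjunct in obj[1:]:
            copy = [action]
            for slot, filler in slots.items():
                copy += [slot, conjunct if slot == ":OBJECT" else filler]
            copies.append(copy)
        return copies
    return [frame]          # nothing to distribute

frame = ["BUSY-OUT", ":OBJECT", ("AND", "B3", "D1")]
print(distribute(frame))
# → [['BUSY-OUT', ':OBJECT', 'B3'], ['BUSY-OUT', ':OBJECT', 'D1']]
```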
The final (paraphrased) translation of sentence 1.1 into WIL is simply:

Busy-out station B3.
Busy-out station D1.

The underspecified information in the first clause of the sentence blocks its translation into WIL, and so it is dropped from the final translation, as it occupies only a subsidiary discourse role. The user could, of course, re-introduce the missing information directly by typing an explicit WIL paraphrase, but in this case no harm was done.

Analyzing WIL Test Scripts

Converting each English scenario sentence correctly into a WIL formula does not guarantee that the whole sequence of formulas constitutes a valid test script. To repair the anomalies noted earlier (missing steps and null actions, underspecified steps, and implicit boundary conditions), our analysis module performs two different audit techniques on the scenario.

First, a background theory of telephony, about 250 hand-coded temporal logic axioms at present, is used to audit each scenario step in isolation; the logical conjunction of this background knowledge and the scenario step itself (i.e., an action plus follow-on observations) is fed into a theorem prover, which forward-chains additional facts about the step. The background theory primarily describes user-telephone signaling conventions (ringing, tones, pickups, hangups, button-presses, lamp flashes, timeouts) and the procedural building blocks of all modern telephone call control (call origination, acceptance, rejection, bridging, transfer, redirection, drop, hold, etc.). Deliberately excluded from this theory is any formal specification of the particular features being described in our scenarios.

Second, at each step, the scenario so far is compared with a library of hierarchically structured plans for stereotypical telephone usages, currently numbering about 70.
This comparison is complex because telephone usages typically involve multiple agents, and multiple plan executions (e.g., several simultaneously active telephone calls) may be interleaved within a given scenario. Furthermore, plan recognition must be updated after each scenario step in order to synchronize with the translation process, and recognition and plan instantiation (i.e., fleshing out underspecified scenario actions) are freely intermixed. These requirements have forced us to engineer a complex, incremental plan recognition/instantiation algorithm based on propagating disjunctive sets of plan hypotheses regarding each scenario step.

The final desired result of this analysis is a reconstruction of the original scenario as a threaded forest of plan executions, in which every node, whether leaf or interior point, has been examined and elaborated using the background domain theory. As this is being computed, a variety of corrections and clarifications are carried out:

- calculating the detailed "state" of a telephone switch test lab before and after each scenario action, using the background theory of telephony.
- finding "hidden" linguistic referents in the scenario, missed by the phrase converter, by plan recognition. For instance, consider the fragment "B1 goes offhook. B1 dials the extension of B2. B2 answers the call." There is no immediate referent here for "the call"; its identity can only be inferred by matching this fragment against plan knowledge about placing calls.
- deducing desirable but unstated intermediate observations that should be made during the course of a test (such as always making sure a phone is actually ringing before attempting to answer a call).
- determining whether each action is possible and legal in the current interpolated state, and if not, whether one single missing prior step (such as forgetting to hang up a phone, the most common error, or not waiting for a timeout to expire) could account for the discrepancy.
- elaborating details of abstract or underspecified goals (such as how to make a station "busy"). Plan selection is guided by entity types (how many calls a particular class of station can handle simultaneously), the current state (how many calls it is currently handling; which other stations are now free), and pragmatic concerns (whether it is faster to make a station busy with outgoing or incoming calls).
- using the plan library to diagnose missing finalizations in a scenario and to select appropriate error recovery actions. For instance, if some action sequence implicitly activates a feature, that feature should be explicitly deactivated in case of aborting the test.
- deducing any special capabilities or privileges required by any of the participants in the scenario (e.g., the ability to forward telephone calls), which need special administrative setup actions.

Finally, just as for the language translation module, the scenario analyzer had to be designed robustly, so that its own failure to understand, especially on some fairly minor point, does not block KITSS from producing output. It accommodates manual on-the-fly "patching" of scenarios by cut-and-paste of WIL paraphrases, followed by incremental re-auditing. It permits the user to resolve contradictions by explicitly denying facts it has deduced and asserting others.
In the face of massive apparent nonsense, it degrades gracefully, amid much complaining, to a credulous mode where most of its audits, except for plan instantiation, are disabled. This credulous mode persists until KITSS unambiguously recognizes the start of a new plan and resynchronizes itself with the scenario.

Empirical Results and Lessons Learned

We used KITSS experimentally to translate a graduated corpus of 100 scenarios covering seven advanced calling features of a private telephone switch, written by five different authors. A very experienced user of KITSS successfully converted about 70% of these scenarios into executable form, taking only a few minutes apiece, as opposed to hours without machine aid. These 70% required either zero or minimal post-hoc editing of the final output to produce a "perfect" test script, as judged by experienced test engineers. The remaining 30% were evenly divided between those requiring enough manual touch-up to nullify KITSS's productivity advantage over manual conversion, and those for which KITSS provided no useful help at all (commonly due to fatal bugs in the tricky incremental re-auditing code of the analyzer).

For about one-third of the 70% of scenarios successfully processed, KITSS degraded into "credulous" mode part of the time, requiring its human user to interpolate corrections which it otherwise would have provided. In our experiments, human performance on these troublesome cases varied over a three-fold range, even among members of our development team. This shows that although the basic technology and architecture of KITSS may be sound, more work is needed to refine it into an industrial-strength tool for general engineering use.

Emergent Natural Language Technology

The chart parser is one of the clearest research successes on the KITSS project, and the most readily transferable to other application domains.
Evaluated in isolation through split-corpus experiments on a 429-sentence subset from our scenario corpus, the parser, even though still quite obviously undertrained, performed as follows:

- 77% overall chance of parsing a novel sentence correctly.
- 85% chance of success when the sentence contains no new words.
- 96% success in determining the correct parse in cases where any possible parses were found.
- Parsing speed averaging a few seconds per sentence, and linear in the length of the sentence.

Thus, the parser is sufficiently accurate and predictable to be a productivity aid, and it is fast enough to be considered "real-time" for interactive use. Furthermore, most of its non-vocabulary-related failures occurred with long, awkward constructions or embedded parenthetical expressions. Thus, writers can produce more parsable prose just by improving their writing style. Our positive experience with the statistical parser convinced us that natural language parsing technology is poised for much wider exploitation as a computer interface technique. The skills required to prepare a training set for an adaptive parser are much more mundane than those formerly required to write a computational grammar from scratch; most high school students are trained in sentence diagramming, which is a form of parsing.

Although the phrase converter worked, it did not scale up nearly as gracefully as the parser. Nominal reference resolution for physical objects like calls, stations, buttons, and lamps was implemented using classification combined with a simple recency heuristic, and worked reasonably well. The variability of the English language, however, worked against finding any compact set of normalization heuristics for phrases in general. Each new idiom encountered, and each significant variation in verb tense or word order, needs its own normalization rule. The number of these continues to grow linearly with the number of scenarios.
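The classification-plus-recency strategy for nominal reference resolution admits a very compact sketch (the entity representation is invented for illustration; KITSS used a richer domain taxonomy):

```python
def resolve(description, discourse):
    """Resolve a nominal description: filter discourse entities by class
    (classification), then prefer the most recent mention (recency)."""
    candidates = [e for e in discourse if e["class"] == description["class"]]
    return candidates[-1] if candidates else None   # most recent mention last

# Entities in order of mention: a call, then stations B1 and B2.
discourse = [
    {"class": "call", "name": "call-1"},
    {"class": "station", "name": "B1"},
    {"class": "station", "name": "B2"},
]
print(resolve({"class": "call"}, discourse)["name"])      # → call-1
print(resolve({"class": "station"}, discourse)["name"])   # → B2
```

Such a heuristic stays compact precisely because physical objects fall into a small, fixed taxonomy; as the paper notes, no comparably compact rule set emerged for phrase normalization in general.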
It should be noted that the 100 English scenarios which we attempted to process were originally written for humans and not machines. If a system such as KITSS were deployed earlier, during the writing of the scenarios, there would be less inherent variability, and hence less of a problem with extensibility and maintainability in the phrase converter. In the end, we learned to live within the converter's limited repertoire of normalizations by performing manual touch-up on partially normalized sentences. This was still faster and easier than unassisted manual English-to-WIL translation.

Generic Domain Models

The scenario analyzer was the largest component of KITSS, its greatest time bottleneck, and the largest source of lingering program bugs. Notwithstanding the obvious engineering complexity of the tightly coupled, incremental, interruptible, restartable audit routines, the greatest lessons learned from the analyzer derive from its 250-axiom, 70-plan "generic" domain model for modern telephony. While simplified models of old-fashioned telephony are common textbook exercises in formal specification, efforts to formalize modern telephony have been few and mostly unfruitful. To our knowledge, KITSS included the first formal, fully machine-interpretable model of the underpinnings of modern telephony, and it has directly catalyzed further work in this field [Zave and Jackson 1991].

The largest intellectual challenge we faced in structuring this knowledge was separating general, re-usable constraints about telephony from specific knowledge of particular features. For instance, instead of directly describing the operation of the Call Forwarding capability, we considered the general rules and constraints for redirecting calls from one location to another, of which Call Forwarding was but one example.
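The flavor of such a generic constraint can be caricatured with a toy forward-chainer (all predicates and the rule itself are invented for illustration; KITSS's 250-axiom temporal-logic theory is of course far richer): a single redirection rule covers Call Forwarding, Call Coverage, or any other feature that asserts a redirect relation.

```python
def forward_chain(facts, rules):
    """Naive forward chaining: apply every rule until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new_fact in list(rule(facts)):   # materialize before mutating
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

def redirect_rule(facts):
    """Generic constraint: a call ringing at a station with an active
    redirect also rings at the redirect target, whatever feature set it up."""
    for f in list(facts):
        if f[0] == "ringing":
            _, call, station = f
            for g in facts:
                if g[0] == "redirects-to" and g[1] == station:
                    yield ("ringing", call, g[2])

facts = {("ringing", "call-1", "B2"),
         ("redirects-to", "B2", "B3")}   # e.g. established by Call Forwarding
result = forward_chain(facts, [redirect_rule])
print(("ringing", "call-1", "B3") in result)   # → True
```

Nothing in the rule mentions Call Forwarding; the feature only contributes the redirects-to fact, which is the separation the paper describes.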
In a pure logic notation, where each axiom is textually independent, it is especially easy to write constraints that are either overly or insufficiently general, and these have mysterious effects on the advice offered by the analyzer. Our prior work on automated specification induction using the WATSON system ([Kelly and Nonnenmann 1991]) yielded useful heuristics for tracking down and fixing such errors, but we could only apply these manually, since the size of KITSS's domain model was beyond anything WATSON's automated techniques could handle.

Engineering the plan library posed two major challenges. First of all, plans had to do double duty for both recognition and instantiation. For instantiation, they had to be very complete and detailed, but this complicated recognizing a plan from a scenario in the presence of both interference from other interleaved plans (e.g., other telephone calls) and "observational noise" from omitted steps. We encountered a tradeoff between cluttering our plan representation with many explicit recognition cues or using a more complex set of recognition heuristics, which is the course we ultimately chose. Second, in a testing application like KITSS, deliberate plan failures are frequent, such as attempting to complete a telephone call to a busy station. We need to reason about the many ways a plan might fail, without cluttering the plan library with variants for each possible failure mode. Our compromise was to associate failures only with goals, not with individual plans, while relying on the densely hierarchical structure of the plan library (short plans, deeply nested) to help localize the point of plan failure. Again, we accepted more complex but less complete planning algorithms, in the interest of a simpler, clearer domain model.
KITSS, like many other systems that reason about actions, had to handle the "frame problem" and other non-monotonic effects, but the size, scope, and incremental growth of our domain model limited our solution approaches. Since our domain model had to be inspectable by human telephony domain experts, we wanted to keep our logic notation as "clean" as possible, uncluttered by abnormality predicates or other forms solely intended to guide non-monotonic inference. After experimentation, we solved our frame problem extra-logically, by defining an arbitrary persistence partial order on state-predicates, enforced by controlling the order in which clauses were fed to the theorem prover. While this approach is not state-of-the-art, it worked for our domain. We would have used a more modern, general technique, had we found one appropriate for our logic that did not entail massive obfuscation of the domain knowledge base and unacceptable slowdown of the analyzer. We suggest that more empirical scaling research is needed on the practical computational demands of the currently favored methods of reasoning about actions and state.

Acknowledgements

Many individuals contributed to KITSS. Mark Jones wrote the parser, with help from his summer student, Jason Eisner. Robert Hall built the phrase converter. Van Kelly is responsible for the analyzer and its domain model of generic telephony. John Eddy, an experienced test engineer, was our in-house domain expert and designed our knowledge base ontology and taxonomic schema. Uwe Nonnenmann built the user interface and integration-tested all our code. We also wish to thank Bruce Ballard, Lori Alperin Resnick, Tom Kirk, and Jim Piccarello for their technical assistance. This project would not have been possible without far-sighted management support and encouragement from G. D. Bergland, Ron Brachman, and Jim Shanley.

References

Ballard, B. and Stumberger, D. 1986.
Semantic Acquisition in TELI: A Transportable, User-Customizable Natural Language Processor. In ACL-24 Proceedings, pp. 20-29. Association for Computational Linguistics.

Francis, W. and Kucera, H. 1982. Frequency Analysis of English Usage. Boston: Houghton Mifflin.

Jackendoff, R. 1977. X-bar Syntax: A Study of Phrase Structure. Cambridge, MA: MIT Press.

Jones, M.A. and Eisner, J.E. 1992. A Probabilistic Parser Applied to Software Testing Documents. In Proc. of the 10th National Conference on Artificial Intelligence, pp. 322-328. San Jose, CA: AAAI Press.

Kaplan, R. and Bresnan, J. 1982. Lexical-Functional Grammar: A Formal System for Grammatical Representation. In The Mental Representation of Grammatical Relations, pp. 173-281. New York: John Wiley & Sons.

Kelly, V. and Nonnenmann, U. 1991. Reducing the Complexity of Specification Acquisition. In Automating Software Design, pp. 41-64. Menlo Park, CA: AAAI Press.

Kelly, V. and Nonnenmann, U. 1987. Inferring Formal Software Specifications from Episodic Descriptions. In Proc. of the Sixth National Conference on Artificial Intelligence, pp. 127-132. Menlo Park, CA: AAAI Press.

Nonnenmann, U. and Eddy, J.K. 1992. KITSS - A Functional Software Testing System Using a Hybrid Domain Model. In Proc. of the 8th IEEE Conference on Artificial Intelligence Applications. Monterey, CA: IEEE.

Zave, P. and Jackson, M. 1991. Techniques for Partial Specification and Specification of Switching Systems. In VDM '91: Formal Software Development Methods (Proceedings of the Fourth International Symposium of VDM Europe), pp. 511-525. Springer-Verlag. ISBN 3-540-54834-3.
Ellen Riloff
Department of Computer Science, University of Massachusetts, Amherst, MA 01003
riloff@cs.umass.edu

Abstract

Knowledge-based natural language processing systems have achieved good success with certain tasks, but they are often criticized because they depend on a domain-specific dictionary that requires a great deal of manual knowledge engineering. This knowledge engineering bottleneck makes knowledge-based NLP systems impractical for real-world applications because they cannot be easily scaled up or ported to new domains. In response to this problem, we developed a system called AutoSlog that automatically builds a domain-specific dictionary of concepts for extracting information from text. Using AutoSlog, we constructed a dictionary for the domain of terrorist event descriptions in only 5 person-hours. We then compared the AutoSlog dictionary with a hand-crafted dictionary that was built by two highly skilled graduate students and required approximately 1500 person-hours of effort. We evaluated the two dictionaries using two blind test sets of 100 texts each. Overall, the AutoSlog dictionary achieved 98% of the performance of the hand-crafted dictionary. On the first test set, the AutoSlog dictionary obtained 96.3% of the performance of the hand-crafted dictionary. On the second test set, the overall scores were virtually indistinguishable, with the AutoSlog dictionary achieving 99.7% of the performance of the hand-crafted dictionary.

Introduction

Knowledge-based natural language processing (NLP) systems have demonstrated strong performance for information extraction tasks in limited domains [Lehnert and Sundheim, 1991; MUC-4 Proceedings, 1992]. But enthusiasm for their success is often tempered by real-world concerns about portability and scalability. Knowledge-based NLP systems depend on a domain-specific dictionary that must be carefully constructed for each domain.
Building this dictionary is typically a time-consuming and tedious process that requires many person-hours of effort by highly skilled people who have extensive experience with the system. Dictionary construction is therefore a major knowledge engineering bottleneck that needs to be addressed in order for information extraction systems to be portable and practical for real-world applications.

We have developed a program called AutoSlog that automatically constructs a domain-specific dictionary for information extraction. Given a training corpus, AutoSlog proposes a set of dictionary entries that are capable of extracting the desired information from the training texts. If the training corpus is representative of the targeted texts, the dictionary created by AutoSlog will achieve strong performance for information extraction from novel texts. Given a training set from the MUC-4 corpus, AutoSlog created a dictionary for the domain of terrorist events that achieved 98% of the performance of a hand-crafted dictionary on 2 blind test sets. We estimate that the hand-crafted dictionary required approximately 1500 person-hours to build. In contrast, the AutoSlog dictionary was constructed in only 5 person-hours. Furthermore, constructing a dictionary by hand requires a great deal of training and experience, whereas a dictionary can be constructed using AutoSlog with only minimal training.

We will begin with an overview of the information extraction task and the MUC-4 performance evaluation that motivated this work. Next, we will describe AutoSlog, explain how it proposes dictionary entries for a domain, and show examples of dictionary definitions that were constructed by AutoSlog. Finally, we will present empirical results that demonstrate AutoSlog’s success at automatically creating a dictionary for the domain of terrorist event descriptions.
Information Extraction from Text

Extracting information from text is a challenging task for natural language processing researchers as well as a key problem for many real-world applications. In the last few years, the NLP community has made substantial progress in developing systems that can achieve good performance on information extraction tasks for limited domains. As opposed to in-depth natural language processing, information extraction is a more focused and goal-oriented task. For example, the MUC-4 task was to extract information about terrorist events, such as the names of perpetrators, victims, instruments, etc.

Our approach to information extraction is based on a technique called selective concept extraction. Selective concept extraction is a form of text skimming that selectively processes relevant text while effectively ignoring surrounding text that is thought to be irrelevant to the domain. The work presented here is based on a conceptual sentence analyzer called CIRCUS [Lehnert, 1990].

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

To extract information from text, CIRCUS relies on a domain-specific dictionary of concept nodes. A concept node is essentially a case frame that is triggered by a lexical item and activated in a specific linguistic context. Each concept node definition contains a set of enabling conditions, which are constraints that must be satisfied in order for the concept node to be activated. For example, our dictionary for the terrorism domain contains a concept node called $kidnap-passive$ that extracts information about kidnapping events. This concept node is triggered by the word “kidnapped” and has enabling conditions that allow it to be activated only in the context of a passive construction. As a result, this concept node is activated by phrases such as “was kidnapped”, “were kidnapped”, etc.
Similarly, the dictionary contains a second concept node called $kidnap-active$, which is also triggered by the word “kidnapped” but has enabling conditions that allow it to be activated only in the context of an active construction, such as “terrorists kidnapped the mayor”.

In addition, each concept node definition contains a set of slots to extract information from the surrounding context. In the terrorism domain, concept nodes have slots for perpetrators, victims, instruments, etc. Each slot has a syntactic expectation and a set of hard and soft constraints for its filler. The syntactic expectation specifies where the filler is expected to be found in the linguistic context. For example, $kidnap-passive$ contains a victim slot that expects its filler to be found as the subject of the clause, as in “the mayor was kidnapped”. The slot constraints are selectional restrictions for the slot filler. The hard constraints must be satisfied in order for the slot to be filled; the soft constraints merely suggest semantic preferences for the slot filler, so the slot may be filled even if a soft constraint is violated.

Given a sentence as input, CIRCUS generates a set of instantiated concept nodes as its output. If multiple triggering words appear in a sentence, then CIRCUS can generate multiple concept nodes for that sentence. However, if no triggering words are found in a sentence, then CIRCUS will generate no output for that sentence.

The concept node dictionary is at the heart of selective concept extraction. Since concept nodes are CIRCUS’ only output for a text, a good concept node dictionary is crucial for effective information extraction. The UMass/MUC-4 system [Lehnert et al., 1992a] used 2 dictionaries: a part-of-speech lexicon containing 5436 lexical definitions, including semantic features for domain-specific words, and a dictionary of 389 concept node definitions for the domain of terrorist event descriptions.
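The case-frame behavior described above (a triggering word, enabling conditions on voice, and a slot with a syntactic expectation) can be sketched in Python. Everything here is illustrative: CIRCUS performs a full conceptual sentence analysis, so the flat dictionary standing in for a parsed clause, and the names `ConceptNode` and `activate`, are assumptions of this sketch, not the actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConceptNode:
    name: str          # e.g. "$kidnap-passive$"
    trigger: str       # lexical item that triggers the node
    voice: str         # enabling condition: "active" or "passive"
    slot_name: str     # e.g. "victim"
    slot_source: str   # syntactic expectation: "subject", "dobj", ...
    event_type: str    # e.g. "kidnapping"

    def activate(self, clause: dict) -> Optional[dict]:
        """Instantiate the node if the trigger and enabling conditions hold."""
        if clause.get("verb") != self.trigger:
            return None          # no triggering word in this clause
        if clause.get("voice") != self.voice:
            return None          # enabling condition violated
        filler = clause.get(self.slot_source)
        if filler is None:
            return None          # syntactic expectation not met
        # (Hard/soft selectional constraints on the filler are omitted here.)
        return {"type": self.event_type, self.slot_name: filler}

kidnap_passive = ConceptNode(
    name="$kidnap-passive$", trigger="kidnapped", voice="passive",
    slot_name="victim", slot_source="subject", event_type="kidnapping")

clause = {"subject": "the mayor", "verb": "kidnapped", "voice": "passive"}
print(kidnap_passive.activate(clause))
# {'type': 'kidnapping', 'victim': 'the mayor'}
```

An active-voice clause with the same verb fails the voice check and yields nothing, which mirrors the division of labor between $kidnap-passive$ and $kidnap-active$ in the text.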
The concept node dictionary was manually constructed by 2 graduate students who had extensive experience with CIRCUS, and we estimate that it required approximately 1500 person-hours of effort to build.

The MUC-4 Task and Corpus

In 1992, the natural language processing group at the University of Massachusetts participated in the Fourth Message Understanding Conference (MUC-4). MUC-4 was a competitive performance evaluation sponsored by DARPA to evaluate the state-of-the-art in text analysis systems. Seventeen sites from both industry and academia participated in MUC-4. The task was to extract information about terrorist events in Latin America from newswire articles. Given a text, each system was required to fill out a template for each terrorist event described in the text. If the text described multiple terrorist events, then one template had to be completed for each event. If the text did not mention any terrorist events, then no templates needed to be filled out. A template is essentially a large case frame with a set of pre-defined slots for each piece of information that should be extracted from the text. For example, the MUC-4 templates contained slots for perpetrators, human targets, physical targets, etc. A training corpus of 1500 texts and instantiated templates (answer keys) for each text were made available to the participants for development purposes. The texts were selected by keyword search from a database of newswire articles. Although each text contained a keyword associated with terrorism, only about half of the texts contained a specific reference to a relevant terrorist incident.

Behind the Design of AutoSlog

Two observations were central to the design of AutoSlog. The first observation is that news reports follow certain stylistic conventions. In particular, the most important facts about a news event are typically reported during the initial event description; details and secondary information are described later.
It follows that the first reference to a major component of an event (e.g., a victim or perpetrator) usually occurs in a sentence that describes the event. For example, a story about a kidnapping of a diplomat will probably mention that the diplomat was kidnapped before it reports secondary information about the diplomat’s family, etc. This observation is key to the design of AutoSlog. AutoSlog operates under the assumption that the first reference to a targeted piece of information is most likely where the relationship between that information and the event is made explicit.

Once we have identified the first sentence that contains a specific piece of information, we must determine which words or phrases should activate a concept node to extract the information. The second key observation behind AutoSlog is that the immediate linguistic context surrounding the targeted information usually contains the words or phrases that describe its role in the event. For example, consider the sentence “A U.S. diplomat was kidnapped by FMLN guerrillas today”. This sentence contains two important pieces of information about the kidnapping: the victim (“U.S. diplomat”) and the perpetrator (“FMLN guerrillas”). In both cases, the word “kidnapped” is the key word that relates them to the kidnapping event. In its passive form, we expect the subject of the verb “kidnapped” to be a victim and we expect the prepositional phrase beginning with “by” to contain a perpetrator. The word “kidnapped” specifies the roles of the people in the kidnapping and is therefore the most appropriate word to trigger a concept node.

AutoSlog relies on a small set of heuristics to determine which words and phrases are likely to activate useful concept nodes. In the next section, we will describe these heuristics and explain how AutoSlog generates complete concept node definitions.
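The first-reference assumption amounts to a simple lookup: scan the sentences of a text in order and return the first one containing the targeted string. The function name and the naive period-based splitting below are assumptions made for illustration; a real system would use proper sentence segmentation (naive splitting mishandles abbreviations such as “U.S.”).

```python
def first_sentence_with(text: str, target: str):
    """Return the first sentence containing the targeted string, or None.

    Naive period-based splitting stands in for real sentence segmentation.
    """
    for sentence in text.split("."):
        if target.lower() in sentence.lower():
            return sentence.strip()
    return None

story = ("Guerrillas attacked the capital yesterday. "
         "A diplomat was kidnapped by armed men. "
         "The diplomat's family was informed this morning.")
print(first_sentence_with(story, "diplomat"))
# A diplomat was kidnapped by armed men
```

Note that the sentence returned for “diplomat” is the event description, not the later sentence about the diplomat's family, which is exactly the behavior the stylistic observation predicts.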
Automated Dictionary Construction

Given a set of training texts and their associated answer keys, AutoSlog proposes a set of concept node definitions that are capable of extracting the information in the answer keys from the texts. Since the concept node definitions are general in nature, we expect that many of them will be useful for extracting information from novel texts as well.

The algorithm for constructing concept node definitions is as follows. Given a targeted piece of information as a string from a template, AutoSlog finds the first sentence in the text that contains the string. This step is based on the observation noted earlier that the first reference to an object is likely to be the place where it is related to the event. The sentence is then handed over to CIRCUS, which generates a conceptual analysis of the sentence. Using this analysis, AutoSlog identifies the first clause in the sentence that contains the string. A set of heuristics is applied to the clause to suggest a good conceptual anchor point for a concept node definition. If none of the heuristics is satisfied, then AutoSlog searches for the next sentence in the text that contains the targeted information, and the process is repeated.

The conceptual anchor point heuristics are the most important part of AutoSlog. A conceptual anchor point is a word that should activate a concept; in CIRCUS, this is a triggering word. Each heuristic looks for a specific linguistic pattern in the clause surrounding the targeted string. The linguistic pattern represents a phrase or set of phrases that are likely to be good for activating a concept node. If a heuristic successfully identifies its pattern in the clause, then it generates two things: (1) a conceptual anchor point and (2) a set of enabling conditions to recognize the complete pattern. For example, suppose AutoSlog is given the clause “the diplomat was kidnapped” along with “the diplomat” as the targeted string.
The string appears as the subject of the clause and is followed by a passive verb “kidnapped”, so a heuristic that recognizes the pattern <subject> passive-verb is satisfied. The heuristic returns the word “kidnapped” as the conceptual anchor point along with enabling conditions that require a passive construction.

To build the actual concept node definition, the conceptual anchor point is used as its triggering word, and the enabling conditions are included to ensure that the concept node is activated only in response to the desired linguistic pattern. For the example above, the final concept node will be activated by phrases such as “was kidnapped”, “were kidnapped”, “have been kidnapped”, etc.

The current version of AutoSlog contains 13 heuristics, each designed to recognize a specific linguistic pattern. These patterns are shown below, along with examples that illustrate how they might be found in a text. The bracketed item shows the syntactic constituent where the string was found, which is used for the slot expectation (<dobj> is the direct object and <np> is the noun phrase following a preposition). In the examples on the right, the bracketed item is a slot name that might be associated with the filler (e.g., the subject is a victim). The underlined word is the conceptual anchor point that is used as the triggering word.
Linguistic Pattern              Example
<subject> passive-verb          <victim> was murdered
<subject> active-verb           <perpetrator> bombed
<subject> verb infinitive       <perpetrator> attempted to kill
<subject> auxiliary noun        <victim> was victim
passive-verb <dobj>[1]          killed <victim>
active-verb <dobj>              bombed <target>
infinitive <dobj>               to kill <victim>
verb infinitive <dobj>          threatened to attack <target>
gerund <dobj>                   killing <victim>
noun auxiliary <dobj>           fatality was <victim>
noun prep <np>                  bomb against <target>
active-verb prep <np>           killed with <instrument>
passive-verb prep <np>          was aimed at <target>

Several additional parts of a concept node definition must be specified: a slot to extract the information[2], hard and soft constraints for the slot, and a type. The syntactic constituent in which the string was found is used for the slot expectation. In the previous example, the string was found as the subject of the clause, so the concept node is defined with a slot that expects its filler to be the subject of the clause.

The name of the slot (e.g., victim) comes from the template slot where the information was originally found. In order to generate domain-dependent concept nodes, AutoSlog requires three domain specifications. One of these specifications is a set of mappings from template slots to concept node slots. For example, information found in the human target slot of a template maps to a victim slot in a concept node. The second set of domain specifications consists of hard and soft constraints for each type of concept node slot, for example constraints to specify a legitimate victim. Each concept node also has a type. Most concept nodes accept the event types that are found in the template (e.g., bombing, kidnapping, etc.), but sometimes we want to use special types. The third set of domain specifications consists of mappings from template types to concept node types.
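Two of the thirteen patterns can be sketched as a heuristic that, given a simplified clause analysis and the syntactic role where the targeted string was found, proposes a conceptual anchor point and enabling conditions. The flat clause representation and the function name are assumptions of this sketch; AutoSlog works from CIRCUS's full conceptual analysis of the sentence.

```python
def propose_anchor(clause: dict, target_role: str):
    """Sketch of two heuristics: <subject> passive-verb and
    <subject> verb infinitive. Returns (trigger, enabling_conditions),
    or None when no pattern applies."""
    if target_role == "subject":
        if clause.get("voice") == "passive":
            # <subject> passive-verb, e.g. "<victim> was kidnapped":
            # the verb itself is the anchor; require a passive construction.
            return clause["verb"], ["passive"]
        if clause.get("voice") == "active" and clause.get("infinitive"):
            # <subject> verb infinitive, e.g. "<perpetrator> threatened to
            # murder": the infinitive is the anchor; the conditions demand
            # the preceding "<verb> to" in an active construction.
            return clause["infinitive"], ["active",
                                          ("preceded-by", "to", clause["verb"])]
    return None

print(propose_anchor({"verb": "kidnapped", "voice": "passive"}, "subject"))
# ('kidnapped', ['passive'])
print(propose_anchor({"verb": "threatened", "infinitive": "murder",
                      "voice": "active"}, "subject"))
# ('murder', ['active', ('preceded-by', 'to', 'threatened')])
```

The second call reproduces the "threatened to murder" example discussed later in the text: the anchor is "murder", and the enabling conditions require the preceding words "threatened to" in an active construction.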
In general, if the targeted information was found in a kidnapping template, then we use “kidnapping” as the concept node type. However, for the terrorism domain we used special types for information from the perpetrator and instrument template slots, because perpetrators and instruments often appear in sentences that do not describe the nature of the event (e.g., “The FMLN claimed responsibility” could refer to a bombing, kidnapping, etc.).

Sample Concept Node Definitions

To illustrate how this whole process comes together, we will show some examples of concept node definitions generated by AutoSlog. Figure 1 shows a relatively simple concept node definition that is activated by phrases such as “was bombed”, “were bombed”, etc. AutoSlog created this definition in response to the input string “public buildings”, which was found in the physical target slot of a bombing template from text DEV-MUC4-0657. Figure 1 shows the first sentence in the text that contains the string “public buildings”. When CIRCUS analyzed the sentence, it identified “public buildings” as the subject of the first clause. The heuristic for the pattern <subject> passive-verb then generated this concept node using the word “bombed” as its triggering word, along with enabling conditions that require a passive construction. The concept node contains a single variable slot[3] which expects its filler to be the subject of the clause (*S*) and labels it as a target because the string came from the physical target template slot. The constraints for physical targets are pulled in from the domain specifications.

[1] In principle, passive verbs should not have objects. However, we included this pattern because CIRCUS occasionally confused active and passive constructions.
[2] In principle, concept nodes can have multiple slots to extract multiple pieces of information. However, all of the concept nodes generated by AutoSlog have only a single slot.
Finally, the concept node is given the type bombing because the input string came from a bombing template.

CONCEPT NODE
Name: target-subject-passive-verb-bombed
Trigger: bombed
Variable Slots: (target (*S* 1))
Constraints: (class phys-target *S*)
Constant Slots: (type bombing)
Enabling Conditions: ((passive))

Figure 1: A good concept node definition

Figure 2 shows an example of a good concept node that has more complicated enabling conditions. In this case, CIRCUS found the targeted string “guerrillas” as the subject of the first clause, but this time a different heuristic fired. The heuristic for the pattern <subject> verb infinitive matched the phrase “threatened to murder” and generated a concept node with the word “murder” as its trigger, combined with enabling conditions that require the preceding words “threatened to”, where “threatened” is in an active construction. The concept node has a slot that expects its filler to be the subject of the clause and expects it to be a perpetrator (because the slot filler came from a perpetrator template slot). The constraints associated with perpetrators are incorporated, and the concept node is assigned the type “perpetrator” because our domain specifications map the perpetrator template slots to perpetrator types in concept nodes. Note that this concept node does not extract the direct object of “threatened to murder” as a victim. We would need a separate concept node definition to pick up the victim.

[3] Variable slots are slots that extract information. Constant slots have pre-defined values that are used by AutoSlog only to specify the concept node type.

CONCEPT NODE
Name: perpetrator-subject-verb-infinitive-threatened-to-murder
Trigger: murder
Variable Slots: (perpetrator (*S* 1))
Constraints: (class perpetrator *S*)
Constant Slots: (type perpetrator)
Enabling Conditions: ((active) (trigger-preceded-by?
'to 'threatened))

Figure 2: Another good concept node definition

Although the preceding definitions were clearly useful for the domain of terrorism, many of the definitions that AutoSlog generates are of dubious quality. Figure 3 shows an example of a bad definition. AutoSlog finds the input string, “gilberto molasco”, as the direct object of the first clause and constructs a concept node that is triggered by the word “took” as an active verb. The concept node expects a victim as the direct object and has the type kidnapping. Although this concept node is appropriate for this sentence, in general we do not want to generate a kidnapping concept node every time we see the word “took”.

Id: DEV-MUC4-1192
Slot filler: “gilberto molasco”
Sentence: (they took 2-year-old gilberto molasco, son of patricio rodriguez, and 17-year-old andres argueta, son of ernesto argueta.)

CONCEPT NODE
Name: victim-active-verb-dobj-took
Trigger: took
Variable Slots: (victim (*DOBJ* 1))
Constraints: (class victim *DOBJ*)
Constant Slots: (type kidnapping)
Enabling Conditions: ((active))

Figure 3: A bad concept node definition

AutoSlog generates bad definitions for many reasons, such as (a) when a sentence contains the targeted string but does not describe the event (i.e., our first observation mentioned earlier does not hold), (b) when a heuristic proposes the wrong conceptual anchor point, or (c) when CIRCUS incorrectly analyzes the sentence. These potentially dangerous definitions prompted us to include a human in the loop to weed out bad concept node definitions. In the following section, we explain our evaluation procedure and present empirical results.

To evaluate AutoSlog, we created a dictionary for the domain of terrorist event descriptions using AutoSlog and compared it with the hand-crafted dictionary that we used in MUC-4. As our training data, we used 1500 texts and their associated answer keys from the MUC-4 corpus.
Our targeted information was the slot fillers from six MUC-4 template slots that contained string fills which could be easily mapped back to the text. We should emphasize that AutoSlog does not require or even make use of these complete template instantiations. AutoSlog needs only an annotated corpus of texts in which the targeted information is marked and annotated with a few semantic tags denoting the type of information (e.g., victim) and type of event (e.g., kidnapping). The 1258 answer keys for these 1500 texts contained 4780 string fillers, which were given to AutoSlog as input along with their corresponding texts.[4]

In response to these strings, AutoSlog generated 1237 concept node definitions. AutoSlog does not necessarily generate a definition for every string filler, for example if it has already created an identical definition, if no heuristic applies, or if the sentence analysis goes wrong.

As we mentioned earlier, not all of the concept node definitions proposed by AutoSlog are good ones. Therefore we put a human in the loop to filter out definitions that might cause trouble. An interface displayed each dictionary definition proposed by AutoSlog to the user and asked him to put each definition into one of two piles: the “keeps” or the “edits”. The “keeps” were good definitions that could be added to the permanent dictionary without alteration.[5] The “edits” were definitions that required additional editing to be salvaged, were obviously bad, or were of questionable value. It took the user 5 hours to sift through all of the definitions. The “keeps” contained 450 definitions, which we used as our final concept node dictionary.

Finally, we compared the resulting concept node dictionary[6] with the hand-crafted dictionary that we used for MUC-4. To ensure a clean comparison, we tested the AutoSlog dictionary using the official version of our UMass/MUC-4 system.
The resulting “AutoSlog” system was identical to the official UMass/MUC-4 system except that we replaced the hand-crafted concept node dictionary with the new AutoSlog dictionary. We evaluated both systems on the basis of two blind test sets of 100 texts each. These were the TST3 and TST4 texts that were used in the final MUC-4 evaluation. We scored the output generated by both systems using the MUC-4 scoring program. The results for both systems are shown in Table 1.[7]

Recall refers to the percentage of the correct answers that the system successfully extracted, and precision refers to the percentage of answers extracted by the system that were actually correct. The F-measure is a single measure that combines recall and precision, in this case with equal weighting. These are all standard measures used in the information retrieval community that were adopted for the final evaluation in MUC-4.

[4] Many of the slots contained several possible strings (“disjuncts”), any one of which is a legitimate filler. AutoSlog finds the first sentence that contains any of these strings.
[5] The only exception is that the user could change the concept node type if that was the only revision needed.
[6] We augmented the AutoSlog dictionary with 4 meta-level concept nodes from the hand-crafted dictionary before the final evaluation. These were special concept nodes that recognized textual cues for discourse analysis only.
[7] The results in Table 1 do not correspond to our official MUC-4 results because we used “batch” scoring and an improved version of the scoring program for the experiments described here.

Table 1: Comparative Results

The official UMass/MUC-4 system was among the top-performing systems in MUC-4 [Lehnert et al., 1992b], and the results in Table 1 show that the AutoSlog dictionary achieved almost the same level of performance as the hand-crafted dictionary on both test sets.
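The equally weighted F-measure used in the MUC-4 evaluation is the beta = 1 case of the standard formula combining recall and precision. A minimal sketch follows; the recall and precision values in the example are hypothetical, since the numbers in Table 1 did not survive extraction.

```python
def f_measure(recall: float, precision: float, beta: float = 1.0) -> float:
    """F-measure combining recall and precision; beta=1 weights them
    equally (the harmonic mean), as in the MUC-4 evaluation."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return ((1 + beta ** 2) * precision * recall) / (beta ** 2 * precision + recall)

# Hypothetical recall/precision scores, for illustration only.
print(round(f_measure(recall=0.46, precision=0.56), 4))
# 0.5051
```

Because the harmonic mean penalizes imbalance, a system cannot compensate for very low recall with high precision (or vice versa), which is why a single F-measure is a reasonable basis for comparing the two dictionaries.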
Comparing F-measures, we see that the AutoSlog dictionary achieved 96.3% of the performance of our hand-crafted dictionary on TST3, and 99.7% of the performance of the official MUC-4 system on TST4. For TST4, the F-measures were virtually indistinguishable, and the AutoSlog dictionary actually achieved better precision than the original hand-crafted dictionary. We should also mention that we augmented the hand-crafted dictionary with 76 concept nodes created by AutoSlog before the final MUC-4 evaluation. These definitions improved the performance of our official system by filling gaps in its coverage. Without these additional concept nodes, the AutoSlog dictionary would likely have shown even better performance relative to the MUC-4 dictionary.

Conclusions

In previous experiments, AutoSlog produced a concept node dictionary for the terrorism domain that achieved 90% of the performance of our hand-crafted dictionary [Riloff and Lehnert, 1993]. There are several possible explanations for the improved performance we see here. First, the previous results were based on an earlier version of AutoSlog. Several improvements have been made to AutoSlog since then. Most notably, we added 5 new heuristics to recognize additional linguistic patterns. We also made a number of improvements to the CIRCUS interface and other parts of the system that eliminated many bad definitions[8] and generally produced better results. Another important factor was the human in the loop. We used the same person in both experiments but, as a result, he was more experienced the second time. As evidence, he finished the filtering task in only 5 hours whereas it took him 8 hours the first time.[9]

[8] The new version of AutoSlog generated 119 fewer definitions than the previous version even though it was given 794 additional string fillers as input. Even so, this smaller dictionary produced better results than the larger one constructed by the earlier system.
[9] For the record, the user had some experience with CIRCUS but was not an expert.

AutoSlog is different from other lexical acquisition systems in that most techniques depend on a “partial lexicon” as a starting point (e.g., [Carbonell, 1979; Granger, 1977; Jacobs and Zernik, 1988]). These systems construct a definition for a new word based on the definitions of other words in the sentence or surrounding context. AutoSlog, however, constructs new dictionary definitions completely from scratch and depends only on a part-of-speech lexicon, which can be readily obtained in machine-readable form.

Since AutoSlog creates dictionary entries from scratch, our approach is related to one-shot learning. For example, explanation-based learning (EBL) systems [DeJong and Mooney, 1986; Mitchell et al., 1986] create complete concept representations in response to a single training instance. This is in contrast to learning techniques that incrementally build a concept representation in response to multiple training instances (e.g., [Cardie, 1992; Fisher, 1987; Utgoff, 1988]). However, explanation-based learning systems require an explicit domain theory, which may not be available or practical to obtain. AutoSlog does not need any such domain theory, although it does require a few simple domain specifications to build domain-dependent concept nodes.

On the other hand, AutoSlog is critically dependent on a training corpus of texts and targeted information. We used the MUC-4 answer keys as training data but, as we noted earlier, AutoSlog does not need these complete template instantiations. AutoSlog would be just as happy with an “annotated” corpus in which the information is marked and tagged with event and type designations. NLP systems often rely on other types of tagged corpora, for example part-of-speech tagging or phrase structure bracketing (e.g., the Brown Corpus [Francis and Kucera, 1982] and the Penn Treebank [Marcus et al.]).
However, corpus tagging for automated dictionary construction is less demanding than other forms of tagging because it is smaller in scope. For syntactic tagging, every word or phrase must be tagged whereas, for AutoSlog, only the targeted information needs to be tagged. Sentences, paragraphs, and even texts that are irrelevant to the domain can be effectively ignored.

We have demonstrated that automated dictionary construction is a viable alternative to manual knowledge engineering. In 5 person-hours, we created a dictionary that achieves 98% of the performance of a hand-crafted dictionary that required 1500 person-hours to build. Since our approach still depends on a manually encoded training corpus, we have not yet eliminated the knowledge engineering bottleneck. But we have significantly changed the nature of the bottleneck by transferring it from the hands of NLP experts to novices. Our knowledge engineering demands can be met by anyone familiar with the domain. Knowledge-based NLP systems will be practical for real-world applications only when their domain-dependent dictionaries can be constructed automatically. Our approach to automated dictionary construction is a significant step toward making information extraction systems scalable and portable to new domains.

Acknowledgments

We would like to thank David Fisher for designing and programming the AutoSlog interface and Stephen Soderland for being our human in the loop. This research was supported by the Office of Naval Research Contract N00014-92-J-1427 and NSF Grant no. EEC-9209623, State/Industry/University Cooperative Research on Intelligent Information Retrieval.

References

Carbonell, J. G. 1979. Towards a Self-Extending Parser. In Proceedings of the 17th Meeting of the Association for Computational Linguistics. 3-7.

Cardie, C. 1992. Learning to Disambiguate Relative Pronouns. In Proceedings of the Tenth National Conference on Artificial Intelligence. 38-43.

DeJong, G. and Mooney, R.
1986. Explanation-Based Learning: An Alternative View. Machine Learning 1:145-176. Fisher, D. H. 1987. Knowledge Acquisition Via Incremental Conceptual Clustering. Machine Learning 2:139-172. Francis, W. and Kucera, H. 1982. Frequency Analysis of English Usage. Houghton Mifflin, Boston, MA. Granger, R. H. 1977. FOUL-UP: A Program that Figures Out Meanings of Words from Context. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence. 172-178. Jacobs, P. and Zernik, U. 1988. Acquiring Lexical Knowledge from Text: A Case Study. In Proceedings of the Seventh National Conference on Artificial Intelligence. 739-744. Lehnert, W. 1990. Symbolic/Subsymbolic Sentence Analysis: Exploiting the Best of Two Worlds. In Barnden, J. and Pollack, J., editors 1990, Advances in Connectionist and Neural Computation Theory, Vol. 1. Ablex Publishers, Norwood, NJ. 135-164. Lehnert, W.; Cardie, C.; Fisher, D.; McCarthy, J.; Riloff, E.; and Soderland, S. 1992a. University of Massachusetts: Description of the CIRCUS System as Used for MUC-4. In Proceedings of the Fourth Message Understanding Conference (MUC-4). 282-288. Lehnert, W.; Cardie, C.; Fisher, D.; McCarthy, J.; Riloff, E.; and Soderland, S. 1992b. University of Massachusetts: MUC-4 Test Results and Analysis. In Proceedings of the Fourth Message Understanding Conference (MUC-4). 151-158. Lehnert, W. G. and Sundheim, B. 1991. A Performance Evaluation of Text Analysis Technologies. AI Magazine 12(3):81-94. Marcus, M.; Santorini, B.; and Marcinkiewicz, M. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics. Forthcoming. Mitchell, T. M.; Keller, R.; and Kedar-Cabelli, S. 1986. Explanation-Based Generalization: A Unifying View. Machine Learning 1:47-80. Proceedings of the Fourth Message Understanding Conference (MUC-4). 1992. Morgan Kaufmann, San Mateo, CA. Riloff, E. and Lehnert, W. 1993. Automated Dictionary Construction for Information Extraction from Text.
In Proceedings of the Ninth IEEE Conference on Artificial Intelligence for Applications. IEEE Computer Society Press. 93-99. Utgoff, P. 1988. ID5: An Incremental ID3. In Proceedings of the Fifth International Conference on Machine Learning. 107-120. 816 Riloff | 1993 | 121 |
1,323 | Learning Semantic Grammars with Constructive Inductive Logic Programming John M. Zelle and Raymond J. Mooney Department of Computer Sciences University of Texas Austin, TX 78712 zelle@cs.utexas.edu, mooney@cs.utexas.edu Abstract Automating the construction of semantic grammars is a difficult and interesting problem for machine learning. This paper shows how the semantic-grammar acquisition problem can be viewed as the learning of search-control heuristics in a logic program. Appropriate control rules are learned using a new first-order induction algorithm that automatically invents useful syntactic and semantic categories. Empirical results show that the learned parsers generalize well to novel sentences and out-perform previous approaches based on connectionist techniques. Introduction Designing computer systems to "understand" natural language input is a difficult task. The laboriously hand-crafted computational grammars supporting natural language applications are often inefficient, incomplete and ambiguous. The difficulty in constructing adequate grammars is an example of the "knowledge acquisition bottleneck" which has motivated much research in machine learning. While numerous researchers have studied computer acquisition of natural languages, most of this research has concentrated on learning the syntax of a language from example sentences [Wirth, 1989; Berwick, 1985; Wolff, 1982]. In practice, natural language systems are typically more concerned with extracting the meaning of sentences, usually expressed in some sort of case-role structure. Semantic grammars, which uniformly incorporate both syntactic and semantic constraints to parse sentences and produce semantic analyses, have proven extremely useful in constructing natural language interfaces for limited domains [Allen, 1987].
Unfortunately, new grammars must be written for each semantic domain, and the size of the grammar required for more general applications can make manual construction infeasible. *This research was supported by the National Science Foundation under grant IRI-9102926 and the Texas Advanced Research Program under grant 003658114. An interesting question for machine learning is whether such grammars can be automatically constructed from an analysis of examples in a given domain. The semantic grammar acquisition problem presents a number of difficult issues. First, there is little agreement on what constitutes an "adequate" set of cases for sentence analysis; different tasks may require differing semantic representations. Therefore, the learning architecture must be general enough to allow mapping to (more or less) arbitrary meaning representations. Second, domain specific semantic constraints must be automatically recognized and incorporated into the grammar. This necessitates some form of constructive induction to identify useful semantic word and phrase classes. Finally, as in any learning system, it is crucial that the resulting grammar generalize well to unseen inputs. Given the generativity of natural languages, it is unreasonable to assume that the system will be trained on more than a small fraction of possible inputs. In this paper we show how the problem of semantic grammar acquisition can be considered as learning control rules for a logic program. In this framework, the acquisition problem can be attacked using the techniques of inductive logic programming. We introduce a new induction algorithm that incorporates constructive induction to learn word classes and semantic relations necessary to support the parsing process. Empirical results show this to be a promising approach to the language acquisition problem.
Learning Case-Role Mapping The Mapping Problem Traditional case theory decomposes a sentence into a proposition represented by the main verb and various arguments such as agent, patient, and instrument, represented by noun phrases. The basic mapping problem is to decide which sentence constituents fill which roles. Though case analysis is only a part of the overall task of sentence interpretation, the problem is nontrivial even in simple sentences. From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved. Consider these sentences from [McClelland and Kawamoto, 1986]:
1. The boy hit the window.
2. The hammer hit the window.
3. The hammer moved.
4. The boy ate the pasta with the cheese.
5. The boy ate the pasta with the fork.
In the first sentence, the subject, boy, is an agent. In the second, the subject, hammer, is an instrument. The role played by the subject must be determined on the grounds that boys are animate and hammers are not. In the third sentence, the subject, hammer, is interpreted as a patient, illustrating the importance of the relationship between the surface subject and the verb. In the last two sentences, the prepositional phrase could be attached to the verb (making fork an instrument of ate) or the object (cheese is an accompaniment of pasta). Domain specific semantic knowledge is required to make the correct assignment. Previous Approaches Recent research in learning the case-role assignment task has taken place under the connectionist paradigm [Miikkulainen and Dyer, 1991; McClelland and Kawamoto, 1986]. They argue that proper case-role assignment is a difficult task requiring many independent sources of knowledge, both syntactic and semantic, and therefore well-suited to connectionist techniques. Connectionist models, however, face a number of difficulties in handling natural language.
Since the output structures are "flat" (nonrecursive) it is unclear how the embedded propositions in more sophisticated analyses can be handled. The models are also limited to producing a single output structure for a given input. If an input sentence is truly ambiguous, the system produces a single output that appears as a weighted average of the possible analyses, rather than enumerating the consistent interpretations. We believe that symbolic techniques are more appropriate, and our approach does not suffer from these deficiencies. In addition, empirical results demonstrate that our system trains faster and generalizes to novel inputs better than its neural counterparts. Shift-Reduce Case-Role Parsing Variations of shift-reduce parsing have proven practical for many symbolic natural language applications [Tomita, 1986]. Our system adopts a simple shift-reduce framework for case-role mapping [Simmons and Yu, 1992]. The process is best illustrated by way of example. Consider the sentence: "The man ate the pasta." Parsing begins with an empty stack and an input buffer containing the entire sentence. At each step of the parse, either a word is shifted from the front of the input buffer onto the stack, or the top two elements on the stack are popped and combined to form a new element which is pushed back onto the stack. The sequence of actions and stack states for our simple example is shown in Figure 1.

Action    Stack Contents
          []
(shift)   [the]
(shift)   [man, the]
(1 det)   [[man, det:the]]
(shift)   [ate, [man, det:the]]
(1 agt)   [[ate, agt:[man, det:the]]]
(shift)   [the, [ate, agt:[man, det:the]]]
(shift)   [pasta, the, [ate, agt:[man, det:the]]]
(1 det)   [[pasta, det:the], [ate, agt:[man, det:the]]]
(2 pat)   [[ate, pat:[pasta, det:the], agt:[man, det:the]]]

Figure 1: Parsing "The man ate the pasta." The action notation (x label) indicates that the stack items are combined via the role, label, with the item from stack position, x, being the head.
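The stack transitions of Figure 1 can be traced with a small interpreter. The following is a hypothetical Python re-implementation for illustration only (the paper's actual parser is a Prolog program; the function and variable names here are our own):

```python
# A stack item is either a bare word or a pair (head, {role: filler}).

def reduce_op(stack, head_pos, role):
    """Pop the top two stack items and combine them: the item at stack
    position head_pos (1 = top, 2 = second) becomes the head, and the
    other item is attached to it under `role`."""
    top, second = stack.pop(), stack.pop()
    head, arg = (top, second) if head_pos == 1 else (second, top)
    if isinstance(head, str):          # promote a bare word to a structure
        head = (head, {})
    head[1][role] = arg
    stack.append(head)

def parse(words, actions):
    """Run a fixed sequence of shift/reduce actions over the input."""
    stack, buffer = [], list(words)
    for act in actions:
        if act == "shift":
            stack.append(buffer.pop(0))
        else:                          # act is (head_pos, role)
            reduce_op(stack, *act)
    return stack[0]

# The action sequence from Figure 1 for "The man ate the pasta."
actions = ["shift", "shift", (1, "det"), "shift", (1, "agt"),
           "shift", "shift", (1, "det"), (2, "pat")]
result = parse(["the", "man", "ate", "the", "pasta"], actions)
# result: ("ate", {"agt": ("man", {"det": "the"}), "pat": ("pasta", {"det": "the"})})
```

The final stack element mirrors the case structure in the last row of Figure 1.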
An advantage of assuming such a constrained parsing mechanism is that the form of structure building actions is limited. The operations required to construct a given case representation are directly inferable from the representation. In general, a structure building action is required for each unique case-role that appears in the analysis. The set of actions required to produce a set of analyses is the union of the actions required for each individual analysis. Overview of CHILL Our system, CHILL (Constructive Heuristics Induction for Language Learning), is a general approach to semantic grammar acquisition. The input to the system is a set of training instances consisting of sentences paired with the desired case representations. The output is a shift-reduce parser (in Prolog) which maps sentences into case representations. The parser may produce multiple analyses (on backtracking) for a single input sentence, allowing for true ambiguity in the training set. The CHILL algorithm consists of two distinct tasks. First, the training instances are used to formulate an overly-general parser which is capable of producing case structures from sentences. The initial parser is overly-general in that it produces many spurious analyses for any given input sentence. The parser is then specialized by introducing search control heuristics. These control heuristics limit the contexts in which certain program clauses are used, eliminating the spurious analyses. The following section details these two processes. The CHILL Algorithm Constructing an Overly-General Parser A shift-reduce parser is easily represented as a logic program. The state of the parse is reflected by the contents of the stack and input buffer. Each distinct parsing action becomes an operator clause that takes the current stack and input and produces new ones.
The overly-general parser is built by translating each action inferable from the training problems into a clause which implements the action. For example, the clause implementing the (1 agt) action is:

    op([S1,S2|SRest], Inp, [SNew|SRest], Inp) :-
        combine(S1, agt, S2, SNew).

Building a program to parse a set of training examples is accomplished by adding clauses to the op predicate. Each clause is a direct translation of a required parsing action. As mentioned above, the identification of the necessary actions is straight-forward. A particularly simple approach is to include two actions (e.g., (1 agt) and (2 agt)) for each role used in the training examples; any unnecessary operator clauses will be removed from the program during the subsequent specialization process. Parser Specialization General Framework The overly-general parser produces a great many spurious analyses for the training sentences because there are no conditions specifying when it is appropriate to use the various operators. The program must be specialized by including control heuristics which guide the application of operator clauses. This section outlines the basic approach used in CHILL. More detail on incorporating clause selection information in Prolog programs can be found in [Zelle and Mooney, 1993]. Program specialization occurs in three phases. First, the training examples are analyzed to construct positive and negative control examples for each operator clause. Examples of correct operator applications are generated by finding the first correct parsing of each training pair with the overly-general parser; any subgoal to which an operator is applied in a successful parse becomes a positive control example for that operator. A positive control example for any operator is considered a negative example for all operators that do not have it as a positive example.
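This partitioning of parse states into control examples can be sketched concretely. The following is an illustrative Python fragment, not part of CHILL itself; the operator and state names are hypothetical, and the real system works with Prolog subgoals rather than opaque labels:

```python
# A parse state where an operator fired in a successful parse is a positive
# control example for that operator, and a negative control example for
# every operator that does not also list it as a positive.

def control_examples(positive_uses):
    """positive_uses maps operator name -> set of parse states where the
    operator applied in a correct parse. Returns per-operator negatives."""
    all_states = set().union(*positive_uses.values())
    return {op: all_states - pos for op, pos in positive_uses.items()}

negatives = control_examples({
    "op_1_agt": {"state_a"},
    "op_1_det": {"state_b", "state_c"},
})
# "state_a" becomes a negative example for op_1_det, and vice versa.
```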
In the second phase, a general first-order induction algorithm is employed to learn a control rule for each operator. This control rule comprises a Horn-clause definition that covers the positive control examples for the operator but not the negative. The induction algorithm used by CHILL is discussed in the following subsection. The final step is to "fold" the control information back into the overly-general parser. A control rule is easily incorporated into the overly-general program by unifying the head of an operator clause with the head of the control rule for the clause and adding the induced conditions to the clause body. The definitions of any invented predicates are simply appended to the program. As an example, the (1 agt) clause of op is typically modified to:

    op([A,[B,det:the]|SRest], Inp, [SNew|SRest], Inp) :-
        animate(B),
        combine(A, agt, [B,det:the], SNew).

    animate(boy).
    animate(girl).
    ...

Here, a new predicate has been invented representing the concept "animate." This rule may be roughly interpreted as stating: "If the stack contains two items, the second of which is a completed noun phrase whose head is animate, then attach this phrase as the agent of the top of stack." Inducing Control Rules The induction task is to generate a Horn-clause definition which covers the positive control examples for an operator, but does not cover the negative. There is a growing body of research in inductive logic programming which addresses this problem. Our algorithm implements a novel combination of bottom-up techniques found in systems such as CIGOL [Muggleton and Buntine, 1988] and GOLEM [Muggleton and Feng, 1992] and top-down methods from systems like FOIL [Quinlan, 1990] and CHAMP [Kijsirikul et al., 1992].

Let Pos := Positive Examples
Let Neg := Negative Examples
Let Def := Positive examples as unit clauses.
Repeat
  Let OldDef := Def
  Let S be a sampling of pairs of clauses in OldDef
  Let OldSize := TheorySize(OldDef)
  Let CurrSize := OldSize
  For each pair of clauses <C1, C2> in S
    Find-Generalization(C1, C2, Pos, Neg, NewClause, NewPreds)
    Reduce-Definition(Pos, OldDef, NewClause, NewPreds, NewDef)
    If TheorySize(NewDef) < CurrSize then
      CurrSize := TheorySize(NewDef)
      Def := NewDef
Until CurrSize = OldSize  % No compaction achieved
Return Def

Figure 2: CHILL Induction Algorithm

Space does not permit a complete explanation of the induction mechanism, but the general idea is simple. The intuition is that we want to find a small (hence general) definition which discriminates between the positive and negative examples. We start with a most specific definition (the set of positive examples) and introduce generalizations which make the definition more compact (as measured by a CIGOL-like size metric). The search for more general definitions is carried out in a hill-climbing fashion. At each step, a number of possible generalizations are considered; the one producing the greatest compaction of the theory is implemented, and the process repeats. The basic algorithm is outlined in Figure 2. The heart of the algorithm is the Find-Generalization procedure. It takes two clauses in the current definition and constructs a new clause that (empirically) subsumes them and does not cover any negative examples. (Invented predicates actually have system-generated names; they are renamed here for clarity.) Reduce-Definition proves the positive examples using the current definition augmented with the new generalized clause. Preferential treatment is given to the new clause (it is placed at the top of the Prolog definition) and any clauses which are no longer used in proving the positive examples are deleted to produce the reduced definition. Find-Generalization employs three levels of effort to produce a generalization.
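The hill-climbing compaction loop of Figure 2 can be sketched in miniature. The following Python rendering is a loose, hypothetical analogue for illustration only: clauses are modeled as frozensets of literal labels, generalization as set intersection (a crude stand-in for the LGG-based Find-Generalization), and Reduce-Definition as dropping clauses the new clause subsumes; the real system measures a CIGOL-like size metric over Prolog clauses and checks coverage of negative examples, which is omitted here:

```python
from itertools import combinations

def theory_size(definition):
    # Stand-in for TheorySize: total number of literals in the theory.
    return sum(len(clause) for clause in definition)

def find_generalization(c1, c2):
    g = c1 & c2                        # crude stand-in for LGG
    return g or None                   # reject the empty generalization

def reduce_definition(definition, new_clause):
    # Keep the new clause; drop clauses it subsumes (here: its supersets).
    kept = [c for c in definition if not (new_clause <= c)]
    return [new_clause] + kept

def compact(definition):
    while True:
        old_size = theory_size(definition)
        best = definition
        for c1, c2 in combinations(definition, 2):
            g = find_generalization(c1, c2)
            if g is None:
                continue
            candidate = reduce_definition(definition, g)
            if theory_size(candidate) < theory_size(best):
                best = candidate       # greatest compaction so far
        if theory_size(best) == old_size:
            return definition          # no compaction achieved
        definition = best

defs = [frozenset({"p", "q", "r"}), frozenset({"p", "q", "s"}), frozenset({"t"})]
result = compact(defs)
# The two overlapping clauses collapse into their common generalization {p, q}.
```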
The first is construction of the least general generalization (LGG) [Plotkin, 1970] of the input clauses. If the LGG covers no negative examples, further refinement is unnecessary. Otherwise, the clause is too general, and an attempt is made to refine it using a FOIL-like mechanism which adds literals derivable either from background or previously invented predicates. If the resulting clause is still too general, it is passed to Invent-Predicate which invents a new predicate to discriminate the positive examples from the negatives which are still covered. Predicate invention is carried out in a manner analogous to CHAMP. The first step is to find a minimal-arity projection of the clause variables such that the set of ground tuples generated by the projection when using the clause to prove the positive examples is disjoint with the ground tuples generated in proving the negative examples. These ground tuples form positive and negative example sets for the new predicate, and the top-level induction algorithm is recursively invoked to create a definition of the predicate. Experimental Results The crucial test of any learning system is how well the learned concept generalizes to new input. CHILL has been tested on a number of case-role assignment tasks. Comparison with Connectionism In the first experiment, CHILL was tried on the baseline task reported in [Miikkulainen and Dyer, 1991] using 1475 sentence/case-structure examples from [McClelland and Kawamoto, 1986] (hereafter referred to as the M & K corpus). The corpus was produced from a set of 19 sentence templates generating sentence/case-structure pairs for sentences like those illustrated above. The sample actually comprises 1390 unique sentences, some of which allow multiple analyses. Since our parser is capable (through backtracking) of generating all legal parses for an input, training was done considering each unique sentence as a single example.
If a particular sentence was chosen for inclusion in a training or testing set, the pairs representing all correct analyses of the sentence were included in that set. Training and testing followed the standard paradigm of first choosing a random set of test examples (in this case 740) and then creating parsers using increasingly larger subsets of the remaining examples. All reported results reflect averages over five trials. During testing, the parser was used to enumerate all analyses for a given test sentence. Parsing of a sentence can fail in two ways: an incorrect analysis may be generated, or a correct analysis may not be generated. In order to account for both types of inaccuracy, a metric was introduced to calculate the "average correctness" for a given test sentence as follows: Accuracy = (C/P + C/A)/2, where P is the number of distinct analyses produced, C is the number of the produced analyses which were correct, and A is the number of correct analyses possible for the sentence. CHILL performs very well on this learning task as demonstrated by the learning curve shown in Figure 3. The system achieves 92% accuracy on novel sentences after seeing only 150 training sentences. Training on 650 sentences produces 98% accuracy.

Figure 3: M & K corpus Accuracy

Direct comparison with previous results is difficult, as connectionist learning curves tend to be expressed in terms of low level measures such as "number of correct output bits." The closest comparison can be made with the results in [Miikkulainen and Dyer, 1991] where an accuracy of 95% was achieved at the "word level" training with 1439 of the 1475 pairs from the M & K corpus. Since the output contains five slots, assuming independence of errors gives an estimate of 0.95^5, or 78%, completely correct parses. Interestingly, the types of inaccuracies differ substantially between systems.
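The average-correctness metric is a direct computation; a small Python transcription (ours, for illustration) makes the two penalty terms explicit:

```python
def accuracy(produced, correct, possible):
    """Average correctness (C/P + C/A)/2: C/P penalizes spurious analyses
    among the P produced, C/A penalizes missed analyses among the A possible."""
    return (correct / produced + correct / possible) / 2

# A sentence with two possible analyses: the parser emits two, one correct.
score = accuracy(produced=2, correct=1, possible=2)   # 0.5
```

A parser that produces exactly the correct analyses scores 1.0; producing extra wrong parses or omitting correct ones lowers the score symmetrically.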
Neural networks always produce an output, many of which contain minor errors, whereas CHILL tends to produce a correct output or none at all. From an engineering standpoint, it seems advantageous to have a system which "knows" when it fails; connectionists might be more interested in failing "reasonably." With respect to training time, the induction algorithm employed by CHILL is a prototype implemented in Prolog. Running on a SparcStation 2, the creation of the parsers for the examples in this paper required from a few minutes to half an hour of CPU time. This compares favorably with backpropagation training times usually measured in hours or days. It is also noteworthy that CHILL consistently invented interpretable word classes. One example, the invention of animate, has already been presented. This concept is implicit in the analyses presented to the system, since only animate objects are assigned to the agent role. Other invented classes clearly picked up on the distribution of words in the input sentences. The system regularly invented semantic classes such as human, food, and possession which were used for noun generation in the M & K corpus. Phrase classes useful to making parsing distinctions were also invented. For example, the structure instrumental-phrase was invented as:

    instr_phrase([]).
    instr_phrase([with, the, X]) :- instrument(X).

    instrument(fork).
    instrument(bat).
    ...

It was not necessary in parsing the M & K corpus to distinguish between instruments of different verbs, hence instruments of various verbs such as hitting and eating are grouped together. Where the semantic relationship between words is required to make parsing distinctions, such relationships can be learned. CHILL created one such relation: can_possess(X,Y) :- human(X), possession(Y), which reflects the distributional relationship between humans and possessions present in the M & K corpus.
Notice that this invented rule itself contains two invented word categories. Although there is no a priori reason to suppose CHILL must invent interpretable categories, the naturalness of the invented concepts supports the empirical results indicating that CHILL is making the "right" generalizations. A More Realistic Domain The M & K corpus was designed specifically to illustrate the case mapping problem. As such, it does not necessarily reflect the true difficulty of semantic grammar acquisition for natural language applications. Another experiment was designed to test CHILL on a more realistic task. A portion of a semantic grammar was "lifted" from an extant prototype natural language database designed to support queries concerning tourist information [Ng, 1988]. The portion of the grammar used recognized over 150,000 distinct sentences. A simple case grammar, which produced labellings deemed useful for the database query task, was devised to generate a sample of sentence/case-structure analyses. The example pair shown in Figure 4 illustrates the type of sentences and analyses used.

Show me the two star hotels in downtown LA with double rates below 65 dollars.

[show, theme:[hotels, det:the, type:[star, mod:two],
              loc:[la, casemark:in, mod:downtown],
              attr:[rates, casemark:with, mod:double,
                    less:[nbr(65), casemark:below, unit:dollars]]],
       dative:me]

Figure 4: Example from Tourist Domain

An average learning curve for this domain is shown in Figure 5. The curve shows generalization results on 500 sentences which differed from any used in training. The results are very encouraging. With only 50 training examples, the resulting parser achieved 93% accuracy on novel sentences. With 300 training examples, accuracy is 99%.
Figure 5: Tourist Domain Accuracy

Related Work As noted in the introduction, most AI research in language acquisition has not focused on the case-role mapping problem. However, a number of language acquisition systems may be viewed as the learning of search control heuristics. Langley and Anderson [Langley, 1982; Anderson, 1983] have independently posited acquisition mechanisms based on learning search control in production systems. These systems were cognitively motivated and addressed the task of language generation rather than the case-role analysis task examined here. Berwick's LPARSIFAL [Berwick, 1985] acquired syntactic parsing rules for a type of shift-reduce parser. His system was linguistically motivated and incorporated many constraints specific to the theory of language assumed. In contrast, CHILL uses induction techniques to avoid commitment to any specific model of grammar. More recently an exemplar-based acquisition system for the style of case grammar used in CHILL is described in [Simmons and Yu, 1992]. Unlike CHILL, their system depends on an analyst to provide appropriate word classifications and requires detailed interaction to guide the parsing of training examples. Recent corpus-based natural-language research has addressed some issues in automated dictionary construction [Lehnert et al., 1992; Brent and Berwick, 1991]. These systems use manually constructed parsers to "bootstrap" new patterns from analyzable text. They do not employ machine learning techniques to generalize the acquired templates or construct new features which support parsing decisions. In contrast, CHILL is a first attempt at applying modern machine learning methods to the more fundamental task of constructing efficient parsers. Future Research The generalization results in the experiments so far undertaken are quite encouraging; however, further testing on larger, more realistic corpora is required to determine the practicality of these techniques. Another avenue of research is "deepening" the analyses produced by the system. Applying our techniques to actually construct natural language systems will require either modifying the parser to produce final representations (e.g., database queries) or adding additional learning components which map the intermediate case structures into final representations. Conclusion Methods for learning semantic grammars hold the potential for substantially automating the development of natural language interfaces. We have presented a system that employs inductive logic programming techniques to learn a shift-reduce parser that integrates syntactic and semantic constraints to produce case-role representations. The system first produces an overly-general parser which it then constrains by inductively learning search-control heuristics that eliminate spurious parses. When learning heuristics, constructive induction is used to automatically generate useful semantic and syntactic classes of words and phrases. Experiments on two reasonably large corpora of sentence/case-role pairs demonstrate that the system learns accurate parsers that generalize well to novel sentences. These experiments also demonstrate that the system trains faster and produces more accurate results than previous, connectionist approaches and creates interesting and recognizable syntactic and semantic concepts. References Allen, James F. 1987. Natural Language Understanding. Benjamin/Cummings, Menlo Park, CA. Anderson, John R. 1983. The Architecture of Cognition. Harvard University Press, Cambridge, MA. Berwick, R. C. 1985. The Acquisition of Syntactic Knowledge. MIT Press, Cambridge, MA. Brent, Michael R. and Berwick, Robert C. 1991.
Automatic acquisition of subcategorization frames from tagged text. In Speech and Natural Language: Proceedings of the DARPA Workshop. Morgan Kaufmann. 342-345. Kijsirikul, B.; Numao, M.; and Shimura, M. 1992. Discrimination-based constructive induction of logic programs. In Proceedings of the Tenth National Conference on Artificial Intelligence, San Jose, CA. 44-49. Langley, P. 1982. Language acquisition through error recovery. Cognition and Brain Theory 5. Lehnert, W.; Cardie, C.; Fisher, D.; McCarthy, J.; Riloff, E.; and Soderland, S. 1992. University of Massachusetts: MUC-4 test results and analysis. In Proceedings of the Fourth DARPA Message Understanding Evaluation and Conference. Morgan Kaufmann. 151-158. McClelland, J. L. and Kawamoto, A. H. 1986. Mechanisms of sentence processing: Assigning roles to constituents of sentences. In Rumelhart, D. E. and McClelland, J. L., editors 1986, Parallel Distributed Processing, Vol. II. MIT Press, Cambridge, MA. 318-362. Miikkulainen, R. and Dyer, M. G. 1991. Natural language processing with modular PDP networks and distributed lexicon. Cognitive Science 15:343-399. Muggleton, S. and Buntine, W. 1988. Machine invention of first-order predicates by inverting resolution. In Proceedings of the Fifth International Conference on Machine Learning, Ann Arbor, MI. 339-352. Muggleton, S. and Feng, C. 1992. Efficient induction of logic programs. In Muggleton, S., editor 1992, Inductive Logic Programming. Academic Press, New York. 281-297. Ng, H. T. 1988. A computerized prototype natural language tour guide. Technical Report AI88-75, Artificial Intelligence Laboratory, University of Texas, Austin, TX. Plotkin, G. D. 1970. A note on inductive generalization. In Meltzer, B. and Michie, D., editors 1970, Machine Intelligence (Vol. 5). Elsevier North-Holland, New York. Quinlan, J. R. 1990. Learning logical definitions from relations. Machine Learning 5(3):239-266. Simmons, R. F. and Yu, Y. 1992.
The acquisition and use of context dependent grammars for English. Computational Linguistics 18(4):391-418. Tomita, M. 1986. Efficient Parsing for Natural Language. Kluwer Academic Publishers, Boston. Wirth, Ruediger 1989. Completing logic programs by inverse resolution. In Proceedings of the European Working Session on Learning, Montpellier, France. Pitman. 239-250. Wolff, J. G. 1982. Language acquisition, data compression, and generalization. Language and Communication 2:57-89. Zelle, J. M. and Mooney, R. J. 1993. Combining FOIL and EBG to speed-up logic programs. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, Chambery, France. 822 Zelle | 1993 | 122 |
1,324 | Range Estimation From Focus Using a Non-frontal Imaging Camera* Arun Krishnan and Narendra Ahuja Beckman Institute University of Illinois 405 North Mathews Ave., Urbana, IL 61801, U.S.A. e-mail: arunk@vision.csl.uiuc.edu, ahuja@vision.csl.uiuc.edu Abstract This paper is concerned with active sensing of range information from focus. It describes a new type of camera whose image plane is not perpendicular to the optical axis as is standard. This special imaging geometry eliminates the usual focusing need of image plane movement. Camera movement, which is anyway necessary to process large visual fields, integrates panning, focusing, and range estimation. Thus the two standard mechanical actions of focusing and panning are replaced by panning alone. Range estimation is done at the speed of panning. An implementation of the proposed camera design is described and experiments with range estimation are reported. INTRODUCTION This paper is concerned with active sensing of range information from focus. It describes a new type of camera which integrates the processes of image acquisition and range estimation. The camera can be viewed as a computational sensor which can perform high speed range estimation over large scenes. Typically, the field of view of a camera is much smaller than the entire visual field of interest. Consequently, the camera must pan to sequentially acquire images of the visual field, a part at a time, and for each part compute range estimates by acquiring and searching images over many image plane locations. Using the proposed approach, range can be computed at the speed of panning the camera. At the heart of the proposed design is active control of imaging geometry to eliminate the standard mechanical adjustment of image plane location, and further, integration of the only remaining mechanical action of camera panning with focusing and range estimation. *The support of the National Science Foundation and Defence Advanced Research Projects Agency under grant IRI-89-02728 and U.S. Army Advance Construction Technology Center under grant DAAL 03-87-K-0006 is gratefully acknowledged. Thus, imaging geometry and optics are exploited to replace explicit sequential computation. Since the camera implements a range from focus approach, the resulting estimates have the following characteristics, as is true for any such approach [Das and Ahuja, 1990; Das and Ahuja, 1992b]. The scene surfaces of interest must have texture so image sharpness can be measured. The confidence of the estimates improves with the amount of surface texture present. Further, the reliability of estimates is inherently a function of the range to be estimated. However, range estimation for wide scenes using the proposed approach is faster than any traditional range from focus approach, thus eliminating one of the major drawbacks. The next section describes in detail the pertinence of range estimation from focus, and some problems that characterize previous range from focus approaches and serve as the motivation for the work reported in this paper. The following section presents the new, proposed imaging geometry whose centerpiece is a tilting of the image plane from the standard frontoparallel orientation. It shows how the design achieves the results of search over focus with high computational efficiency. The rest of the paper presents a range from focus algorithm that uses the proposed camera, followed by the results of an experiment demonstrating the feasibility of our method. The last section presents some concluding remarks.
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved. 830 Krishnan

BACKGROUND & MOTIVATION

Range Estimation From Focus and Its Utility

Focus based methods usually obtain a depth estimate of a scene point by mechanically relocating the image plane, thereby varying the focus distance (v). When the scene point appears in sharp focus, the corresponding u and v values satisfy the standard lens law: 1/u + 1/v = 1/f. The depth value u for the scene point can then be calculated by knowing the values of the focal length and the focus distance [Pentland, 1987; Darrell and Wohn, 1988; Ens and Lawrence, 1991]. To determine when a scene is imaged in sharp focus, several autofocus methods have been proposed in the past [Horn, 1968; Sperling, 1970; Tenenbaum, 1971; Jarvis, 1976; Ligthart and Groen, 1982; Schlag et al., 1985; Krotkov et al., 1986; Krotkov, 1986; Darrell and Wohn, 1988; Nayar and Nakagawa, 1990].

Like any other visual cue, range estimation from focus is reliable under some conditions and not so in some other conditions. Therefore, to use the cue appropriately, its shortcomings and strengths must be recognized and the estimation process should be suitably integrated with other processes using different cues, so as to achieve superior estimates under broader conditions of interest [Abbott and Ahuja, 1990; Krotkov, 1989; Das and Ahuja, 1992a]. When accurate depth information is not needed, e.g., for obstacle avoidance during navigation, range estimates from focus or some other cue alone may suffice, even though they may be less accurate than those obtained by an integrated analysis of multiple cues.

Motivation for Proposed Approach

The usual range from focus algorithms involve two mechanical actions, those of panning and, for each chosen pan angle, finding the best v value. These steps make the algorithms slow. The purpose of the first step is to acquire data
over the entire visual field, since cameras typically have a narrower field of view. This step is therefore essential to construct a range map of the entire scene. The proposed approach is motivated primarily by the desire to eliminate the second step involving mechanical control.

Consider the set of scene points that will be imaged with sharp focus for some constant value of focal length and focus distance. Let us call this set of points the SF surface¹. For the conventional case where the image is formed on a plane perpendicular to the optical axis, and assuming that the lens has no optical aberrations, this SF surface will be a surface that is approximately planar and normal to the optical axis. The size of the SF surface will be a scaled version of the size of the image plane, while its shape will be the same as that of the image plane. Figure 1(a) shows the SF surface for a rectangular image plane.

As the image plane distance from the lens, v, is changed, the SF surface moves away from, or toward, the camera. As the entire range of v values is traversed, the SF surface sweeps out a cone shaped volume in three-dimensional space, henceforth called the SF cone. The vertex angle of the cone represents the magnification or scaling achieved and is proportional to the f value. Figure 1(b) shows a frustum of the cone. Only those points of the scene within the SF cone are ever imaged sharply. To increase the size of the imaged scene, the f value used must be increased. Since in practice there is a limit on the usable range of f values, it is not possible to image the entire scene in one viewing. The camera must be panned to repeatedly image different parts of the scene.

¹Actually, the depth-of-field effect will cause the SF surface to be a 3-D volume. We ignore this for the moment, as the arguments being made hold irrespective of whether we have an SF surface or an SF volume.
If the solid angle of the cone is ω, then to image an entire hemisphere one must clearly use at least 2π/ω viewing directions. This is a crude lower bound, since it does not take into account the constraints imposed by the packing and tessellability of the hemisphere surface by the shape of the camera visual field. If specialized hardware which can quickly identify focused regions in the image is used, then the time required to obtain the depth estimates is bounded by that required to make all pan angle changes and to process the data acquired for each pan angle.

The goal of the approach proposed in this paper is to select the appropriate v value for each scene point without conducting a dedicated mechanical search over all v values. The next section describes how this is accomplished by slightly changing the camera geometry and exploiting this in conjunction with the pan motion to accomplish the same result as traditionally provided by the two mechanical motions.

A NON-FRONTAL IMAGING CAMERA

The following observations underlie the proposed approach. In a normal camera, all points on the image plane lie at a fixed distance (v) from the lens. So all scene points are always imaged with a fixed value of v, regardless of where on the image plane they are imaged, i.e., regardless of the camera pan angle. If we instead have an image surface such that the different image surface points are at different distances from the lens, then depending upon where on the imaging surface the image of a scene point is formed (i.e., depending on the pan angle), the imaging parameter v will assume different values. This means that by controlling only the pan angle, we could achieve both goals of the traditional mechanical movements, namely, that of changing v values as well as that of scanning the visual field, in an integrated way. Vision Processing 831

[Figure 1 annotations: optical axis; imaging plane of the camera, typically a CCD array or photographic film.]
[Figure 1 annotations, continued: Sharp Focus surface — objects in this plane will be sharply focused for one set of camera parameters. (a) SF surface (b) SF cone]

Figure 1: (a) Sharp Focus object surface for the standard planar imaging surface orthogonal to the optical axis. Object points that lie on the SF surface are imaged with the least blur. The location of the SF surface is a function of the camera parameters. (b) A frustum of the cone swept by the SF surface as the value of v is changed. Only those points that lie inside the SF cone can be imaged sharply, and therefore, range-from-focus algorithms can only calculate the range of these points.

In the rest of this paper, we will consider the simplest case of a nonstandard image surface, namely a plane which has been tilted relative to the standard frontoparallel orientation. Consider the tilted image plane geometry shown in Figure 2(a). For different angles θ, the distance from the lens center to the image plane is different. Consider a point object at an angle θ. The following relation follows from the geometry:

|OC| = d cos α / cos(θ − α)

Since for a tilted image plane v varies with position, it follows from the lens law that the corresponding SF surface is a surface whose u value also varies with position. The volume swept by the SF surface as the camera is rotated is shown in Figure 2(b). If the camera turns about the lens center O by an angle φ, then the object will now appear at an angle θ + φ. The new image distance (for the point object) will now be given by the following equation:

|OC′| = d cos α / cos(θ + φ − α)

As the angle φ changes, the image distance also changes. At some particular angle, the image will appear perfectly focused and, as the angle keeps changing, the image will again go out of focus. By identifying the angle φ at which any surface appears in sharp focus, we can calculate the image distance, and then from the lens law, the object surface distance.
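The two geometric relations above combine directly with the lens law. The following is an illustrative sketch, not the authors' implementation; the function names are ours, and d and α are assumed to be the lens-to-plane distance parameter and tilt angle from Figure 2(a):

```python
import math

def image_distance(d, alpha, theta, phi=0.0):
    """|OC| = d*cos(alpha) / cos(theta + phi - alpha): distance from the
    lens center to the tilted image plane for a point seen at angle
    theta, after the camera has panned by phi (angles in radians)."""
    return d * math.cos(alpha) / math.cos(theta + phi - alpha)

def object_range(f, d, alpha, theta, phi_sharp):
    """Once the pan angle phi_sharp of sharpest focus is identified,
    the lens law 1/u + 1/v = 1/f yields the object distance u."""
    v = image_distance(d, alpha, theta, phi_sharp)
    return 1.0 / (1.0 / f - 1.0 / v)
```

As φ varies, v sweeps through a range of values, so each scene point passes through its plane of sharp focus at some pan angle; that is the mechanism that lets panning substitute for image plane movement.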
As the camera rotates about the lens center, new parts of the scene enter the image at the left edge² and some previously imaged parts are discarded at the right edge. The entire scene can be imaged and ranged by completely rotating the camera once.

RANGE ESTIMATION ALGORITHM

Let the image plane have N × N pixels and let the range map be a large array of size N × sN, where s ≥ 1 is a number that depends on how wide a scene is to be imaged. The jth image frame is represented by I_j and the cumulative, environment centered range map with origin at the camera center is represented by R. Every element in the range array is a structure that contains the focus criterion values for different image indices, i.e., for different pan angles.

²Or the right edge, depending upon the direction of the rotation.

[Figure 2 annotations: optical axis; SF surface. (a) Tilted Image Surface (b) SF cone]

Figure 2: (a) A point object, initially at an angle of θ, is imaged at point C. The focus distance OC varies as a function of θ. When the camera undergoes a pan motion, θ changes and so does the focus distance. The SF surface is not parallel to the lens and the optical axis is not perpendicular to the SF surface. (b) The 3D volume swept by the proposed SF surface as the non-frontal imaging camera is rotated. For the same rotation, a frontal imaging camera would sweep out an SF cone having a smaller depth.

[Figure 3 label: Range Map Array]

Figure 3: Panning camera, environment fixed range array, and the images obtained at successive pan angles. Each range array element is associated with multiple criterion function values which are computed from different overlapping views. The maximum of the values in any radial direction is the one finally selected for the corresponding range array element, to compute the depth value in that direction.
When the stored criterion value shows a maximum, the index corresponding to the maximum³ can be used to determine the range for that scene point. Let the camera start from one side of the scene and pan to the other side. Figure 3 illustrates the geometrical relationships between successive pan angles, pixels of the images obtained, and the range array elements. Let j = 0, φ = 0. Initialize all the arrays and then execute the following steps.

• Capture the jth image I_j.
• Pass the image through a focus criterion filter to yield an array C_j of criterion values.
• For the angle φ (which is the angle that the camera has turned from its starting position), calculate the offset into the range map required to align image I_j with the previous images. For example, pixel I_j[50][75] might correspond to the same object as pixels I_{j+1}[50][125] and I_{j+2}[50][175].
• Check to see if the criterion function for any scene point has crossed the maximum. If so, compute the range for that scene point using the pan angle (and hence v value) for the image with maximum criterion value.
• Rotate the camera by a small amount. Update φ and j.
• Repeat the above steps until the entire scene is imaged.

The paper [Krishnan and Ahuja, 1993] contains a pictorial representation of the above algorithm.

³Knowing the index value, we can find out the amount of camera rotation that was needed before the scene point was sharply focused. Using the row and column indices for the range point, and the image index, we can then find out the exact distance from the lens to the image plane (v). We can then use the lens law to calculate the range.

EXPERIMENTAL RESULTS

In the experiment we attempt to determine the range of scene points. The scene in experiment 1 consists of, from left to right, a planar surface (range = 73 in), part of the background curtain (range = 132 in), a planar surface (range = 54 in) and a planar surface (range = 35 in).
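The accumulate-and-peak loop listed in the algorithm above can be sketched as follows. This is a deliberately minimal, single-row version with our own function names; `capture`, `criterion`, and the column alignment are simplified stand-ins for the real camera interface, not the authors' code:

```python
def scan_and_range(capture, criterion, n_frames, col_shift, width):
    """Minimal sketch of the panning loop: for each pan step j,
    filter the frame with a focus criterion, align it with the
    environment-centered range array via the pan offset, and keep,
    per range-array element, the pan index of the highest criterion
    value.  capture(j) returns frame j as a list of `width` pixel
    columns; criterion(frame) returns one focus value per column."""
    best = {}                          # range column -> (value, pan index)
    for j in range(n_frames):
        values = criterion(capture(j))
        offset = j * col_shift         # aligns frame j with the range map
        for col in range(width):
            key = col + offset
            if key not in best or values[col] > best[key][0]:
                best[key] = (values[col], j)
    # The winning pan index fixes the image distance v for that scene
    # direction; the lens law then converts v to a range (footnote 3).
    return {key: j for key, (val, j) in best.items()}
```

In the real system the criterion filter would be something like the Tenengrad measure discussed in the Results section, applied to full 2-D frames rather than single rows.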
The camera is turned in small steps of 50 units (of the stepper motor), which corresponds to a shift of 15 pixels (in pixel columns) between images. A scene point will thus be present in a maximum of thirty four⁴ images. In each image, for the same scene point, the effective distance from the image plane to the lens is different. There is a 1-to-1 relationship between the image column number and the distance from lens to image, and therefore, by the lens law, a 1-to-1 relationship between the image column number and the range of the scene point. The column number at which a scene point is imaged with greatest sharpness is therefore also a measure of the range.

Results

Among the focus criterion functions that were tried, the Tenengrad function [Tenenbaum, 1971] seemed to have the best performance/speed characteristics. In addition to problems like depth of field, lack of detail, selection of window size, etc., that are present in most range-from-focus algorithms, the range map has two problems as described below.

• Consider a scene point, A, that is imaged on pixels I_1[230][470], I_2[230][455], I_3[230][440], ... Consider also a neighboring scene point B, that is imaged on pixels I_1[230][471], I_2[230][456], I_3[230][441], ... The focus criterion values for point A will peak at a column number that is 470 − n × 15 (where 0 ≤ n). If point B is also at the same range as A, then the focus criterion values for point B will peak at a column number that is 471 − n × 15, for the same n as that for point A. The peak column number for point A will therefore be 1 less than that of point B. If we have a patch of points that are all at the same distance from the camera, then the peak column numbers obtained will be numbers that change by 1 for neighboring points⁵. The resulting range map therefore shows a local ramping behavior.

⁴Roughly s
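Because the criterion is sampled only at discrete pan steps, the true maximum generally falls between frames. One standard sub-frame refinement (a sketch with our own function name, not the authors' code) fits a Gaussian to the three criterion values around the discrete peak; since a Gaussian is a parabola in log space, this reduces to a closed form:

```python
import math

def subframe_peak(c_prev, c_max, c_next):
    """Fit a Gaussian through the three positive focus-criterion
    values around the discrete peak and return the fractional offset
    (in frames) of the true maximum.  A Gaussian is a parabola in log
    space, so this is parabolic interpolation of the log values."""
    lp, lm, ln_ = math.log(c_prev), math.log(c_max), math.log(c_next)
    denom = lp - 2.0 * lm + ln_
    if denom == 0.0:
        return 0.0          # flat triple: no refinement possible
    return 0.5 * (lp - ln_) / denom
```

A symmetric triple gives offset 0; if the right neighbor is larger than the left, the refined peak shifts toward the later frame, and vice versa.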
• As we mentioned before, a scene point is imaged about 34 times, at different levels of sharpness (or blur). It is very likely that the least blurred image would have been obtained for some camera parameter that corresponds to a value between two input frames.

To reduce these problems, we fit a gaussian to the three focus criterion values around the peak to determine the location of the real maximum. For brevity, we have not included some sample images from the experiments. Figure 4 shows two views of the range disparity values for experiment 1. Parts of the scene where we cannot determine the range disparity values due to a lack of sufficient texture are shown blank. The paper [Krishnan and Ahuja, 1993] contains more experimental results.

SUMMARY AND CONCLUSIONS

In this paper we have shown that using a camera whose image plane is not perpendicular to the optical axis allows us to determine estimates of range values of object points. We showed that the SF surface, which appears in sharp focus when imaged by our non-frontal imaging camera, is approximately an inclined plane. When the camera's pan angle direction changes, by turning about the lens center, an SF volume is swept out by the SF surface. The points within this volume comprise those for which range can be estimated correctly. We have described an algorithm that determines the range of scene points that lie within the SF volume. We point out some of the shortcomings that are unique to our method. We have also described the results of an experiment that was conducted to prove the feasibility of our method.

References

Abbott, A. Lynn and Ahuja, Narendra 1990. Active surface reconstruction by integrating focus, vergence, stereo, and camera calibration. In Proc. Third Intl. Conf. Computer Vision. 489-492.
Darrell, T. and Wohn, K. 1988. Pyramid based depth from focus. In Proc. IEEE Conf. Computer Vision and Pattern Recognition. 504-509.
Das, Subhodev and Ahuja, Narendra 1990. Multiresolution image acquisition and surface reconstruction. In Proc. Third Intl. Conf. Computer Vision. 485-488.

⁵Neighbours along vertical columns will not have this problem.

[Figure 4 panels: Expt 1: Top View; Expt 1: Side View (axes: Row, Column)]

Figure 4: Range disparities for experiment 1. Parts of the scene for which range values could not be calculated are shown blank. The further away a surface is from the camera, the smaller is its height in the range disparity map.

Das, Subhodev and Ahuja, Narendra 1992a. Active surface estimation: Integrating coarse-to-fine image acquisition and estimation from multiple cues. Technical Report CV-92-5-2, Beckman Institute, University of Illinois.
Das, Subhodev and Ahuja, Narendra 1992b. Performance analysis of stereo, vergence, and focus as depth cues for active vision. Technical Report CV-92-6-1, Beckman Institute, University of Illinois.
Ens, John and Lawrence, Peter 1991. A matrix based method for determining depth from focus. In Proc. IEEE Conf. Computer Vision and Pattern Recognition. 600-606.
Horn, Berthold Klaus Paul 1968. Focusing. Technical Report 160, MIT Artificial Intelligence Lab, Cambridge, Mass.
Jarvis, R. A. 1976. Focus optimisation criteria for computer image processing. Microscope 24:163-180.
Krishnan, Arun and Ahuja, Narendra 1993. Range estimation from focus using a non-frontal imaging camera. In Proc. DARPA Image Understanding Workshop.
Krotkov, E. P.; Summers, J.; and Fuma, F. 1986. Computing range with an active camera system. In Eighth International Conference on Pattern Recognition. 1156-1158.
Krotkov, E. P. 1986. Focusing. Technical Report MS-CIS-86-22, GRASP Laboratory, University of Pennsylvania.
Krotkov, Eric P. 1989. Active Computer Vision by Cooperative Focus and Stereo. New York: Springer-Verlag.
Ligthart, G. and Groen, F. C. A. 1982.
A comparison of different autofocus algorithms. In Proc. Sixth Intl. Conf. Pattern Recognition. 597-600.
Nayar, Shree K. and Nakagawa, Yasuo 1990. Shape from focus: An effective approach for rough surfaces. In Proc. IEEE Intl. Conf. Robotics and Automation. 218-225.
Pentland, Alex Paul 1987. A new sense for depth of field. IEEE Trans. Pattern Anal. and Machine Intell. PAMI-9:523-531.
Schlag, J. F.; Sanderson, A. C.; Neuman, C. P.; and Wimberly, F. C. 1985. Implementation of automatic focusing algorithms for a computer vision system with camera control. Technical Report CMU-RI-TR-83-14, Carnegie-Mellon University.
Sperling, G. 1970. Binocular vision: A physical and a neural theory. Amer. J. Psychology 83:461-534.
Tenenbaum, Jay Martin 1971. Accommodation in Computer Vision. Ph.D. Dissertation, Stanford University, Palo Alto, Calif.
| 1993 | 123 |
1,325 | Learning Object Models from Appearance* Hiroshi Murase, NTT Basic Research Labs, 3-9-11 Midori-cho, Musashino-shi, Tokyo 180, Japan. murase@siva.ntt.jp

Abstract

We address the problem of automatically learning object models for recognition and pose estimation. In contrast to the traditional approach, we formulate the recognition problem as one of matching visual appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, pose in the scene, reflectance properties, and the illumination conditions. While shape and reflectance are intrinsic properties of an object and are constant, pose and illumination vary from scene to scene. We present a new compact representation of object appearance that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This large image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a hypersurface. Given an unknown input image, the recognition system projects the image onto the eigenspace. The object is recognized based on the hypersurface it lies on. The exact position of the projection on the hypersurface determines the object's pose in the image. We have conducted experiments using several objects with complex appearance characteristics. These results suggest the proposed appearance representation to be a valuable tool for a variety of machine vision applications.

Introduction

One of the primary goals of an intelligent vision system is to recognize objects in an image and compute their pose in the three-dimensional scene. Such a recognition system has wide applications ranging from autonomous navigation to visual inspection.
For a vision system to be able to recognize objects, it must have models of the objects stored in its memory. In the past, vision research has emphasized the use of geometric (shape) models [1] for recognition. In the case of manufactured objects, these models are sometimes available and are referred to as computer aided design (CAD) models. Most objects of interest, however, do not come with CAD models. Typically, a vision programmer is forced to select an appropriate representation for object geometry, develop object models using this representation, and then manually input this information into the system. This procedure is cumbersome and impractical when dealing with large sets of objects, or objects with complicated geometric properties. It is clear that recognition systems of the future must be capable of acquiring object models without human assistance. In other words, recognition systems must be able to automatically learn the objects of interest.

Visual learning is clearly a well-developed and vital component of biological vision systems. If a human is handed an object and asked to visually memorize it, he or she would rotate the object and study its appearance from different directions. While little is known about the exact representations and techniques used by the human mind to learn objects, it is clear that the overall appearance of the object plays a critical role in its perception. In contrast to biological systems, machine vision systems today have little or no learning capabilities. Hence, visual learning is now emerging as a topic of research interest [6].

*This research was conducted at the Center for Research in Intelligent Systems, Department of Computer Science, Columbia University. This research was supported in part by the David and Lucile Packard Fellowship and in part by ARPA Contract No. DACA 76-92-C-0007. Second author: Shree K. Nayar, Department of Computer Science, Columbia University, New York, N.Y. 10027. nayar@cs.columbia.edu
The goal of this paper is to advance this important but relatively unexplored area of machine vision. 836 Murase From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Here, we present a technique for automatically learning object models from images. The appearance of an object is the combined effect of its shape, reflectance properties, pose in the scene, and the illumination conditions. Recognizing objects from brightness images is therefore more a problem of appearance matching than shape matching. This observation lies at the core of our work. While shape and reflectance are intrinsic properties of the object that do not vary, pose and illumination vary from scene to scene. We approach the visual learning problem as one of acquiring a compact model of the object's appearance under different illumination directions and object poses. The object is "shown" to the image sensor in several orientations and illumination directions. This can be accomplished using, for example, two robot manipulators; one to rotate the object while the other varies the illumination direction. The result is a very large set of object images. Since all images in the set are of the same object, any two consecutive images are correlated to a large degree. The problem then is to compress this large image set into a low-dimensional representation of object appearance.

A well-known image compression or coding technique is based on principal component analysis. Also known as the Karhunen-Loeve transform [5] [2], this method computes the eigenvectors of an image set. The eigenvectors form an orthogonal basis for the representation of individual images in the image set. Though a large number of eigenvectors may be required for very accurate reconstruction of an object image, only a few eigenvectors are generally sufficient to capture the significant appearance characteristics of an object. These eigenvectors constitute the dimensions of what we refer to as the eigenspace for the image set. From the perspective of machine vision, the eigenspace has a very attractive property. When it is composed of all the eigenvectors of an image set, it is optimal in a correlation sense: if any two images from the set are projected onto the eigenspace, the distance between the corresponding points in eigenspace is a measure of the similarity of the images in the l2 norm. In machine vision, the Karhunen-Loeve method has been applied primarily to two problems; handwritten character recognition [3] and human face recognition [8], [9]. These applications lie within the domain of pattern classification and do not use complete parametrized models of the objects of interest.

In this paper, we develop a continuous and compact representation of object appearance that is parametrized by two variables, namely, object pose and illumination. This new representation is referred to as the parametric eigenspace. First, an image set of the object is obtained by varying pose and illumination in small increments. The image set is then normalized in brightness and scale to achieve invariance to image magnification and the intensity of illumination. The eigenspace for the image set is obtained by computing the most prominent eigenvectors of the image set. Next, all images in the object's image set (the learning samples) are projected onto the eigenspace to obtain a set of points. These points lie on a hypersurface that is parametrized by object pose and illumination. The hypersurface is computed from the discrete points using the cubic spline interpolation technique. It is important to note that this parametric representation of an object is obtained without prior knowledge of the object's shape and reflectance properties. It is generated using just a sample of the object.

Each object is represented as a parametric hypersurface in two different eigenspaces; the universal eigenspace and the object's own eigenspace. The universal eigenspace is computed by using the image sets of all objects of interest to the recognition system, and the object eigenspace is computed using only images of the object. We show that the universal eigenspace is best suited for discriminating between objects, whereas the object eigenspace is better for pose estimation. Object recognition and pose estimation can be summarized as follows. Given an image consisting of an object of interest, we assume that the object is not occluded by other objects and can be segmented from the remaining scene. The segmented image region is normalized in scale and brightness, such that it has the same size and brightness range as the images used in the learning stage. This normalized image is first projected onto the universal eigenspace to identify the object. After the object is recognized, the image is projected onto the object eigenspace and the location of the projection on the object's parametrized hypersurface determines its pose in the scene.

The learning of an object requires the acquisition of a large image set and the computationally intensive process of finding eigenvectors. However, the learning stage is done off-line and hence can afford to be relatively slow. In contrast, recognition and pose estimation are often subject to strong time constraints, and our approach offers a very simple and computationally efficient solution. We have conducted extensive experimentation to demonstrate the power of the parametric eigenspace representation. The fundamental contributions of this paper can be summarized as follows. (a) The parametric eigenspace is presented as a new representation of object appearance. (b) Using this representation, object models are automatically learned from appearance by varying pose and illumination.
(c) Both learning and recognition are accomplished without prior knowledge of the object's shape and reflectance.

Visual Learning of Objects

In this section, we discuss the learning of object models using the parametric eigenspace representation. First, we discuss the acquisition of object image sets. The eigenspaces are computed using the image sets and each object is represented as a parametric hypersurface. Throughout this section, we will use a sample object to describe the learning process. In the next section, we discuss the recognition and pose estimation of objects using the parametric eigenspace representation.

Normalized Image Sets

While constructing image sets we need to ensure that all images of the object are of the same size. Each digitized image is first segmented (using a threshold) into an object region and a background region. The background is assigned a zero brightness value and the object region is re-sampled such that the larger of its
Hence, all possible appear- ances of the object can be captured by varying object pose and the light source direction with respect to the viewing direction of the sensor. We will denote each image as $?/ where T is the rotation or pose parame- ter, I represents the illumination direction, and p is the object number. The complete image set obtained for The above described normalizations with respect to scale and brightness give us normalized object image sets and a normalized universal image set. In the fol- lowing discussion, we will simply refer to these as the object and universal image sets. The images sets can be obtained in several ways. If the geometrical model and reflectance properties of an object are known, its images for different pose and illumination directions can be synthesized using well- known rendering algorithms. In this paper, we do not assume that object geometry and reflectance are given. Instead, we assume that we have a sample of each ob- ject that can be used for learning. One approach then is to use two robot manipulators; one grasps the object and shows it to the sensor in different poses while the other has a light source mounted on it and is used to vary the illumination direction. In our experiments, we have used a turntable to rotate the object in a single plane (see Fig. 1). This gives us pose variations about a single axis. A robot manipulator is used to vary the illumination direction. If the recognition system is to be used in an environment where the illumination (due to one or several sources) is not expected to change, the image set can be obtained by varying just object pose. an object is referred to as the object image set and can be expressed as: 1 ’ $1 ) . . . . . . ) kk” iiip; > “““9 I , (2) Here, R and L are the total number of discrete poses and illumination directions, respectively, used to obtain the image set. 
If a total of P objects are to be learned by the recognition system, we can define the universal We assume that the imaging sensor used for learning and recognizing objects has a linear response, i.e. image brightness is proportional to scene radiance. We would like our recognition system to be unaffected by varia- tions in the intensity of illumination or the aperture of the imaging system. This can be achieved by normaliz- ing each of the images in the object and universal sets, such that, the total energy contained in the image is unity, i.e. ] ] x I] = 1, This brightness normalization transforms each measured image % to a normalized im- age x: Figure 1: Setup used for automatic acquisition of ob- ject image sets. The object is placed on a motorized turntable. Computing Eigenspaces Consecutive images in an object image set tend to be correlated to a large degree since pose and illumination variations between consecutive images are small. Our first step is to take advantage of this correlation and compress large image sets into low-dimensional repre- sentations that capture the gross appearance charac- teristics of objects. A suitable compression technique is the Karhunen-Loeve expansion [2] where the eigen- vectors of the image set are computed and used as or- thogonal basis functions for representing individual im- ages. Though, in general, all the eigenvectors of an where: X = [Xl, x2, ““‘Y WIT (4) xn = $(?,), CT = d &&) ” (5) n=i 838 Murase image set are required for the perfect reconstruction of an object image, only a few are sufficient for the repre- sentation of objects for recognition purposes. We com- pute two types of eigenspaces; the universal eigenspace that is obtained from the universal image set, and ob- ject eigenspaces computed from individual object image sets. To compute the universal eigenspace, we first sub- tract the average of all images in the universal set from each image. 
This ensures that the eigenvector with the largest eigenvalue represents the dimension in eigenspace in which the variance of images is maximum in the correlation sense. In other words, it is the most important dimension of the eigenspace. The average c of all images in the universal image set is determined as:

c = (1/M) Σ_{p=1}^{P} Σ_{l=1}^{L} Σ_{r=1}^{R} x_{r,l}^{(p)}    (6)

A new image set is obtained by subtracting the average image c from each image in the universal set:

X ≜ { x_{1,1}^{(1)} - c, ..., x_{R,L}^{(1)} - c, ......, x_{R,L}^{(P)} - c }    (7)

The image matrix X is N x M, where M = RLP is the total number of images in the universal set, and N is the number of pixels in each image. To compute eigenvectors of the image set we define the covariance matrix:

Q = X X^T    (8)

The covariance matrix is N x N, clearly a very large matrix since a large number of pixels constitute an image. The eigenvectors e_i and the corresponding eigenvalues λ_i of Q are to be determined by solving the well-known eigenvector decomposition problem:

λ_i e_i = Q e_i    (9)

All N eigenvectors of the universal set together constitute a complete eigenspace. Any two images from the universal image set, when projected onto the eigenspace, give two discrete points. The distance between these points is a measure of the difference between the two images in the correlation sense. Since the universal eigenspace is computed using images of all objects, it is the ideal space for discriminating between images of different objects.

Determining the eigenvalues and eigenvectors of a large matrix such as Q is a non-trivial problem. It is computationally very intensive and traditional techniques used for computing eigenvectors of small matrices are impractical. Since we are interested only in a small number (k) of eigenvectors, and not the complete set of N eigenvectors, efficient algorithms can be used. In our implementation, we have used the spatial temporal adaptive (STA) algorithm proposed by Murase and Lindenbaum [4]. This algorithm was recently demonstrated to be substantially more efficient than previous algorithms. Using the STA algorithm the k most prominent eigenvectors of the universal image set are computed. The result is a set of eigenvalues { λ_i | i = 1, 2, ..., k } where { λ_1 ≥ λ_2 ≥ ..... ≥ λ_k }, and a corresponding set of eigenvectors { e_i | i = 1, 2, ..., k }. Note that each eigenvector is of size N, i.e. the size of an image. These k eigenvectors constitute the universal eigenspace; it is an approximation to the complete eigenspace with N dimensions. We have found from our experiments that less than ten dimensions of the eigenspace are generally sufficient for the purposes of visual learning and recognition (i.e. k ≤ 10). Later, we describe how objects in an unknown input image are recognized using the universal eigenspace.

Once an object has been recognized, we are interested in finding its pose in the image. The accuracy of pose estimation depends on the ability of the recognition system to discriminate between different images of the same object. Hence, pose estimation is best done in an eigenspace that is tuned to the appearance of a single object. To this end, we compute an object eigenspace from each of the object image sets. The procedure for computing an object eigenspace is similar to that used for the universal eigenspace. In this case, the average c^{(p)} of all images of object p is computed and subtracted from each of the object images. The resulting images are used to compute the covariance matrix Q^{(p)}. The eigenspace for the object p is obtained by solving the system:

λ_i^{(p)} e_i^{(p)} = Q^{(p)} e_i^{(p)}    (10)

Once again, we compute only a small number (k ≤ 10) of the largest eigenvalues { λ_i^{(p)} | i = 1, 2, ..., k } where { λ_1^{(p)} ≥ λ_2^{(p)} ≥ ..... ≥ λ_k^{(p)} }, and a corresponding set of eigenvectors { e_i^{(p)} | i = 1, 2, ..., k }. An object eigenspace is computed for each object of interest to the recognition system.

Parametric Eigenspace Representation

We now represent each object as a hypersurface in the universal eigenspace as well as its own eigenspace. This new representation of appearance lies at the core of our approach to visual learning and recognition. Each appearance hypersurface is parametrized by two parameters; object rotation and illumination direction.

A parametric hypersurface for the object p is constructed in the universal eigenspace as follows. Each image x_{r,l}^{(p)} (learning sample) in the object image set is projected onto the eigenspace by first subtracting the average image c from it and finding the dot product of the result with each of the eigenvectors (dimensions) of the universal eigenspace. The result is a point g_{r,l}^{(p)} in the eigenspace:

g_{r,l}^{(p)} = [ e_1, e_2, ..., e_k ]^T ( x_{r,l}^{(p)} - c )    (11)

Once again the subscript r represents the rotation parameter and l is the illumination direction. By projecting all the learning samples in this manner, we obtain a set of discrete points in the universal eigenspace. Since consecutive object images are strongly correlated, their projections in eigenspace are close to one another. Hence, the discrete points obtained by projecting all the learning samples can be assumed to lie on a k-dimensional hypersurface that represents all possible poses of the object under all possible illumination directions. We interpolate the discrete points to obtain this hypersurface. In our implementation, we have used a standard cubic spline interpolation algorithm [7]. Since cubic splines are well-known we will not describe them here. The resulting hypersurface can be expressed as:

g^{(p)}( θ_1, θ_2 )    (12)

where θ_1 and θ_2 are the continuous rotation and illumination parameters. The above hypersurface is a compact representation of the object's appearance.
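The learning stage just described — brightness normalization (Eq. (5)), mean subtraction (Eqs. (6)-(7)), the eigenvector problem (Eqs. (8)-(9)), projection of learning samples (Eq. (11)), and spline interpolation (Eq. (12)) — can be sketched in a few lines of numpy/scipy. This is a minimal illustration on synthetic data, not the authors' implementation: it uses a dense eigensolver in place of the STA algorithm, a single (pose) parameter, and made-up array sizes and names.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
N, R = 64, 30                        # N pixels per image, R discrete poses
raw = rng.random((N, R)) + 0.1       # synthetic image set (columns are images)

# Brightness normalization (Eq. (5)): scale each image to unit energy.
imgs = raw / np.linalg.norm(raw, axis=0)

# Subtract the average image c (Eqs. (6)-(7)).
c = imgs.mean(axis=1, keepdims=True)
X = imgs - c

# Covariance matrix and eigendecomposition (Eqs. (8)-(9)).
Q = X @ X.T
lam, E = np.linalg.eigh(Q)           # eigenvalues in ascending order
k = 5
idx = np.argsort(lam)[::-1][:k]      # keep the k largest
eigvals, eigvecs = lam[idx], E[:, idx]

# Project each learning sample into the eigenspace (Eq. (11)) ...
g = eigvecs.T @ X                    # k x R matrix of eigenspace points

# ... and interpolate over the pose parameter (Eq. (12), one-parameter case).
theta = np.linspace(0.0, 2 * np.pi, R)
curve = CubicSpline(theta, g, axis=1)  # continuous curve g(theta_1)
```

The spline reproduces the learning samples exactly at the sampled poses and interpolates between them, which is what allows pose estimates finer than the sampling interval later on.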
In a similar manner, a hypersurface is also constructed in the object's eigenspace by projecting the learning samples onto this space:

f_{r,l}^{(p)} = [ e_1^{(p)}, e_2^{(p)}, ..., e_k^{(p)} ]^T ( x_{r,l}^{(p)} - c^{(p)} )    (13)

where, c^{(p)} is the average of all images in the object image set. Using cubic splines, the discrete points f_{r,l}^{(p)} are interpolated to obtain the hypersurface:

f^{(p)}( θ_1, θ_2 )    (14)

Once again, θ_1 and θ_2 are the rotation and illumination parameters, respectively. This continuous parameterization enables us to find poses of the object that are not included in the learning samples. It also enables us to compute accurate pose estimates under illumination directions that lie in between the discrete illumination directions used in the learning stage. Fig. 2 shows the parametrized eigenspace representation of the object shown in Fig. 1. The figure shows only three of the most significant dimensions of the eigenspace since it is difficult to display and visualize higher dimensional spaces. The object representation in this case is a curve, rather than a surface, since the object image set was obtained using a single illumination direction while the object was rotated (in increments of 4 degrees) about a single axis. The discrete points on the curve correspond to projections of the learning samples in the object image set. The continuous curve passing through the points is parametrized by the rotation parameter θ_1 and is obtained using the cubic spline algorithm.

Recognition and Pose Estimation

Consider an image of a scene that includes one or more of the objects we have learned using the parametric eigenspace representation. We assume that the objects are not occluded by other objects in the scene when viewed from the sensor direction, and that the image regions corresponding to objects have been segmented away from the scene image. First, each segmented image region is normalized with respect to scale and brightness as described in the previous section. This
ensures that (a) the input image has the same dimensions as the eigenvectors (dimensions) of the parametric eigenspace, (b) the recognition system is invariant to object magnification, and (c) the recognition system is invariant to fluctuations in the intensity of illumination.

Figure 2: Parametric eigenspace representation of the object shown in Fig. 1. Only the three most prominent dimensions of the eigenspace are displayed here. The points shown correspond to projections of the learning samples. Here, illumination is constant and therefore we obtain a curve with a single parameter (rotation) rather than a surface.

As stated earlier in the paper, the universal eigenspace is best tuned to discriminate between different objects. Hence, we first project the normalized input image y to the universal eigenspace. First, the average c of the universal image set is subtracted from y and the dot product of the resulting vector is computed with each of the eigenvectors that constitute the universal space. The k coefficients obtained are the coordinates of a point z in the eigenspace:

z = [ e_1, e_2, ..., e_k ]^T ( y - c ) = [ z_1, z_2, ..., z_k ]^T    (15)

The recognition problem then is to find the object p whose hypersurface the point z lies on. Due to factors such as image noise, aberrations in the imaging system, and digitization effects, z may not lie exactly on an object hypersurface. Hence, we find the object p that gives the minimum distance d_1^{(p)} between its hypersurface g^{(p)}( θ_1, θ_2 ) and the point z:

d_1^{(p)} = min_{θ_1, θ_2} || z - g^{(p)}( θ_1, θ_2 ) ||    (16)

If d_1^{(p)} is within some pre-determined threshold value, we conclude that the input image is of the object p. If not, we conclude that the input image is not of any of the objects used in the learning stage.
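For a hypersurface represented by densely sampled points, the recognition rule of Eq. (16) amounts to a nearest-point search with a rejection threshold. The sketch below uses synthetic eigenspace points and an arbitrary threshold; the function and variable names are illustrative, not taken from the authors' code.

```python
import numpy as np

def recognize(z, surfaces, threshold):
    """Return (object index, distance) for the surface closest to z,
    or (None, distance) if the minimum distance exceeds the threshold."""
    best_p, best_d = None, np.inf
    for p, pts in enumerate(surfaces):          # pts: (n_samples, k) points on g^(p)
        d = np.linalg.norm(pts - z, axis=1).min()
        if d < best_d:
            best_p, best_d = p, d
    if best_d > threshold:
        return None, best_d                     # input matches no learned object
    return best_p, best_d

# two toy "hypersurfaces" in a k = 2 eigenspace
g0 = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
g1 = np.array([[5.0, 5.0], [6.0, 5.0], [7.0, 5.0]])
p, d = recognize(np.array([1.1, 0.2]), [g0, g1], threshold=1.0)
```

A point far from every sampled surface is rejected, which corresponds to the paper's "not any of the objects used in the learning stage" case.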
It is important to note that the hypersurface representation of objects in eigenspace results in more reliable recognition than if the object is represented as just a cluster of the points g_{r,l}^{(p)}. The hypersurfaces of different objects can intersect each other or even be intertwined, in which cases, using nearest cluster algorithms could easily lead to incorrect recognition results.

Once the object in the input image is recognized, we project the input image y to the eigenspace of the object. This eigenspace is tuned to variations in the appearance of a single object and hence is ideal for pose estimation. Mapping the input image to the object eigenspace gives the k-dimensional point:

z^{(p)} = [ e_1^{(p)}, e_2^{(p)}, ..., e_k^{(p)} ]^T ( y - c^{(p)} )    (17)
        = [ z_1^{(p)}, z_2^{(p)}, ..., z_k^{(p)} ]^T    (18)

The pose estimation problem may be stated as follows: Find the rotation parameter θ_1 and the illumination parameter θ_2 that minimize the distance d_2^{(p)} between the point z^{(p)} and the hypersurface f^{(p)} of the object p:

d_2^{(p)} = min_{θ_1, θ_2} || z^{(p)} - f^{(p)}( θ_1, θ_2 ) ||    (19)

The θ_1 value obtained represents the pose of the object in the input image. Fig. 3(a) shows an input image of the object whose parametric eigenspace was shown in Fig. 2. This input image is not one of the images in the learning set used to compute the object eigenspace. In Fig. 3(b), the input image is mapped to the object eigenspace and is seen to lie on the parametric curve of the object. The location of the point on the curve determines the object pose in the image. Note that the recognition and pose estimation stages are computationally very efficient, each requiring only the projection of an input image onto a low-dimensional (generally less than 10) eigenspace.
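For the one-parameter case of Figs. 2 and 3, the minimization in Eq. (19) reduces to a 1-D search along the object's curve. The toy circular curve f(θ) and the use of a bounded scalar minimizer below are illustrative assumptions, standing in for the interpolated spline f^{(p)}.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def f(theta):
    # toy object curve in a k = 2 eigenspace (stand-in for the spline f^(p))
    return np.array([np.cos(theta), np.sin(theta)])

true_pose = 1.2
z = f(true_pose) + np.array([0.01, -0.02])      # noisy projection of the input

# Eq. (19): minimize the distance between z and the curve over the pose parameter.
res = minimize_scalar(lambda t: np.linalg.norm(z - f(t)),
                      bounds=(0.0, 2.0 * np.pi), method="bounded")
pose = res.x                                     # estimated rotation parameter
```

With a small perturbation of the projected point, the recovered parameter stays close to the true pose, mirroring the sub-degree accuracies reported in the experiments.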
Customized hardware can therefore be used to achieve real-time (frame-rate) recognition and pose estimation.

Experimentation

We have conducted several experiments using complex objects to verify the effectiveness of the parametric eigenspace representation. This section summarizes some of our results. Fig. 1 in the introduction shows the set-up used to conduct the experiments reported here. The object is placed on a motorized turntable and its pose is varied about a single axis, namely, the axis of rotation of the turntable. The turntable position is controlled through software and can be varied with an accuracy of about 0.1 degrees. Most objects have a finite number of stable configurations when placed on a planar surface. For such objects, the turntable is adequate as it can be used to vary pose for each of the object's stable configurations.

We assume that the object is illuminated by the ambient lighting conditions of the environment that are not expected to change between the learning and recognition stages. This ambient illumination is of relatively low intensity. The main source of brightness is an additional light source whose direction can vary. In most of our experiments, the source direction was varied manually. We are currently using a 6 degree-of-freedom robot manipulator (see Fig. 1) with a light source mounted on its end-effector. This enables us to vary the illumination direction via software. Images of the object are sensed using a 512x480 pixel CCD camera and are digitized using an Analogies frame-grabber board.

Figure 3: (a) An input image. (b) The input image is mapped to a point in the object eigenspace. The location of the point on the parametric curve determines the pose of the object in the input image.

Table 1 summarizes the number of objects, light source directions, and poses used to acquire the image sets used in the experiments. For the learning stage, a total of 4 objects were used. These objects (cars) are shown in Fig. 4(a). For each object we have used 5 different light source directions, and 90 poses for each source direction. This gives us a total of 1800 images in the universal image set and 450 images in each object image set. Each of these images is automatically normalized in scale and brightness as described in the previous section. Each normalized image is 128x128 pixels in size. The universal and object image sets were used to compute the universal and object eigenspaces.
The parametric eigenspace representations of the four objects in their own eigenspaces are shown in Fig. 4(b).

Table 1: Image sets obtained for the learning and recognition stages. The 1080 test images used for recognition are different from the 1800 images used for learning.

Figure 4: (a) The four objects used in the experiments. (b) The parametric hypersurfaces in object eigenspace computed for the four objects shown in (a). For display, only the three most important dimensions of each eigenspace are shown. The hypersurfaces are reduced to surfaces in three-dimensional space.

Figure 5: (a) Recognition rate plotted as a function of the number of universal eigenspace dimensions used to represent the parametric hypersurfaces. (b) Histogram of the error (in degrees) in computed object pose for the case where 90 poses are used in the learning stage. (c) Pose error histogram for the case where 18 poses are used in the learning stage. The average of the absolute error in pose for the complete set of 1080 test images is 0.5 in the first case and 1.2 in the second case.

A large number of images were also obtained to test the recognition and pose estimation algorithms. All of these images are different from the ones used in the learning stage. A total of 1080 input (test) images were obtained. The illumination directions and object poses used to obtain the test images are different from the ones used to obtain the object image sets for learning. In fact, the test images correspond to poses and illumination directions that lie in between the ones used for learning.
Each input image is first normalized in scale and brightness and then projected onto the universal eigenspace. The object in the image is identified by finding the hypersurface that is closest to the input point in the universal eigenspace. Unlike the learning process, recognition is computationally simple and can be accomplished on a Sun SPARC 2 workstation in less than 0.2 second. To evaluate the recognition results, we define the recognition rate as the percentage of input images for which the object in the image is correctly recognized. Fig. 5(a) illustrates the sensitivity of the recognition rate to the number of dimensions of the universal eigenspace. Clearly, the discriminating power of the universal eigenspace is expected to increase with the number of dimensions. For the objects used, the recognition rate is poor if less than 4 dimensions are used but approaches unity as the number of dimensions approaches 10. In general, however, the number of dimensions needed for robust recognition is expected to increase with the number of objects learned by the system. It also depends on the appearance characteristics of the objects used. From our experience, 10 dimensions are sufficient for representing objects with fairly complex appearance characteristics such as the ones shown in Fig. 4. Finally, we present experimental results related to pose estimation. Once the object is recognized, the input image is projected onto the object's eigenspace and its pose is computed by finding the closest point on the parametric hypersurface. Once again we use all 1080 input images of the 4 objects. Since these images were obtained using the controlled turntable, the actual object pose in each image is known. Fig. 5(b) and (c) show histograms of the errors (in degrees) in the poses computed for the 1080 images. The error histogram in Fig.
5(b) is for the case where 450 learning samples (90 poses and 5 source directions) were used to compute the object eigenspace. The eigenspace used has 8 dimensions. The histogram in Fig. 5(c) is for the case where 90 learning samples (18 poses and 5 source directions) were used. The pose estimation results in both cases were found to be remarkably accurate. In the first case, the average of the absolute pose error computed using all 1080 images is found to be 0.5 degrees, while in the second case the average error is 1.2 degrees.

Conclusion

In this paper, we presented a new representation for machine vision called the parametric eigenspace. While representations previously used in computer vision are based on object geometry, the proposed representation describes object appearance. We presented a method for automatically learning an object's parametric eigenspace. Such learning techniques are fundamental to the advancement of visual perception. We developed efficient object recognition and pose estimation algorithms that are based on the parametric eigenspace representation. The learning and recognition algorithms were tested on objects with complex shape and reflectance properties. A statistical analysis of the errors in recognition and pose estimation demonstrates the proposed approach to be very robust to factors such as image noise and quantization. We believe that the results presented in this paper are applicable to a variety of vision problems. This is the topic of our current investigation.

Acknowledgements

The authors would like to thank Daphna Weinshall for several useful comments on the paper.

References

[1] R. T. Chin and C. R. Dyer, "Model-Based Recognition in Robot Vision," ACM Computing Surveys, Vol. 18, No. 1, March 1986.

[2] K. Fukunaga, Introduction to Statistical Pattern Recognition, Academic Press, London, 1990.

[3] H. Murase, F. Kimura, M. Yoshimura, and Y.
Miyake, "An Improvement of the Auto-Correlation Matrix in Pattern Matching Method and Its Application to Handprinted 'HIRAGANA'," Trans. IECE, Vol. J64-D, No. 3, 1981.

[4] H. Murase and M. Lindenbaum, "Spatial Temporal Adaptive Method for Partial Eigenstructure Decomposition of Large Images," NTT Technical Report No. 6527, March 1992.

[5] E. Oja, Subspace Methods of Pattern Recognition, Research Studies Press, Hertfordshire, 1983.

[6] T. Poggio and F. Girosi, "Networks for Approximation and Learning," Proceedings of the IEEE, Vol. 78, No. 9, pp. 1481-1497, September 1990.

[7] W. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes in C, Cambridge University Press, Cambridge, 1988.

[8] L. Sirovich and M. Kirby, "Low dimensional procedure for the characterization of human faces," Journal of the Optical Society of America, Vol. 4, No. 3, pp. 519-524, 1987.

[9] M. A. Turk and A. P. Pentland, "Face Recognition Using Eigenfaces," Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 586-591, June 1991.
On the qualitative structure of temporally evolving visual motion fields

Richard P. Wildes
SRI David Sarnoff Research Center
Princeton, New Jersey 08543-5300
wildes@sarnoff.com

Abstract

This paper presents a qualitative analysis that relates stable structures in visual motion fields to properties of corresponding three-dimensional environments. Such an analysis is fundamental in the development of methods for recovering useful information from dynamic visual data without the need for highly accurate and precise sensing. Methodologically, the techniques of singularity theory are used to describe the mapping from image space to velocity space and to relate this mapping to the three-dimensional environment. The specific results of this paper address situations where an optical sensor is undergoing pure rotational or pure translational motion through its environment. For the case of pure rotational motion it is shown that the qualitative structure of visual motion provides information about the axes and relative magnitudes of rotation. For the case of pure translational motion it is shown that the qualitative structure of visual motion provides information about the shape and orientation of viewed surfaces as well as information about the translation itself. Further, the temporal evolution of the visual motion field is described. These results suggest that valuable information regarding three-dimensional environmental structure and motion can be recovered from qualitative consideration of visual motion fields.

Introduction

The visual motion field is the image projection of an environment that is moving relative to an optical sensor. As such, this field is a potentially rich source of information about the environment as well as the relative motion between the environment and sensor.
In response to this possibility, this paper concentrates on developing an understanding of the qualitative properties of the motion field and of its relationship to an impinging visual world. In essence, this understanding is based on an analysis of stable structures in temporally evolving visual motion fields. Structural stability refers to properties that persist independently of minor perturbations to the visual motion field. In practice, this is of considerable importance as the visual motion field is not directly recoverable from optical data. Instead, only a near relative, the optical flow, the apparent motion of brightness patterns, is recoverable (Horn 1986). Further, even obtaining good estimates of the optical flow has proven to be fraught with numerical difficulties. Happily, by concentrating on structurally stable properties of the visual motion field one has a rich source of information without reliance on highly accurate and precise recovery of the flow.

A great deal of research has focused on the interpretation of visual motion; general reviews are available (e.g., Hildreth & Koch 1987). Most relevant for current purposes are other qualitative analyses: The visual motion field has been decomposed into primitive fields to expose its underlying structure (Hoffman 1966; Koenderink & van Doorn 1975). The significance of stationary points has been addressed (Verri et al. 1989). Issues of uniqueness have received attention (Carlsson 1988). Interestingly, the bulk of these studies have couched their analyses in the language of dynamical systems theory (Hirsch & Smale 1974).

In contrast to prior work, this paper employs singularity theory (Arnold 1991) and its application to vector fields (Thorndike et al. 1978) to uncover and study information rich yet structurally stable properties of the flow. Presently, consideration is restricted to cases of visual motion due to either pure rotational or pure translational 3D motion.
Specific contributions of this research include: First, for pure rotational 3D motion, it is shown that in principle qualitative considerations allow for the recovery of the axis of angular rotation, the direction of rotation and the ratio of the magnitudes of angular and radial rotation. Second, for pure translational 3D motion, it is shown that in principle qualitative considerations allow for the recovery of a description of viewed surface shape, information about the direction of viewed surface gradient and information about the direction of translation. Third, the temporal evolution of the visual motion field is described in terms of smooth changes and a set of three events marking more abrupt transitions.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Preliminary developments

For current purposes, it is useful to conceptualize the visual motion field as a vector mapping, f, assigning to each point p = (x, y) in the source space, P, of image positions a single velocity vector v = (u, v) in the target space, V, of image velocities: f : P → V, or v = f(p). It also is useful to introduce the Jacobian of the velocity mapping

J = ( ∂u/∂x   ∂u/∂y )
    ( ∂v/∂x   ∂v/∂y )    (1)

Singular points of the mapping f : P → V are defined to be points p where det(J) = 0. Points that are not singular are said to be regular. In the plane, these distinctions have simple geometric interpretations: A small circle centered about a regular point in P will be mapped to an ellipse in V. Correspondingly, at each of these points there is a direction, a, that leads to the maximal change in length as the circle deforms into the ellipse. The magnitude of the determinant, det(J), gives the ratio of corresponding areas in V and P. However, at singular points the image ellipse degenerates into a line segment; the ratio of areas is zero, i.e., J a = 0.
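A toy mapping makes the regular/singular distinction concrete. The example f(x, y) = (x², y) below, which folds the plane along the line x = 0, is an assumption for illustration and is not taken from the paper:

```python
import numpy as np

def jacobian(x, y):
    # J = [[du/dx, du/dy], [dv/dx, dv/dy]] for the mapping u = x**2, v = y
    return np.array([[2.0 * x, 0.0],
                     [0.0,     1.0]])

det_regular = np.linalg.det(jacobian(1.0, 0.5))   # nonzero: regular point
det_singular = np.linalg.det(jacobian(0.0, 0.5))  # zero: on the fold-line x = 0
```

Away from x = 0 small circles map to ellipses (nonzero area ratio); on the fold-line the determinant vanishes and the local image collapses to a line segment, exactly the degeneracy described above.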
The structurally stable singularities of any mapping f : R² → R² form lines (Whitney 1955). Structurally stable properties are of interest as they are the properties that are robust to slight perturbations of the mapping, e.g., as due to varying observation conditions; for a formal definition of structural stability see (Golubitsky & Guillemin 1973). In the source space, P, the structurally stable lines of singularity are smooth and are referred to as fold-lines. In the target space, V, the images of the fold lines also are lines and are called folds. However, along the fold-lines are special points, called cusps, distinguished by tangency with the a-trajectories. The image of these points in V appear as cusps along the folds. Since fold-lines and folds as well as cusp points and cusps form stable structures in a 2D to 2D mapping, they will serve as the focus in the following structural analysis of the visual motion field. Figure 1 illustrates the geometry of folds and cusps in the velocity mapping.

Structural analysis of visual motion

Define a Cartesian coordinate system at the center of an optical sensor with the Z-axis pointing along the optical axis. Under perspective projection a 3D point P = (X, Y, Z) is mapped to an image point p = (X/Z, Y/Z) = (x, y), where appropriate scaling is taken so that the focal length is unity. Let visual motion derive from a sensor moving through a static environment. (Alternatively, it could be assumed that a fixed sensor observes a dynamic environment.) Take the sensor's translational velocity as T = (t_x, t_y, t_z), while its rotational velocity is Ω = (ω_x, ω_y, ω_z). Then, the equations of rigid body motion and perspective

Figure 1: The velocity mapping f can be thought of as stretching and bending the source space of image positions, P, in three dimensions and then projecting it into the target space of image velocities, V. P is folded along fold-lines that project to folds. Cusp-points correspond to pleats along the fold-lines that project to cusps.
Cusp-points correspond to pleats along the fold-lines that project to cusps.

projection, allow the image velocity, v = (u, v), of an environmental point, P, to be written as

  u = (x t_z − t_x)/Z + ω_x xy − ω_y(x² + 1) + ω_z y
  v = (y t_z − t_y)/Z + ω_x(y² + 1) − ω_y xy − ω_z x   (2)

(Horn 1986). The visual motion field is an array of velocities v, for an imaged 3D environment. Correspondingly, the terms of the velocity Jacobian (1) can be expanded as

  ∂u/∂x = ∂(1/Z)/∂x (x t_z − t_x) + t_z/Z + ω_x y − 2ω_y x
  ∂u/∂y = ∂(1/Z)/∂y (x t_z − t_x) + ω_x x + ω_z
  ∂v/∂x = ∂(1/Z)/∂x (y t_z − t_y) − ω_y y − ω_z
  ∂v/∂y = ∂(1/Z)/∂y (y t_z − t_y) + t_z/Z + 2ω_x y − ω_y x

Now, consider purely rotational 3D motion. The governing conditions are T = 0, while Ω = (ω_x, ω_y, ω_z). In these situations the visual motion field specializes to

  u = ω_x xy − ω_y(x² + 1) + ω_z y
  v = ω_x(y² + 1) − ω_y xy − ω_z x   (4)

An illustration of a visual motion field due to 3D rotation is shown in the left side of Figure 2. Under pure 3D rotation, the condition for singularity, det(J) = 0, dictates that the expression

  2ω_y²x² + 2ω_x²y² − 4ω_xω_y xy + ω_xω_z x + ω_yω_z y + ω_z²   (5)

evaluates to zero. To understand the form of the singularity in the source space, P, consider the discriminant (Korn & Korn 1968) of the condition (5) viewed as a conic section,

  (−4ω_xω_y)² − 4(2ω_y²)(2ω_x²).   (6)

Since the discriminant (6) is identically equal to zero, the singular points are manifest in P as a parabola. This parabola describes the fold-line for the case of pure rotational 3D motion.

Figure 2: A visual motion field for rotation about the X and Y axes (left). The fold-lines in the source space, P, of image positions (middle). The corresponding folds with a cusp in the target space, V, of image velocities (right).

In order to facilitate further analysis, a new coordinate system, (x′, y′), is now adopted so that the parabola-shaped fold-line is symmetric about the y′-axis. Consideration of the related equations for the rotation of coordinate axes shows that the y′-axis is parallel to the direction of the angular component of 3D rotation.
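Expression (5) and the vanishing discriminant (6) can be verified numerically; a minimal sketch (variable names are illustrative):

```python
# Sketch: det(J) for the pure-rotation flow (4) equals expression (5),
# and the conic discriminant (6) of (5) vanishes identically, so the
# singular set is a parabola.

def det_J_rot(x, y, wx, wy, wz):
    # Analytic Jacobian entries of (4) with T = 0.
    du_dx, du_dy = wx * y - 2 * wy * x, wx * x + wz
    dv_dx, dv_dy = -wy * y - wz, 2 * wx * y - wy * x
    return du_dx * dv_dy - du_dy * dv_dx

def expr_5(x, y, wx, wy, wz):
    return (2 * wy**2 * x**2 + 2 * wx**2 * y**2 - 4 * wx * wy * x * y
            + wx * wz * x + wy * wz * y + wz**2)

def discriminant_6(wx, wy):
    # B^2 - 4AC for the quadratic terms of (5):
    # A = 2 wy^2, B = -4 wx wy, C = 2 wx^2.
    return (-4 * wx * wy) ** 2 - 4 * (2 * wy**2) * (2 * wx**2)
```

The agreement of det_J_rot with expr_5 at arbitrary points, and the identically zero discriminant, are exactly the facts used in the parabola argument above.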
Therefore, in the (x′, y′) coordinate system 3D rotation is given as Ω′ = (ω′_x, ω′_y, ω′_z) = (0, (ω_x² + ω_y²)^(1/2), ω_z). The form of the fold-line now can be given as

  y′ = −(2ω′_y/ω_z) x′² − ω_z/ω′_y.   (7)

From the equation for the fold-line in the (x′, y′) coordinate system (7) a number of observations are immediate: First, this parabola intercepts the y′-axis at −ω_z/ω′_y, minus the ratio of the radial to the angular component of 3D rotation. Second, by computing the derivative with respect to x′ it is seen that the parabola opens at the rate −4(ω′_y/ω_z)x′, 4x′ times the inverse of the previous ratio. Third, given the agreement in the signs of these two ratios, the parabola always opens away from the origin. An example fold-line parabola is illustrated in the middle of Figure 2.

To study the locus of singularities in the target space, V′, the equation describing the fold-line in the source space, P′, (7) can be substituted into the equations of the visual motion field (4) to yield

  (u′, v′) = ( −3ω′_y x′² − (ω′_y² + ω_z²)/ω′_y , (2ω′_y²/ω_z) x′³ ).   (8)

This set of equations can be taken as a parametric representation of the fold in V′ with parameter x′. This curve intercepts the u′-axis at −(ω′_y² + ω_z²)/ω′_y, the negative of the ratio of the squared magnitude of rotational motion to the angular component of rotational motion. As x′ differs from zero the curve branches out symmetrically from its u′-intercept, leaving a cusp in its wake. The rate at which the fold opens can be determined by (implicitly) computing the derivative dv′/du′ = −(ω′_y/ω_z)x′, to see that the rate of opening (as a function of x′) is determined by minus the ratio of angular to radial rotation, −ω′_y/ω_z. An illustrative example fold with cusp is shown on the right side of Figure 2.
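These relations can be exercised directly. The sketch below (rotated frame with ω′_x = 0; names assumed) confirms that points on the parabola (7) are singular and map to the cusped curve (8):

```python
# Sketch: the fold-line parabola (7) lies on the singular set of the
# rotational flow, and its image under (4) is the parametric fold (8).
# Here wy, wz stand for wy', wz in the rotated frame (wx' = 0).

def rot_flow(xp, yp, wy, wz):
    # Flow (4) in the rotated frame.
    return (-wy * (xp**2 + 1) + wz * yp,
            -wy * xp * yp - wz * xp)

def det_J(xp, yp, wy, wz):
    # Determinant of the Jacobian of the flow above.
    return 2 * wy**2 * xp**2 + wy * wz * yp + wz**2

def fold_line_y(xp, wy, wz):
    # Equation (7): y' = -(2 wy'/wz) x'^2 - wz/wy'.
    return -(2 * wy / wz) * xp**2 - wz / wy

def fold_point(xp, wy, wz):
    # Equation (8): the fold in V', parameterized by x'.
    return (-3 * wy * xp**2 - (wy**2 + wz**2) / wy,
            (2 * wy**2 / wz) * xp**3)
```

At x′ = 0, fold_point returns the cusp itself, sitting at the u′-intercept −(ω′_y² + ω_z²)/ω′_y.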
At this point it is useful to review by noting the ways that the singularities of the velocity mapping could be used to make inferences about 3D rotational motion: First, the appearance of the fold-lines as a parabola with a single cusp-point could be taken as a signature indicative of rotational 3D motion. Second, the axis of angular rotation can be recovered as the axis of symmetry of the fold-line parabola. Third, the distance of the parabola from the origin as well as the rate of opening of the parabolic fold-line and cusped fold are all directly indicative of the relative magnitude of angular and radial rotations. Finally, notice that the singularities say nothing about 3D environmental structure. This reflects the fact that instantaneous visual motion due to purely rotational 3D motion is independent of environmental layout.

Next, consider purely translational 3D motion. The governing conditions are T = (t_x, t_y, t_z) while Ω = 0. Correspondingly, the visual motion field specializes to

  (u, v) = ( (x t_z − t_x)/Z , (y t_z − t_y)/Z ).   (9)

Illustrations of three different visual motion fields due to 3D translation are shown in the first column of Figure 3. Under pure 3D translation, the condition for singularity, det(J) = 0, dictates that the expression

  (t_z/Z) [ ∂(1/Z)/∂x (x t_z − t_x) + ∂(1/Z)/∂y (y t_z − t_y) + t_z/Z ]   (10)

evaluates to zero. The translation-based singularity equation (10) involves the translation as well as the shape and pose of a surface of regard. To understand this matter consider a surface described at each point by its local tangent plane, then n_x X + n_y Y + n_z Z = d, where the (n_x, n_y, n_z) are normals at points on the surface (X, Y, Z). In this case, (∂(1/Z)/∂x, ∂(1/Z)/∂y) = (n_x, n_y)/d, with 1/Z = (n_x x + n_y y + n_z)/d. Substitution into (10) then yields

  (n_x, n_y, n_z) · (2x − t_x/t_z, 2y − t_y/t_z, 1) = 0.
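A numeric check of this planar-surface reduction (a sketch with assumed names; the normal n, translation t, and plane offset d are arbitrary):

```python
# Sketch: for a planar surface nx X + ny Y + nz Z = d, the bracketed
# factor of condition (10) is proportional to
# (nx, ny, nz) . (2x - tx/tz, 2y - ty/tz, 1).

def bracket_10(x, y, n, t, d):
    # The bracketed factor of (10), using 1/Z = (nx x + ny y + nz)/d.
    nx, ny, nz = n
    tx, ty, tz = t
    inv_z = (nx * x + ny * y + nz) / d
    return (nx / d) * (x * tz - tx) + (ny / d) * (y * tz - ty) + tz * inv_z

def reduced(x, y, n, t):
    # The dot-product form after substitution.
    nx, ny, nz = n
    tx, ty, tz = t
    return nx * (2 * x - tx / tz) + ny * (2 * y - ty / tz) + nz
```

The two expressions differ exactly by the factor t_z/d, so they vanish together, which is all the singularity analysis requires.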
The reduced expression shows that the singularity condition holds when the local surface normal is orthogonal to the view direction scaled by two and displaced by the focus of expansion, (t_x/t_z, t_y/t_z); the locus of singularity is indicative of translation, surface shape and pose.

Figure 3: The left column shows visual motion fields for an observer approaching parabolic (top), elliptic (middle) and hyperbolic (bottom) surfaces. The middle column shows the fold-lines in the source space, P, of image positions. The right column shows the folds in the target space, V, of image velocities.

It is illustrative to consider in detail a particular set of examples: Let a surface be represented as a Monge patch, (X, Y, Z(X, Y)) with Z(X, Y) = ½κ₁X² + ½κ₂Y² + κ₃XY + pX + qY + r, so that

  1/Z = 1/r − (p/r)x − (q/r)y − κ₃xy − ½κ₁x² − ½κ₂y²,

through second order. When this surface model is made use of in the singularity condition (10) it is found that the locus of singularities in the source space, P, is a quartic in x and y that can be written as the product of two conic sections. However, one of these conic sections corresponds to a degenerate situation where the underlying 3D surface recedes to infinity. Along this contour all the velocities map to a single point, (0, 0). Consequently, subsequent attention will be restricted to the other conic section. To study this curve, it is convenient to immediately adopt a new coordinate system, (x′, y′), by rotating the axes so as to eliminate the cross terms in xy. Following this operation, the contour can be written as

  3κ_x r t_z x′² + 3κ_y r t_z y′² + 2(2p′t_z − κ_x r t′_x)x′ + 2(2q′t_z − κ_y r t′_y)y′ − 2(p′t′_x + q′t′_y + t_z) = 0   (11)

where it turns out that κ_x and κ_y are given by diagonalization of the coefficients of the quadratic terms in the Monge patch representation of the surface.
As a point of departure on understanding the equation describing the singular points (11), suppose that there is no angular component to translation, i.e., t′_x = t′_y = 0, and that the surface gradient vanishes at the origin, i.e., p′ = q′ = 0. Also, for ease of notation, primes will be dropped for the rest of this section of the paper; the fact that all calculations are being performed in the (x′, y′) coordinate system will be implicit. Under these conditions, equation (11) evaluates to

  κ_x x² + κ_y y² = 2/(3r).   (12)

Consideration of this simplified singularity condition (12) shows that it describes an origin-centered conic section that is: an ellipse if sgn(κ_x) = sgn(κ_y), a hyperbola if sgn(κ_x) = −sgn(κ_y), or parallel straight lines if κ_x = 0 or κ_y = 0; i.e., the curves are indicative of Dupin's indicatrix for the underlying 3D surface Z. Also, the major axis of the conic section is along the x-axis if |κ_x| < |κ_y| or along the y-axis if |κ_x| > |κ_y|. The second column of Figure 3 shows representative examples.

What conclusions can be reached about the form of the fold-lines under less restricted conditions? First, suppose that the restriction against angular translation is removed, i.e., T = (t_x, t_y, t_z). In this case, the form of the fold-line (12) becomes

  κ_x (x − t_x/(3t_z))² + κ_y (y − t_y/(3t_z))² = 2/(3r) + (κ_x t_x² + κ_y t_y²)/(9t_z²).

In words, the addition of angular motion does not change the qualitative shape of the curve. However, the size is adjusted and the center moves along the axis of angular translation. Second, suppose that the surface gradient is no longer required to vanish at the origin. In this case, the fold-line equation can be written as

  κ_x (x + 2p/(3rκ_x))² + κ_y (y + 2q/(3rκ_y))² = 2/(3r) + 4p²/(9r²κ_x) + 4q²/(9r²κ_y).

This equation shows that a surface gradient also does not change the qualitative shape of the fold-lines, although the size is altered. The center of the curve again moves in the direction specified by the surface gradient, but as weighted by the surface curvatures κ_x and κ_y.
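Both observations can be exercised with a small sketch (assumed names; the elliptic case is used for the center-shift check, with the completed-square form taken from the derivation above):

```python
# Sketch: (a) classify the fold-line conic (12) by the signs of kx, ky,
# mirroring Dupin's indicatrix; (b) check that the center-shifted form
# with angular translation restored satisfies (11) with p' = q' = 0.

import math

def classify(kx, ky):
    if kx == 0 or ky == 0:
        return "parallel lines"   # locally parabolic surface
    if (kx > 0) == (ky > 0):
        return "ellipse"          # locally elliptic surface
    return "hyperbola"            # locally hyperbolic surface

def eq_11_lhs(x, y, kx, ky, r, tx, ty, tz):
    # Equation (11) with p' = q' = 0.
    return (3 * kx * r * tz * x**2 + 3 * ky * r * tz * y**2
            - 2 * kx * r * tx * x - 2 * ky * r * ty * y - 2 * tz)

def shifted_point(theta, kx, ky, r, tx, ty, tz):
    # A point of kx (x - tx/(3tz))^2 + ky (y - ty/(3tz))^2 = rhs (kx, ky > 0).
    rhs = 2 / (3 * r) + (kx * tx**2 + ky * ty**2) / (9 * tz**2)
    return (tx / (3 * tz) + math.sqrt(rhs / kx) * math.cos(theta),
            ty / (3 * tz) + math.sqrt(rhs / ky) * math.sin(theta))
```

Every point generated by shifted_point lands on the zero set of (11), confirming that angular translation only moves and rescales the conic without changing its type.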
Finally, suppose that both angular translation and non-vanishing gradient are allowed. In this case, the equation of the fold-line can be written as

  κ_x (x + (2p t_z − κ_x r t_x)/(3κ_x r t_z))² + κ_y (y + (2q t_z − κ_y r t_y)/(3κ_y r t_z))²
    = 2(p t_x + q t_y + t_z)/(3r t_z) + (2p t_z − κ_x r t_x)²/(9κ_x r²t_z²) + (2q t_z − κ_y r t_y)²/(9κ_y r²t_z²).

As with the other cases, it is seen that the qualitative shape of the contour remains the same. However, now the contour's center is displaced to a point that is the vector sum of the centers for angular translation and non-vanishing gradient.

To study the locus of singularities in the target space, V, begin by again considering the restricted situation where there is no angular translation and the surface gradient vanishes at the image origin. In this case, the corresponding equation describing the fold-line in the source space, P, (12) can be substituted into the equations of the translational visual motion field (9). This operation yields

  (u, v) = (2t_z/(3r)) (x, y), with κ_x x² + κ_y y² = 2/(3r).   (13)

This set of equations can be taken as a parametric representation of the fold with parameter x. The shape of the corresponding fold in the target space is a parabola, ellipse or hyperbola depending on the curvature terms, κ_x and κ_y, in exactly the same way as did the fold-lines in the source space; the folds are indicative of Dupin's indicatrix as were the fold-lines. Three examples of folds are shown in the third column of Figure 3. As in the source space, the addition of angular translation and surface gradient causes the fold conics to drift in position. However, unlike the fold-lines, the folds slowly deform as they drift toward the target space periphery.

Again, it is useful to review by explicitly noting several ways that the singularities of the visual motion field can be used to interpret 3D translational motion: First, the shape of the fold-lines is indicative of the qualitative 3D surface shape. For the particular case of quadratic surface patches, the fold-lines form an ellipse, hyperbola or a pair of parallel straight lines according to whether the surface is locally elliptic, hyperbolic or parabolic in shape. This same signature is apparent in the corresponding folds in the target space; however, here they can be deformed by angular translation and surface gradient. Second, the major and minor axes of the surface can be recovered from the corresponding fold-line conic section major and minor axes. Third, the directions of angular translation and surface gradient are constrained by the off-set of the fold-lines from the image origin.

Figure 4: The swallowtail singularity describes the condition when a fold sheet and rib intersect. The swallowtail event occurs when a τ slice contains this type of intersection.

Temporal evolution

In general, visual motion fields evolve in time. Correspondingly, the patterns of the singular points, the folds and cusps, vary with time. More precisely, consider a family of flows, {f_t : P → V}, −∞ < t < ∞, parameterized by t, time. t can be thought of as assigning a particular time to each of the mappings in a given series. Another way of looking at matters is given by taking the family of functions f_t in tandem to define a single function g : ℝ³ → ℝ³, i.e., g : (x, y, t) → (u, v, τ) with the form (u, v, τ) = (u(x, y, t), v(x, y, t), t). The velocity map at any given time now corresponds to a τ slice. In the (u, v, τ)-space, the folds define surfaces called fold sheets, while the cusps define lines in those surfaces called ribs. Additionally, a third structure of interest now presents itself, the swallowtail. A swallowtail occurs when a fold sheet and a rib intersect. The canonical form for the swallowtail singularity is

  (u, v, τ) = (x⁴ + x²y + tx, y, t).   (14)

Figure 4 illustrates the associated geometry. Of all elementary singularities, only three, the fold, cusp and swallowtail, are stable with respect to general perturbations to the time dependent flow (Golubitsky & Guillemin 1973).
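The canonical swallowtail (14) can be probed slice by slice; a minimal sketch (illustrative names):

```python
# Sketch: tau slices of the canonical swallowtail (14),
# (u, v, tau) = (x^4 + x^2 y + t x, y, t). At fixed t the slice map
# (x, y) -> (u, v) has det(J) = 4x^3 + 2xy + t, so each slice carries
# its own fold-line.

def swallowtail_slice(x, y, t):
    return x**4 + x**2 * y + t * x, y

def slice_det_J(x, y, t):
    # du/dx = 4x^3 + 2xy + t, du/dy = x^2, dv/dx = 0, dv/dy = 1.
    return 4 * x**3 + 2 * x * y + t

def fold_line_y(x, t):
    # Solving det(J) = 0 for y gives the slice's fold-line (x != 0).
    return -(4 * x**3 + t) / (2 * x)
```

Sweeping t traces out the fold sheet of the swallowtail, and the τ slice through the sheet/rib intersection is the event the text describes.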
Figure 5: The lips (left) and beak to beak (right) events can occur when a τ slice is tangent to a rib.

Typically, the stable structures of the visual motion field evolve smoothly in time. Reference to the previous section's discussion reveals much of what can be expected. For pure rotational 3D motion: As the direction of angular rotation changes, the orientation of the fold-line parabola in the source space also changes, as does the corresponding fold in the target space. As the ratio of angular to radial rotation changes, the fold-line parabola opens and closes and moves toward and away from the origin. Corresponding changes take place with the rate of opening of the two arms of the fold and the distance of the cusp from the origin. As time passes the fold-line and fold sweep out surfaces; the cusp creases the fold sheet with a rib. For pure translational 3D motion: As the observer moves toward or away from a surface of regard the fold-lines and folds expand and contract, without changing their characteristic shape. With angular translation the contours drift in spatial position. Again, surfaces are traced by the smooth evolution of the fold-lines and folds.

In addition to the smooth evolution of the singularities in time, more abrupt change can occur. In particular, there are special points along the ribs, called events, that demarcate more abrupt change and that determine the overall temporal evolution. Strikingly, there are only three distinct types of events for concern in the analysis of vector fields (Thorndike et al. 1978). The first of these events is associated with the swallowtail singularity (14). This event occurs when a τ slice contains an intersection of a fold sheet with a rib; see Figure 4. Two additional types of events can occur when a τ slice is tangent to a curved rib: (i) In the lips event an initially structureless region becomes tangent to a rib and subsequently gives rise to a doubly cusped region.
(ii) In the beak to beak event two target space regions lose their identity as the event is passed. Lips and beak to beak events are described by (u, v, τ) = (x³ ± xy² + tx, y, t), respectively. Figure 5 illustrates these events.

Summary

Qualitative consideration of a visual motion field yields information about the 3D geometry and motion of an impinging environment. In this paper attention has focused on cases where an optical sensor undergoes pure rotation or pure translation. For rotation, information is available about the axes and relative magnitudes of angular and radial rotation. For translation, information is available about the shape and orientation of visible surfaces as well as about the translation itself. In either case, the structure of the flow typically evolves smoothly in time. However, on occasion discrete events occur that add greater richness to the structure.

References

Arnold, V. I. 1991. The Theory of Singularities and Its Applications. Cambridge University Press, NY, NY.

Carlsson, S. 1988. Information in the geometric structure of retinal flow fields. In Proceedings of the International Conference on Computer Vision. 629-633.

Golubitsky, M. and Guillemin, V. 1973. Stable Mappings and their Singularities. Springer, NY, NY.

Hildreth, E. C. and Koch, C. 1987. The analysis of visual motion: From computational theory to neuronal mechanisms. Annual Review of Neuroscience.

Hirsch, M. W. and Smale, S. 1974. Differential Equations, Dynamical Systems and Linear Algebra. Academic Press, NY, NY.

Hoffman, W. C. 1966. Lie group theory of visual perception. Journal of Mathematical Psychology 3:65-165.

Horn, B. K. P. 1986. Robot Vision. MIT Press, Cambridge, MA.

Koenderink, J. J. and van Doorn, A. J. 1975. Invariant properties of the motion parallax field due to the movement of rigid bodies relative to an observer. Optica Acta 22:717-723.

Korn, G. A. and Korn, T. M., editors 1968. Mathematical Handbook for Scientists and Engineers, Second Edition. McGraw-Hill, NY, NY.

Thorndike, A. S.; Cooley, C. R.; and Nye, J. F. 1978. The structure and evolution of flow fields and other vector fields. Journal of Physics A: Mathematical and General 11(8):1455-1490.

Verri, A.; Girosi, F.; and Torre, V. 1989. Mathematical properties of the two-dimensional motion field: From singular points to motion parameters. Journal of the Optical Society of America A 6(5):698-712.

Whitney, H. 1955. On singularities of mappings of Euclidean spaces. I. Mappings of the plane into the plane. Annals of Mathematics 62(3):374-410.
Polly: A Vision-Based Artificial Agent

Ian Horswill
MIT AI Lab
545 Technology Square
Cambridge, MA 02139
ian@ai.mit.edu

Abstract

In this paper I will describe Polly, a low cost vision-based robot that gives primitive tours. The system is very simple, robust and efficient, and runs on a hardware platform which could be duplicated for less than $10K US. The system was built to explore how knowledge about the structure of the environment can be used in a principled way to simplify both visual and motor processing. I will argue that very simple and efficient visual mechanisms can often be used to solve real problems in real (unmodified) environments in a principled manner. I will give an overview of the robot, discuss the properties of its environment, show how they can be used to simplify the design of the system, and discuss what lessons can be drawn for the design of other systems.¹

Introduction

In this paper, I will describe Polly, a simple artificial agent that uses vision to give primitive tours of the 7th floor of the MIT AI lab (see figure 1). Polly is built from minimalist machinery that is matched to its task and environment. It is an example of Agre's principle of machinery parsimony [Agre, 1988], and is intended to demonstrate that very simple visual machinery can be used to solve real tasks in real, unmodified environments in a principled manner.

Polly roams the hallways of the laboratory looking for visitors. When someone approaches the robot, it stops, introduces itself, and offers the visitor a tour, asking them to answer by waving their foot around (the robot can only see the person's legs and feet). When the person waves their foot, the robot leads them around the lab, recognizing and describing places as it comes to them.
¹Support for this research was provided in part by the University Research Initiative under Office of Naval Research contract N00014-86-K-0685, and in part by the Advanced Research Projects Agency under Office of Naval Research contract N00014-85-K-0124.

824 Horswill

Polly is very fast and simple. Its sensing and control systems run at 15Hz, so all percepts and motor commands are updated every 66ms. It also uses very little hardware (an equivalent robot could be built for approximately $10K), and consists of less than a thousand lines of Scheme code.² All computation is done on-board on a low-cost digital signal processor (a TI C30 with 64KW of RAM). Polly is also among the best tested mobile robots to date, having seen hundreds of hours of service, and has a large behavioral repertoire.

Polly falls within the task-based or active approach to vision [Horswill, 1988][Aloimonos, 1990][Ballard, 1991][Ikeuchi and Herbert, 1990][Blake and Yuille, 1992]. While the work described here cannot prove the efficacy of this approach, it does give an example of a large system which performs an interesting high-level task using these sorts of techniques.

Polly's efficiency is due to its specialization to its task and environment. Many authors have argued that simple machinery is often sufficient for performing intelligent behavior because of the special organizing structures of the environment (see, for example, [Rosenschein and Kaelbling, 1986][Brooks, 1986][Agre, 1988]). If we are to use such structures in a routine manner to engineer intelligent systems, then we must be able to isolate individual structures or properties of the environment and explain their computational ramifications. In this work, I have used the technique of stepwise transformation to draw out the relationships between a system specialized to its environment and a more general system.
We look for a series of transformations, each of which conditionally preserves behavior given some constraint on the environment, that will transform the general system into the specialized system. The resulting derivation of the specialized system from the general system makes the additional assumptions made by the specialized system explicit. It also makes their computational ramifications explicit by putting them in correspondence with particular transformations which simplify particular computational subproblems. In effect, we imagine that the general system has been run through a "compiler" that has used declarations about the environment (constraints) to progressively optimize it until the specialized system is obtained. If we can "solve" for the declarations and associated optimizations which derive a system from a more general system, then we can reuse those optimizations in the design of future systems. Space precludes either a formal treatment of this approach or detailed analyses. See [Horswill, 1993a] or [Horswill, 1993b] for detailed discussions.

²Not including device drivers and data structures.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Figure 1: The approximate layout of Polly's environment (not to scale), and its coordinate system.

In the next section, I will discuss some of the useful properties of Polly's environment and allude to the transformations which they allow. Then I will discuss the actual design of the system, including abbreviated forms of the constraint derivations for parts of the visual system. Finally, I will discuss the performance and failure modes of the system in some detail and close with conclusions.

Computational properties of office environments

Office buildings are actively structured to make navigation easier [Passini, 1984].
The fact that they are structured as open spaces connected by networks of corridors means that much of the navigation problem can be solved by corridor following. In particular, we can reduce the problem of finding paths in space to finding paths in the graph of corridors. The AI lab is even simpler because the corridor graph has a grid structure, and so we can attach coordinates to the vertices of the graph and use difference reduction to get from one pair of coordinates to another.

Determining one's position in the grid is also made easier by special properties of office environments: the lighting of offices is generally controlled; the very narrowness of their corridors constrains the possible viewpoints from which an agent can see a landmark within the corridors. I will refer to this latter property as the constant viewpoint constraint: that the configuration space of the robot is restricted so that a landmark can only be viewed from a small number of directions. These properties make the recognition of landmarks in a corridor an easier problem than the fully general recognition problem. Thus very simple mechanisms often suffice.

Another useful property of office buildings is that they are generally carpeted and their carpets tend to be either regularly textured or not textured at all. The predictability of the texturing of the carpet means that any region of the image which isn't textured like the carpet is likely an object resting on the ground (or an object resting on an object resting on the ground). Thus obstacle detection can be reduced to carpet detection, which may be a simpler problem admitting simpler solutions. In the case of the MIT AI lab, the carpet has no texture and so a texture detector suffices for finding obstacles. We will refer to this as the background-texture constraint (see [Horswill, 1993a] for a more detailed discussion).

Finally, office buildings have the useful property that they have flat floors and so objects which are farther away will appear higher in the image, provided that the objects rest on the floor. This provides a very simple way of determining the rough depth of such an object. We will refer to this as the ground-plane constraint: that all obstacles rest on a flat floor (see [Horswill, 1993a]).

Figure 2: Portion of visual system devoted to navigation.

Architecture

Conceptually, Polly consists of a set of parallel processes connected with fixed links (see [Brooks, 1986][Agre and Chapman, 1987][Rosenschein and Kaelbling, 1986] for examples of this type of methodology). The actual implementation is a set of Scheme procedures, roughly one per process, with variables used to simulate wires. On each clock tick (66ms), the robot grabs a new image, runs each process in sequence to recompute all visual system outputs, and computes a new motor command which is fed to the base computer.

Physically, the robot consists of an RWI B12 robot base which houses the motors and motor control logic, a voice synthesizer, a front panel for control, a serial port for downloading, a TMS320C30-based DSP board (the main computer), a frame grabber, and a microprocessor for servicing peripherals. All computation is done on board.

Visual system

The visual system processes a 64 x 48 image every 66ms and generates a large number of "percepts" from it (see figure 3) which are updated continuously. Most of these are related to navigation, although some are devoted to person detection or sanity checking of the image. Because of space limitations, we will restrict ourselves to the major parts of the navigation section.

open-left?  open-region?  person-direction
open-right?  blind?  wall-ahead?
blocked?  light-floor?  wall-far-ahead?
left-turn?  dark-floor?  vanishing-point
right-turn?  person-ahead?
farthest-direction

Figure 3: Partial list of percepts generated by the visual system.

The central pipeline in figure 2 ("smoothing" ... "compress map") computes depth information. The major representation used here is a radial depth map, that is, a map from direction to distance, similar to the output of a sonar ring. Computing depth is a notoriously difficult problem. The problem is greatly simplified for Polly by the use of domain knowledge. By the ground plane constraint, we can use height in the image plane as a measure of distance. Thus the system³ can be substituted for any system which computes a radial depth map, where F/G is any system which does figure/ground separation (labeling of each pixel as figure or background), and RDM is a transformation from a bitmap to a vector defined by

  RDM(x) = min{ y | the point (x, y) isn't floor }

The effect of this is to reduce the problem of depth recovery to the figure-ground problem. The figure-ground problem is, if anything, more difficult than the depth-recovery problem in the general case, so one might expect this to be a bad move. However, by the background-texture constraint, we can use any operator which responds to the presence of texture. Polly presently uses a simple edge detector (thresholded magnitude of the intensity gradient).

The visual system then compresses the depth map into three values, left-distance, right-distance, and center-distance, which give the closest distance on the left side of the image, right side, and the center third, respectively. Other values are then derived from these. For example, open-left? and open-right? are true when the corresponding distance is over threshold. left-turn? and right-turn? are true when the depth map is open on the correct side and the robot is aligned with the corridor.

The visual system also generates the vanishing point of the corridor. Bellutta et al [Bellutta et al., 1989] describe a system which extracts vanishing points by running an edge finder, extracting straight line segments, and performing 2D clustering on the pairwise
Bellutta et al [Bellutta et al., 19891 describe a system which extracts vanishing points by running an edge finder, extracting straight line seg- ments, and performing 2D clustering on the pairwise 3Here the + y s mbol is used to denote input from the sensors, while + denotes signals moving within the system. Constraint Computational problem Ground plane Depth perception Figure 4: Constraints assumed by the visual system and the problems they helped to simplify. Note that “known camera tilt” is more a constraint on the agent, than on the habitat. intersections of the edge segments. We can represent it schematically as: 3 edges --+ lines --+ intersect + cluster -+ We can simplify the system by making stronger assump- tions about the environment. We can remove the step of grouping edge pixels into segments by treating each edge pixel as its own tiny segment. This will weight longer lines more strongly, so the lines of the corridor must dominate the scene for this to work properly. If the edges are strong, then a simple edge detector will suffice. Polly uses a gradient threshold detector. If the tilt-angle of the camera is held constant by the camera mount, then the vanishing point will always have the same y coordinate, so we can reduce the clustering to a 1D problem. Finally, if we assume that the positions and orientations of the non-corridor edges are uniformly distributed, then we can replace the clustering opera- tion, which looks for modes, the mean. After all these optimizations, we have the following system: * 101 --+ y intersect -+ Z -+ The system first computes the gradient threshold edges, then intersects the tangent line of each edge pixel with the horizontal line in which the vanishing point is known to lie, then computes the mean of the x coordinate of the intersections. The variance is also reported as a confidence measure. The constraints assumed by these systems are sum- marized in figure 4. The discussion here has been nec- essarily brief. 
For a more detailed derivation, see [Hor- swill, 1993a]. Low-level navigation The robot’s motion is controlled by three parallel sys- tems. The distance control system drives forward with a velocity of a(center-distance-dst,p), where dstop is the threshold distance for braking and (Y is a gain pa- rameter. Note that it will back up if it overshoots or if it is aggressively approached. The corridor follower drives the turning motor so as to keep the vanishing point in the middle of the screen and keep left-distance and 826 Horswill Figure 5: Example place frames. Figure 6: Architecture of the complete navigation sys- tem. right-distance equal. The corridor follower switches into wall-following mode (keeping the wall at a constant distance) when only sees one wall. Finally, the turn unit drives the base in open-loop turns when instructed to by higher-level systems. The corridor follower is over- ridden during open-loop turns. During large turns, the distance control system inhibits forward motion so as to avoid suddenly turning into a wall. For more de- tailed of the low-level navigation system discussion, see [Horswill, 1993b]. all the frames and find the best match at 15Hz using only a fraction of the CPU. Place recognition Polly generally runs in the corridors and open spaces of the 7th floor of the AI lab at MIT. It keeps track of its position by recognizing landmarks and larger-scale “districts,” which are given to it in advance. The lab, and some of its landmarks are shown in figure 1. The system can also recognize large-scale “districts” and correct its position estimate even if it cannot deter- mine exactly where it is. There is evidence that humans use such information (see [Lynch, 1960]). The robot presently recognizes the two long east/west corridors as districts. For example, when the robot is driving west and sees a left turn, it can only be in the southern ew corridor, so its y coordinate must be 10, regardless of its x position. 
This allows the robot to quickly recover from getting lost. At present, the recognition of districts is implemented as a separate computation, but I intend to fold it into the frame system.

The corridors of the lab provide a great deal of natural constraint on the recognition of landmarks. Since corridors run in only two perpendicular directions, which we will arbitrarily designate as north-south (ns) and east-west (ew), they form natural coordinate axes for representing position. The robot's base provides rough rotational odometry which is good enough for the robot to distinguish which of four directions it is moving in, and so, in what type of corridor it must be. Each distinctive place in the lab is identified by a pair of qualitative coordinates, shown in the figure. These coordinates are not metrically accurate, but they do preserve the ordering of places along each of the axes. Information about places is stored in an associative memory which is exhaustively searched on every clock tick (66ms). The memory consists of a set of frame-like structures, one per possible view of each landmark (see figure 5). Each frame gives the expected appearance of a place from a particular direction (north/south/east/west). Frames contain a place name, qualitative coordinates, a direction, and some specification of the landmark's appearance: either a 16 × 12 grey-scale image or a set of qualitative features (left-turn, right-turn, wall, dark-floor, light-floor). No explicit connectivity information is represented. Frames can also be tagged with a speech to give during a tour or an open-loop turn to perform.

High-level navigation

By default, the corridor follower is in control of the robot at all times. The corridor follower will always attempt to go forward and avoid obstacles unless it is overridden. Higher-level navigation is implemented by a set of independent processes which are parasitic upon the corridor follower.
These processes control the robot by enabling or disabling motion, and by forcing open-loop turns. The navigator unit chooses corridors by performing difference reduction of the (qualitative) goal coordinates and the present coordinates. When there is a positive error in the y coordinate, it will attempt to drive south, or north for negative error, and so on. This technique has the advantages of being very simple to implement and very tolerant of place recognition errors. If a landmark is missed, the system need not replan its path. When the next landmark in the corridor is noticed, it will automatically return to the missed landmark. The "unwedger" unit forces the robot to turn when the corridor follower is unable to move for a long period of time (2 seconds). Finally, a set of action-sequencers (roughly plan executives for hand-written plans) are used to implement tour giving and operations such as docking. The sequencers execute a language of high-level commands such as "go to place" which are implemented by sending commands to lower-level modules such as the navigator. While at first glance the exhaustive frame search may seem to be an inefficient mechanism, it is in fact quite compact. The complete set of frames for the 7th floor requires approximately 1KByte of storage. The system can scan all the frames and find the best match at 15Hz using only a fraction of the CPU.

Performance

At present the system runs at 15Hz, but is I/O bound. The navigation system can safely run the robot at up to 1.5m/s; however, the base becomes unstable at that speed. The system is very robust, particularly the low-level locomotion system, which has seen hundreds of hours of service. The place recognition and navigation systems are newer and so less well tested. The failure modes of the component systems are summarized below.

Low-level navigation

All locomotion problems were obstacle detection problems.
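Stepping back to the navigator described above, its difference-reduction choice over qualitative coordinates can be sketched in a few lines. The function name and the east/west sign convention are assumptions; the paper only specifies positive y error → south, negative → north.

```python
def choose_direction(pos, goal):
    """Difference reduction over qualitative (x, y) corridor coordinates:
    reduce the y error first (positive error -> south, negative -> north,
    as in the text), then the x error.  Returns None when at the goal.
    The east/west sign convention here is an assumption."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    if dy > 0:
        return "south"
    if dy < 0:
        return "north"
    if dx > 0:
        return "east"
    if dx < 0:
        return "west"
    return None
```

Because the choice depends only on the current and goal coordinates, a missed landmark requires no replanning: the next recognized place simply produces a fresh error vector.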
The corridor follower runs on all floors of the AI lab building on which it has been tested (floors 3-9) except for the 9th floor, which has very shiny floors; there the system brakes for the reflections of the overhead lights in the floor. The present system also has no memory and so cannot brake for an object unless it is actually in its field of view. The system's major failure mode is braking for shadows. If shadows are sufficiently strong they will cause the robot to brake when there is in fact no obstacle. This is less of a problem than one would expect because shadows are generally quite diffuse and so will not necessarily trigger the edge detector. Finally, several floors have multiple carpets, each with a different color. The boundaries between these carpets can thus be mistaken for obstacles. This problem was dealt with by explicitly recognizing the boundary when directly approaching it and informing the edge detector to ignore horizontal lines. The system has also been tested recently at Brown University. There the robot had serious problems with light levels, but performed well where there was sufficient light for the camera. In some areas the robot had problems because the boundary between the floor and the walls was too weak to be picked up by the edge detector. We would expect any vision system to have problems in these cases, however.

Place recognition

Place recognition is the weakest part of the system. While recognition by matching images is quite general, it is fragile. It is particularly sensitive to changes in the world. If a chair is in view when a landmark template is photographed, then it must continue to be in view, and in the same place and orientation, forever. If the chair moves, then the landmark becomes unrecognizable until a new view is taken. Another problem is that the robot's camera is pointed at the floor and there isn't very much interesting to be seen there.
For these reasons, place recognition is restricted to corridor intersections represented by feature frames, since they are more stable over time. The one exception is the kitchen, which is recognized using images. In ten trials, the robot recognized the kitchen eight times while going west, and ten times while going east. Westward recognition of the kitchen fails completely when the water bottles in the kitchen doorway are moved, however. Both methods consistently miss landmarks when there is a person standing in the way. This often leads it to miss a landmark immediately after picking up a visitor. They also fail if the robot is in the process of readjusting its course after driving around an obstacle or if the corridor is very wide and has a large amount of junk in it. Both these conditions cause the constant-viewpoint constraint to fail. The former can sometimes cause the robot to hallucinate a turn because one of the walls is invisible, although this is rare. Recognition of districts is very reliable, although it can sometimes become confused if the robot is driven in a cluttered open space rather than a corridor.

High-level navigation

High-level navigation performance is determined by the accuracy of place recognition. In general, the system works flawlessly unless the robot gets lost. For example, the robot has often run laps (implemented by alternately giving opposite corners as goals to the navigator) for over an hour without any navigation faults. When the robot gets lost, the navigator will generally overshoot and turn around. If the robot gets severely lost, the navigator will flail around until the place recognition system gets reoriented. The worst failure mode is when the place recognition system thinks that it is east of its goal when it is actually at the western edge of the building. In this case, the navigator unit and the unwedger fight each other, making opposite course corrections.
The place recognition system should probably be modified to notice that it is lost in such situations so that the navigator will stop making course corrections until the place recognition system relocks. This has not yet been implemented, however. Getting lost is a more serious problem for the action sequencers, since they are equivalent to plans but there is no mechanism for replanning when a plan step fails. This can be mitigated by using the navigator to execute individual plan steps, which amounts to shifting responsibility from plan-time to run-time.

Conclusions

Many vision-based mobile robots have been developed in the past (see for example [Kosaka and Kak, 1992] [Kriegman et al., 1987] [Crisman, 1992] [Turk et al., 1987]). The unusual aspects of Polly are its relatively large behavioral repertoire, simple design, and principled use of special properties of its environment. Polly's efficiency and reliability are due to a number of factors. Specialization to a task allows the robot to compute only the information it needs. Specialization to the environment allows the robot to substitute simple computations for more expensive ones. The use of multiple strategies in parallel reduces the likelihood of catastrophic failure (see [Horswill and Brooks, 1988]). Thus if the vanishing point computation generates bad data, the depth-balancing strategy will compensate for it and the distance control system will prevent collisions until the vanishing point is corrected. Finally, the speed of its perception/control loop allows it to rapidly recover from errors. This relaxes the need for perfect perception and allows simpler perceptual and control strategies to be used. Scalability is a major worry for all approaches to AI. We don't know whether an approach will scale until we try to scale it, and so the field largely runs on existence proofs.
Polly is an existence proof that a robust system with a large behavioral repertoire can be built using simple components which are specialized to their task and environment, but it does not show how far we can extend the approach. One of the benefits of making constraints explicit and putting them in correspondence with transformations is that it gives us some degree of leverage in generalizing our designs. Although space precluded a detailed analysis of Polly's systems, we can see from the brief analysis of the low-level navigation system that the role of the background texture constraint was to simplify the figure/ground problem by allowing the substitution of an edge detector. This tells us several useful things. First, any linear filter restricted to the right band will do (see [Horswill, 1993a]). Second, if the environment does not satisfy the BTC, then any other transformation which simplifies figure/ground will also do. We can even use multiple figure/ground systems and switch between them depending on the properties of the environment. Another possibility is to implement both the general system and a specialized system and switch at the behavioral level. This effectively moves the optimization from compile-time to run-time and makes the specialized system a sort of a hardware accelerator on a par with a cache memory. Thus specialized systems need not simply be hacks. We can learn things from the design of one specialized system which are applicable to the designs of other systems, even traditional systems.

References

[Agre and Chapman, 1987] Philip E. Agre and David Chapman. Pengi: An implementation of a theory of activity. In Proceedings of the Sixth National Conference on Artificial Intelligence, pages 268-272, 1987.

[Agre, 1988] Philip E. Agre. The dynamic structure of everyday life. Technical Report 1085, October 1988.

[Aloimonos, 1990] John Aloimonos. Purposive and qualitative active vision. In DARPA Image Understanding Workshop, 1990.
[Ballard, 1991] Dana H. Ballard. Animate vision. Artificial Intelligence, 48(1):57-86, 1991.

[Bellutta et al., 1989] P. Bellutta, G. Collini, A. Verri, and V. Torre. Navigation by tracking vanishing points. In AAAI Spring Symposium on Robot Navigation, pages 6-10, Stanford University, March 1989. AAAI.

[Blake and Yuille, 1992] Andrew Blake and Alan Yuille, editors. Active Vision. MIT Press, Cambridge, MA, 1992.

[Brooks, 1986] Rodney A. Brooks. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 2(1):14-23, March 1986.

[Crisman, 1992] Jill D. Crisman. Color Region Tracking for Vehicle Guidance, chapter 7. In Blake and Yuille [1992], 1992.

[Horswill and Brooks, 1988] Ian Horswill and Rodney Brooks. Situated vision in a dynamic environment: Chasing objects. In Proceedings of the Seventh National Conference on Artificial Intelligence, August 1988.

[Horswill, 1988] Ian D. Horswill. Reactive navigation for mobile robots. Master's thesis, Massachusetts Institute of Technology, June 1988.

[Horswill, 1993a] Ian Horswill. Analysis of adaptation and environment. In submission, 1993.

[Horswill, 1993b] Ian Horswill. Specialization of perceptual processes. PhD thesis, Massachusetts Institute of Technology, Cambridge, 1993. Forthcoming.

[Ikeuchi and Hebert, 1990] Katsushi Ikeuchi and Martial Hebert. Task oriented vision. In DARPA Image Understanding Workshop, 1990.

[Kosaka and Kak, 1992] A. Kosaka and A. C. Kak. Fast vision-guided mobile robot navigation using model-based reasoning and prediction of uncertainties. Computer Vision, Graphics, and Image Processing, 56(3), September 1992.

[Kriegman et al., 1987] David J. Kriegman, Ernst Triendl, and Thomas O. Binford. A mobile robot: Sensing, planning and locomotion. In 1987 IEEE International Conference on Robotics and Automation, pages 402-408. IEEE, March 1987.

[Lynch, 1960] Kevin Lynch. The Image of the City. MIT Press, 1960.

[Passini, 1984] Romedi Passini.
Wayfinding in Architecture, volume 4 of Environmental Design Series. Van Nostrand Reinhold, New York, 1984.

[Rosenschein and Kaelbling, 1986] Stanley J. Rosenschein and Leslie Pack Kaelbling. The synthesis of machines with provable epistemic properties. In Joseph Halpern, editor, Proc. Conf. on Theoretical Aspects of Reasoning about Knowledge, pages 83-98. Morgan Kaufmann, 1986.

[Turk et al., 1987] Matthew A. Turk, David G. Morgenthaler, Keith Gremban, and Martin Marra. Video road following for the autonomous land vehicle. In 1987 IEEE International Conference on Robotics and Automation, pages 273-280. IEEE, March 1987.
ation of the Tire

D. Richard Hipp & Ronnie W. Smith
Department of Computer Science
Duke University
Durham, NC 27706

Abstract

The "Circuit Fix-it Shoppe" is a voice interactive dialog system which has been constructed in our laboratory. The mission of the system is to help people repair electronic circuits. The system contains a domain modeler, a reasoning system, a dialog controller, a user modeling system, an error-correcting natural language parser, and a natural language generator. A commercial speech recognizer and speech synthesizer are used for voice input and output. More detailed information about our dialog system can be found in [1] and [2]. This videotape records two live dialogs between the Circuit Fix-it Shoppe program and a user who has no special knowledge of computers, electronic repair, or our system. A brief description of the experimental setup and of the Circuit Fix-it Shoppe program precedes these dialogs. The Circuit Fix-it Shoppe program is capable of varying its level of initiative. It can be highly directive, in which case it controls the conversation, or it may be passive, in which case the user controls the dialog, or it may take some level of initiative between these two extremes. In the first videotape demonstration, the system is running in directive mode. In this second demonstration, the system is set to operate in declarative mode. In this mode, the user is free to take the initiative and to control the conversation. Declarative mode is appropriate for users who are much more familiar with the circuit and require only minimal help from the computer. Duration: 11 minutes 50 seconds. Tape format: VHS.

References

[1] R. W. Smith, D. R. Hipp and A. W. Biermann, "A Dialog Control Algorithm and Its Performance," Proceedings of the Third Conference on Applied Natural Language Processing, Trento, Italy, April 1-3, 1992.

[2] R. W. Smith and D. R.
Hipp, "Using Expectation to Enable Spoken Variable Initiative Dialog," Proceedings of the 1992 Symposium on Applied Computing, Kansas City, Missouri, March 1-3, 1992.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.
Instructo-Soar: Learning from natural language instructions

Scott B. Huffman and John E. Laird
Artificial Intelligence Laboratory
The University of Michigan
Ann Arbor, Michigan 48109-2110
huffman@umich.edu

*This research was sponsored by NASA/ONR under contract NCC 2-517.

Figure 1: (a) An agent in an initial situation; (b) Instructions to teach a new procedure: "Move the red block left of the green block. Move to the yellow table. Move the arm left of the green block. Move the arm up. Move down. Open the hand. Move up. The operator is finished."

Despite its ubiquity in human learning, very little work has been done in artificial intelligence on learning from natural language instructions. In this video, we present a system, Instructo-Soar, that can both behave and learn from natural language instructions. The system is described in papers elsewhere [Huffman and Laird, 1993a; Huffman and Laird, 1993b]. The type of instruction we particularly address is situated, interactive instruction. Situated means that the student is within the task domain, attempting to perform tasks, when instruction is given. Interactive means that the student can request instruction as needed. Instructo-Soar can learn completely new procedures from sequences of interactive instruction, and can also learn how to extend its knowledge of previously known procedures to new situations. The video demonstrates its application in a simple robotic domain. The system starts with a small set of primitive operators. Given instructions in the form of imperative natural language sentences, it is able to learn a hierarchy of complex operators. An example instruction scenario is shown in Figure 1. Learning procedures from instructions involves more than simply memorization of instruction sequences. Acquiring a new procedure involves learning both the procedure's goal concept, and a general implementation for the procedure. The instructed agent can learn the goal concept of a
new procedure after performing it (an inductive learning task). Instructo-Soar uses a simple difference-of-states heuristic to induce goal concepts; everything that has changed from the initial state to the final state during execution of the new procedure is considered part of the goal of the procedure. Recent versions of the system allow the instructor to give instructional feedback to alter the induced goal as needed. To learn a general implementation for the procedure, the applicability conditions of each instruction in the implementation sequence must be determined. Instructo-Soar uses an explanation-based approach for this: the agent attempts to explain to itself (via an internal forward simulation) how each instruction leads to achievement of the goal. This explanation process indicates which features of the situation and instruction are crucial for goal achievement. Instructo-Soar exhibits a multiple execution learning process to learn a new procedure. Initial learning is rote and episodic in nature. After executing the new procedure the first time, the system can induce the goal concept of the procedure. During future executions, the system recalls the instructions it learned by rote initially, and explains how they contribute to reaching the procedure's goal, resulting in general learning. The learning curve that results closely matches the power law of practice. This work represents first steps towards our long-term goal of building general, instructable autonomous agents.

References

[Huffman and Laird, 1993a] Scott B. Huffman and John E. Laird. Learning from instruction: A knowledge-level capability within a unified theory of cognition. In Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society, 1993.

[Huffman and Laird, 1993b] Scott B. Huffman and John E. Laird. Learning procedures from interactive natural language instructions. In P. E.
Utgoff, editor, Machine Learning: Proceedings of the Tenth International Conference, June 1993.
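Instructo-Soar's difference-of-states heuristic, described in the abstract above, amounts to a set difference over ground facts. The fact-set representation and function name below are assumptions for illustration, not the system's actual encoding.

```python
def induce_goal(initial_state, final_state):
    """Difference-of-states heuristic: everything that changed between the
    initial and final states of the demonstrated procedure is taken as its
    goal concept.  States are modelled here as sets of ground-fact tuples."""
    return {
        "achieve": final_state - initial_state,   # facts made true
        "retract": initial_state - final_state,   # facts made false
    }
```

Anything unchanged (e.g., a fact true in both states) is excluded from the induced goal, which is exactly why instructor feedback is needed when the heuristic over- or under-generalizes.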
ot eti

David Kortenkamp, Marcus Huber, Charles Cohen, Ulrich Raschke, Clint Bidlack, Clare Bates Congdon, Frank Koss, and Terry Weymouth
Artificial Intelligence Laboratory
The University of Michigan
Ann Arbor, MI 48109
korten@engin.umich.edu

Abstract

Last summer, AAAI sponsored a mobile robot competition in conjunction with the AAAI-92 conference in San Jose, California. Ten robots from across the country competed in the competition, with CARMEL from the University of Michigan finishing first. CARMEL is a Cybermotion K2A mobile platform with a ring of 24 sonar sensors and a single black-and-white CCD camera. For computing, CARMEL has three processors: one for motor control, one for sonar ring firing, and one executing high-level routines such as obstacle avoidance and object recognition. All computation and power is contained entirely on-board. The competition consisted of three stages, all taking place in a 22m by 22m arena. The first stage involved roaming the arena while avoiding obstacles (cardboard boxes) and wandering judges. The second stage involved searching for 10 distinctive objects and then visiting each of the objects. Visiting was defined as moving to within two robot diameters of the object. The robots had 20 minutes to perform this task. The third stage was a timed race to three of the objects found in stage 2 and then back home. The arena boundaries were defined by walls, and the arena floor was strewn with obstacles. Objects were ten-foot-tall, three-inch-diameter poles. Teams could attach their own tags to the poles to allow their sensors to detect them. The objects could be seen above the obstacles, while the clearance between obstacles was a minimum of 1.5m.
Obstacle avoidance on CARMEL is done solely with its sonar sensors and has two components: (a) a unique method for detecting and rejecting noise and crosstalk with ultrasonic sensors, called error eliminating rapid ultrasonic firing (EERUF) [3]; and (b) an obstacle avoidance method called the vector field histogram (VFH) [1,2]. The VFH method uses a two-dimensional Cartesian grid, called the Histogram Grid, to represent data from ultrasonic (or other) range sensors. Each cell in the Histogram Grid holds a certainty value that represents the confidence of the algorithm in the existence of an obstacle at that location. This representation was derived from the certainty grid concept that was originally developed by Moravec and Elfes in [5]. Based on data in the Histogram Grid, the VFH method creates an intermediate data representation called the Polar Histogram. The spatial representation in the Polar Histogram can be visualized as a mountainous panorama around the robot, where the height and size of the peaks represent the proximity of obstacles, and the valleys represent possible travel directions. The VFH algorithm steers the robot in the direction of one of the valleys, based on the direction of the target location. Using VFH, CARMEL avoided obstacles while moving at speeds of up to 780 mm/sec. Object recognition was facilitated by tagging each pole with an omni-directional barcode. The object tag design used for CARMEL consists of a black-and-white stripe pattern placed upon PVC tubing with a four-inch diameter, allowing the tags to be slipped over the object poles. The vision algorithm for extracting objects from an image required no preprocessing of the image. The algorithm makes a single pass over the image, going down each column of the image looking for a white-to-black transition that would mark the start of a potential object. A finite state machine keeps track of the number and spacing of the bands.
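The column scan just described can be sketched as a small state machine. This is a hedged reconstruction, not CARMEL's code: the pixel encoding (1 = white, 0 = black) and the function name are assumptions, and the real system additionally checks band spacing against known tag patterns.

```python
def scan_column(column):
    """One pass down an image column (1 = white pixel, 0 = black).

    A small state machine looks for white-to-black transitions and records
    each black band as (start_row, length); the real system would go on to
    count the bands and compare their spacing against the tag id patterns.
    """
    bands, state, start = [], "white", None
    for row, pixel in enumerate(column):
        if state == "white" and pixel == 0:
            state, start = "black", row          # band begins
        elif state == "black" and pixel == 1:
            bands.append((start, row - start))   # band ends
            state = "white"
    if state == "black":                         # band runs to column bottom
        bands.append((start, len(column) - start))
    return bands
```

Because each column is processed independently in one pass, no image preprocessing is needed, which is what makes the 19-meter-range recognition cheap enough to run on-board.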
After finding enough bands to comprise a tag, the algorithm stores the tag id and pixel length. Once a column is complete, the eligible objects are heuristically merged with objects found in previous columns. The algorithm has an effective range of about 19 meters. CARMEL successfully integrated high-speed obstacle avoidance with long-range vision to win the AAAI Robot Competition. CARMEL placed third in stage 1 and first in stages 2 and 3. In the second stage, CARMEL found and visited all ten objects in under ten minutes; no other robot could find and visit all ten objects in under the allotted 20 minutes for stage 2. In stage 3, CARMEL finished first by visiting the three objects and returning to the start position in just under three minutes. For more details on CARMEL see [4].

References

[1] Johann Borenstein and Yoram Koren. Histogramic in-motion mapping for mobile robot obstacle avoidance. IEEE Transactions on Robotics and Automation, 7(4), 1991.

[2] Johann Borenstein and Yoram Koren. The Vector Field Histogram for fast obstacle avoidance for mobile robots. IEEE Transactions on Robotics and Automation, 7(3), 1991, pp. 278-288.

[3] Johann Borenstein and Yoram Koren. Noise rejection for ultrasonic sensors in mobile robot applications. In Proceedings of the IEEE Conference on Robotics and Automation, 1992, pp. 1727-1732.

[4] David Kortenkamp, Marcus Huber, Charles Cohen, Ulrich Raschke, Clint Bidlack, Clare Bates Congdon, Frank Koss, and Terry Weymouth. Winning the AAAI Robot Competition: A case study in integrated mobile robot design. To appear in IEEE Expert, 1993.

[5] Hans P. Moravec and Alberto Elfes. High resolution sonar maps from wide angle sonar. In Proceedings of the IEEE Conference on Robotics and Automation, 1985, pp. 19-24.
Cryptographic Limitations on Learning One-Clause Logic Programs

William W. Cohen
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974
wcohen@research.att.com

Abstract

An active area of research in machine learning is learning logic programs from examples. This paper investigates formally the problem of learning a single Horn clause: we focus on generalizations of the language of constant-depth determinate clauses, which is used by several practical learning systems. We show first that determinate clauses of logarithmic depth are not learnable. Next we show that learning indeterminate clauses with at most k indeterminate variables is equivalent to learning DNF. Finally, we show that recursive constant-depth determinate clauses are not learnable. Our primary technical tool is the method of prediction-preserving reducibilities introduced by Pitt and Warmuth [1990]; as a consequence our results are independent of the representations used by the learning system.

Introduction

Recently, there has been an increasing amount of research in learning restricted logic programs, or inductive logic programming (ILP) [Cohen, 1992; Muggleton and Feng, 1992; Quinlan, 1990; Muggleton, 1992a]. One advantage of using logic programs rather than alternative first-order logic formalisms (e.g., [Cohen and Hirsh, 1992]) is that their semantics and computational complexity have been well-studied; this offers some hope that learning systems based on them can also be mathematically well-understood. Some formal results have in fact been obtained; the strongest positive result in the pac-learning model [Valiant, 1984] shows that a single constant-depth determinate clause is pac-learnable, and that a nonrecursive logic program containing k such clauses is learnable against any "simple" distribution [Džeroski et al., 1992].
Some very recent work [Kietz, 1993] shows that a single clause is not pac-learnable if the constant-depth determinacy condition does not hold; specifically, it is shown that neither the language of indeterminate clauses of fixed depth nor the language of determinate clauses of arbitrary depth is pac-learnable. These negative results are of limited practical importance because they assume that the learner is required to output a single clause that covers all of the examples; however, most practical ILP learning systems do not impose this constraint. Such negative learnability results are sometimes called representation-dependent.1

This paper presents representation-independent negative results for three languages of Horn clauses, all obtained by generalizing the language of constant-depth determinate clauses. These negative results are obtained by showing that learning is as hard as breaking a (presumably) secure cryptographic system, and thus are not dependent on assumptions about the representation used by the learning system. Specifically, we will show that determinate clauses of log depth are not learnable, and that recursive clauses of constant depth are not learnable. We will also show that indeterminate clauses with k "free" variables are exactly as hard to learn as DNF. Due to space constraints, detailed proofs will not be given; the interested reader is referred to a longer version of the paper [Cohen, 1993a]. We will focus instead on describing the basic intuitions behind the proofs.

Formal preliminaries

Learning models

Our basic notion of learnability is the usual one introduced by Valiant [1984]. Let X be a set, called the domain. Define a concept C over X to be a representation of some subset of X, and a language L to be a set of concepts.
Associated with X and L are two size complexity measures; we use the notation X_n (respectively L_n) to stand for the set of all elements of X (respectively L) of size complexity no greater than n. An example of C is a pair (x, b) where b = 1 if x ∈ C and b = 0 otherwise. If D is a probability distribution function, a sample of C from X drawn according to D is a pair of multisets S+, S− drawn from the domain X according to D, S+ containing only positive examples of C, and S− containing only negative ones.

1The prototypical example of a learning problem which is hard in a representation-dependent setting but not in a broader setting is learning k-term DNF. Pac-learning k-term DNF is NP-hard if the hypotheses of the learning system must be k-term DNF; however it is tractable if hypotheses can be expressed in the richer language of k-CNF.

Finally, a language L is pac-learnable iff there is an algorithm LEARN and a polynomial function m(1/ε, 1/δ, n_e, n_t) so that for every n_t > 0, every n_e > 0, every C ∈ L_{n_t}, every ε: 0 < ε < 1, every δ: 0 < δ < 1, and every probability distribution function D, for any sample S+, S− of C from X_{n_e} drawn according to D containing at least m(1/ε, 1/δ, n_e, n_t) examples

1. LEARN, on inputs S+, S−, ε, and δ, outputs a hypothesis H such that
If Cr is a language over domain X1 and ,C2 is a lan- guage over domain X2, then we say that predicting ,Cl reduces to predicting &, written Icrg&, if there is a function fi : X1 + X2, henceforth called the instance mapping, and a function fc : Li + &, henceforth called the concept mapping, so that the following all hold: 2. LEARN runs in time polynomial in $, $, ne, nt, and the number of examples; and 3. The hypothesis H of the learning systems is in L. The polynomial function m( $, i, ne, nt) is called the sample complexity of the learning algorithm LEARN. With condition 3 above, the definition of pac- learnability makes a relatively strong restriction on the hypotheses the learner can generate; this can lead to some counterintuitive results.2 If a learning algorithm exists that satisfies all the conditions above except con- dition 3, but does output a hypothesis that can be evaluated in polynomial time, we will say that t is (polynomial/y) predictable.3 These learning models have been well-studied, and are quite appropriate to modeling standard inductive learners. However, the typical ILP system is used in a somewhat more complex setting, as the user will typi- cally provide both a set of examples and a background theory K: the task of the learner is then to find a logic program P such that P, together with K, is a good model of the data. To deal with this wrinkle, we will re- quire some additional definitions. If L is some language of logic programs4 and I< is a logic program, then C[K] denotes the set of all pairs of the form (P, K) such that P E C: each such pair represents the set of all atoms e such that P A K I- e. If K: is a set of background theo- ries, then the family of languages C[K] represents the set of all languages QK] where K E /c. We will con- sider C[K] to be predictable (pat-learnable) only when every C[K] E ,!J[Ic] is predictable (pat-learnable.) 
This requires one slight modification to the definitions of predictability pat-learnability: we must now assume a 21n particular, it may be that a language is hard to learn even though an accurate hypothesis can be found, if it is hard to encode this hypothesis in the language C. 3Such learning algorithms are also sometimes called ap- proxhation cdgorithms, as in the general case an approxi- mation to the target concept may be produced. *We assume that the reader is familiar with the basic elements of logic programming; see Lloyd [1987] for the necessary background. 1. z E C if and only if fi(x) E fc(C) - i.e., concept membership is preserved by the mappings; 2. the size complexity of fe(C) is polynomial in the size complexity of C - i.e. the size of concepts is preserved within a polynomial factor; and 3. fi(z) can be computed in polynomial time. Intuitively, fe(Cr) returns a concept C2 E & that will “emulate” Cr (i.e., make the same decisions about concept membership) on examples that have been “preprocessed” with the function fi. If predicting Lr reduces to predicting & and a learning algorithm for ,& exists, then one possible scheme for learning a con- cept Cr E Cr would be to preprocess all examples of Cr with fi, and then use these preprocessed exam- ples to learn some H that is a good approximation of c2 = fe(Cr). H can then be used to predict mem- bership in Ci: given an example x from the original domain Xr , one can simply predict a: E Cr to be true whenever fi(x) E H. Pitt and Warmuth [1990] give a rigorous argument that this approach leads to a predic- tion algorithm for -Cl, leading to the following theorem. Theorem 1 (Pitt and rmuth) If icl9C2 and L2 is polynomially predictable, then -Cl is podynomi- ally predictable. Conversely, if Lc,a La and Cl is not podynomially predictable, then C2 is not polynomially predictable. 
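To make the instance-mapping/concept-mapping machinery concrete, here is a toy reduction (an illustrative example, not one from the paper): conjunctions over {0,1}^n that may use negated literals reduce to monotone conjunctions over {0,1}^(2n), with fi pairing every bit with its complement and fc rewriting each literal accordingly. A minimal Python sketch:

```python
# Toy prediction-preserving reduction (illustrative; not from the paper):
# L1 = conjunctions over {0,1}^n with negated literals,
# L2 = monotone conjunctions over {0,1}^(2n).
# A concept in L1 is a set of (index, positive?) literals.

def fi(x):
    """Instance mapping: append the complement of every bit."""
    out = []
    for b in x:
        out.extend([b, 1 - b])
    return tuple(out)

def fc(concept):
    """Concept mapping: literal v_i -> position 2i, literal ~v_i -> 2i+1."""
    return frozenset(2 * i if pos else 2 * i + 1 for (i, pos) in concept)

def member_c1(concept, x):
    """Membership in the original language L1."""
    return all(x[i] == (1 if pos else 0) for (i, pos) in concept)

def member_c2(mono, y):
    """Membership for monotone conjunctions (language L2)."""
    return all(y[j] == 1 for j in mono)

def predict(x, hypothesis):
    """Predict x in C1 by testing the preprocessed fi(x) against a
    hypothesis learned for the emulating concept fc(C1)."""
    return member_c2(hypothesis, fi(x))
```

Condition 1 of the definition holds because x satisfies a literal exactly when fi(x) sets the corresponding doubled position to 1; conditions 2 and 3 are immediate here.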
Restrictions on Logic Programs

In this paper, logic programs will always be function-free and (unless otherwise indicated) nonrecursive. A background theory K will always be a set of ground unit clauses (aka relations, a set of atomic facts, or a model) with arity bounded by the constant a; the symbol a-𝒦 (𝒦 if a is an arbitrary constant) will denote the set of such background theories. The size complexity of a background theory K is its cardinality, and usually will be denoted by nb. Examples will be represented by a single atom of arity ne or less; thus we allow the head of a Horn clause to have a large arity, although literals in the body have constant arity.⁵

Complexity in Machine Learning 81

Muggleton and Feng [1992] have introduced several additional useful restrictions on Horn clauses. If A ← B1 ∧ ... ∧ Bn is an (ordered) Horn clause, then the input variables of the literal Bi are those variables appearing in Bi which also appear in the clause A ← B1 ∧ ... ∧ B_{i-1}; all other variables appearing in Bi are called output variables. A literal Bi is determinate (with respect to K and X) if for every possible substitution σ that unifies A with some e ∈ X such that K ⊢ (B1 ∧ ... ∧ B_{i-1})σ there is at most one substitution θ so that K ⊢ Biσθ. Less formally, a literal is determinate if its output variables have only one possible binding, given K and the binding of the input variables. A clause is determinate if all of its literals are determinate.

Next, define the depth of a variable appearing in a clause A ← B1 ∧ ... ∧ Bn as follows. Variables appearing in the head of a clause have depth zero. Otherwise, let Bi be the first literal containing the variable V, and let d be the maximal depth of the input variables of Bi; then the depth of V is d + 1. The depth of a clause is the maximal depth of any variable in the clause.

A determinate clause of depth bounded by a constant i over a background theory K ∈ j-𝒦 is called ij-determinate.
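The depth definition above is easy to operationalize. In the sketch below a clause is encoded as its head variables plus an ordered list of (predicate, variable-list) body literals; this encoding is invented here for illustration:

```python
# Depth of variables in an ordered Horn clause A <- B1 ∧ ... ∧ Bn.
# Head variables have depth 0; an output variable of Bi gets depth d+1,
# where d is the maximal depth of Bi's input variables.

def variable_depths(head_vars, body):
    """body: ordered list of (predicate, variables) literals."""
    depth = {v: 0 for v in head_vars}
    for _pred, vars_ in body:
        inputs = [v for v in vars_ if v in depth]       # appeared earlier
        outputs = [v for v in vars_ if v not in depth]  # new in this literal
        d = max((depth[v] for v in inputs), default=0)
        for v in outputs:
            depth[v] = d + 1
    return depth

def clause_depth(head_vars, body):
    return max(variable_depths(head_vars, body).values(), default=0)

# The multiply clause discussed in the text:
multiply_clause = (['X', 'Y', 'Z'],
                   [('decrement', ['Y', 'W']),
                    ('multiply', ['X', 'W', 'V']),
                    ('plus', ['X', 'V', 'Z'])])
```

On the multiply clause this assigns W depth one and V depth two, matching the text.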
The learning program GOLEM, which has been applied to a number of practical problems [Muggleton and Feng, 1992; Muggleton, 1992b], learns ij-determinate programs. Closely related restrictions also have been adopted by several other inductive logic programming systems, including FOIL [Quinlan, 1991], LINUS [Lavrač and Džeroski, 1992], and GRENDEL [Cohen, 1993c]. As an example, in the determinate clause

multiply(X,Y,Z) ← decrement(Y,W) ∧ multiply(X,W,V) ∧ plus(X,V,Z).

W has depth one and V has depth two; thus the clause is 2,3-determinate.

Learning log-depth determinate clauses

We will first consider generalizing the definition of ij-determinacy by relaxing the restriction that clauses have constant depth. The key result of this section is an observation about the expressive power of determinate clauses: there is a background theory K such that every depth d boolean circuit can be emulated by a depth d determinate clause over K. The background theory K contains these facts:

and(0,0,0)  and(0,1,0)  and(1,0,0)  and(1,1,1)
or(0,0,0)   or(0,1,1)   or(1,0,1)   or(1,1,1)
not(0,1)    not(1,0)    true(1)

The emulation is the obvious one, illustrated by example in Figure 1:

circuit(X1,X2,X3,X4,X5) ←
  not(X1,Y1) ∧ and(X2,X3,Y2) ∧ or(X4,X5,Y3) ∧
  or(Y1,Y2,Y4) ∧ or(Y2,Y3,Y5) ∧ and(Y4,Y5,Output) ∧
  true(Output)

Figure 1: Reducing a circuit to a clause

Notice an example from the circuit domain is a binary vector b1...bn encoding an assignment to the n boolean inputs to the circuit, and hence must be preprocessed to the atom circuit(b1, ..., bn). The learnability of circuits has been well-studied.

⁵It should be noted that the parameters ne, nb, and nt (the complexity of the target concept) all measure different aspects of the complexity of a learning problem; as the requirement on the learner is that it is polynomial in all of these parameters, the reader can simply view them as different names for a single complexity parameter n = max(ne, nb, nt).
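Because every literal in the emulating clause is determinate, evaluating it is just a chain of unique-output lookups. A sketch of the Figure 1 clause with the gate facts of the background theory stored as Python dictionaries:

```python
# Gate facts from the background theory K, as lookup tables:
AND = {(a, b): a & b for a in (0, 1) for b in (0, 1)}
OR  = {(a, b): a | b for a in (0, 1) for b in (0, 1)}
NOT = {0: 1, 1: 0}

def circuit(x1, x2, x3, x4, x5):
    """Evaluates the Figure 1 clause; each line is one determinate
    literal, binding its single output variable."""
    y1 = NOT[x1]
    y2 = AND[x2, x3]
    y3 = OR[x4, x5]
    y4 = OR[y1, y2]
    y5 = OR[y2, y3]
    output = AND[y4, y5]
    return output == 1          # the final true(Output) literal
```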
In particular, the language of log-depth circuits (also familiar as the complexity class NC¹) is known to be hard to learn, under cryptographic assumptions [Kearns and Valiant, 1989]. Thus we have the following theorem; here L_ij-DET denotes the language of logic programs containing a single non-recursive ij-determinate clause.

Theorem 2 There is a small background theory K ∈ 3-𝒦 such that NC¹ ⊴ L_(log n),3-DET[K]. Thus, for j ≥ 3, the family of languages L_(log n),j-DET[j-𝒦] is not polynomially predictable, and hence not pac-learnable, under cryptographic assumptions.

It has recently been shown that NC¹ is not predictable even against the uniform distribution [Kharitonov, 1992], suggesting that distributional restrictions will not make log-depth determinate clauses predictable.

Learning indeterminate clauses

We will now consider relaxing the definition of ij-determinacy by allowing indeterminacy. Let the free variables of a Horn clause be those variables that appear in the body of the clause but not in the head; we will consider the learnability of the language L_k-FREE, defined as all nonrecursive clauses with at most k free variables. Clauses in L_k-FREE are necessarily of depth at most k; also restricting the number of free variables is required to ensure that clauses can be evaluated in polynomial time.

Notice that L_1-FREE is the most restricted language possible that contains indeterminate clauses. We begin with an observation about the expressive power of

82 Cohen

Background theory:
  for i = 1, ..., r:
    true_i(b, y)  for all b, y : b = 1, or y ∈ {1, ..., r} but y ≠ i
    false_i(b, y) for all b, y : b = 0, or y ∈ {1, ..., r} but y ≠ i

DNF formula: (v1 ∧ ¬v2 ∧ v4) ∨ (¬v1 ∧ ¬v3) ∨ (v1 ∧ ¬v4)

Equivalent clause:
  dnf(X1,X2,X3,X4) ← true1(X1,Y) ∧ false1(X2,Y) ∧ true1(X4,Y) ∧
                     false2(X1,Y) ∧ false2(X3,Y) ∧
                     true3(X1,Y) ∧ false3(X4,Y).
Figure 2: Reducing a DNF formula to a clause

this language: for every r, there is a background theory Kr such that every DNF formula with r or fewer terms can be emulated by a clause in L_1-FREE[Kr]. The emulation is a bit more indirect than the emulation for circuits, but the intuition is simple: it is based on the observation that a clause p(X) ← q(X,Y) classifies an example p(a) as true exactly when K ⊢ q(a,b1) ∨ ... ∨ K ⊢ q(a,br), where b1, ..., br are the possible bindings of the (indeterminate) variable Y; thus indeterminate variables allow some "disjunctive" concepts to be expressed by a single clause.

Specifically, Kr will contain sufficient atomic facts to define the binary predicates true1, false1, ..., true_r, false_r which behave as follows:

• true_i(X, Y) succeeds if X = 1, or if Y ∈ {1, ..., i-1, i+1, ..., r}.
• false_i(X, Y) succeeds if X = 0, or if Y ∈ {1, ..., i-1, i+1, ..., r}.

The way in which a formula is emulated is illustrated in Figure 2. The free variable in the emulating clause is Y, and the r possible bindings of Y correspond to the r terms of the emulated DNF formula. Assume without loss of generality that the DNF has exactly r terms⁶ and let the i-th term of the DNF be Ti = ∧_{j=1}^{mi} lij; this will be emulated by a conjunction of literals Ci = ∧_{j=1}^{mi} Litij designed so that Ci will succeed exactly when Ti succeeds and Y is bound to i, or when Y is bound to some constant other than i. This can be accomplished by defining

  Litij ≡ true_i(Xk, Y)  if lij = vk
          false_i(Xk, Y) if lij = ¬vk

Now, assume that Xi is bound to the value of the i-th boolean variable vi that is used in the DNF formula; then the conjunction ∧_{i=1}^{r} Ci will succeed when Y is bound to 1 and T1 succeeds, or when Y is bound to 2 and T2 succeeds, or ... or Y is bound to r and Tr succeeds. Hence, if we assume that each example b1...bn is preprocessed to the atom dnf(b1, ..., bn), the clause

⁶If necessary, null terms v1 ∧ ¬v1 can be added to make the DNF r terms long.
dnf(X1, ..., Xn) ← ∧_{i=1}^{r} Ci (in which Y can be bound to any of the r values) will correctly emulate the original formula. As a consequence, we have the following theorem:

Theorem 3 Let DNF_n denote the language of DNF expressions of complexity n or less; for all n there is a polynomial sized background theory Kn such that DNF_n ⊴ L_1-FREE[Kn]. Thus, for all constant k, the family of languages L_k-FREE[𝒦] is predictable only if DNF is predictable.

An important question is whether there are languages in L_k-FREE that are harder to learn than DNF. The answer to this question is no: for every k and every background theory K ∈ a-𝒦, every clause in L_k-FREE[K] can be emulated by a DNF formula. Let Cl = A ← B_i1 ∧ ... ∧ B_im be a clause in L_k-FREE[K]. As we assume clauses are nonrecursive, Cl covers an example e iff

  ∃σ : K ⊢ (B_i1 ∧ ... ∧ B_im)θeσ    (1)

where θe is the most general unifier of A and e. However, since the background theory K is of size nb, and all predicates are of arity a or less, there are at most a·nb constants in K, and hence only (a·nb)^k possible substitutions σ1, ..., σ_(a·nb)^k to the k free variables. Also (as we assume clauses are function-free) if K defines l different predicates, there are at most l·(ne + k)^a ≤ nb·(ne + k)^a possible literals B1, ..., B_nb·(ne+k)^a that can appear in the body of a L_k-FREE clause. So, let us introduce the boolean variables vij where i ranges from one to nb·(ne + k)^a and j ranges from one to (a·nb)^k. We will preprocess an example e by constructing an assignment ηe to these variables: vij will be true in ηe if and only if K ⊢ Bi σj θe. This means that Equation 1 is true exactly when the DNF formula

  ∨_{j=1}^{(a·nb)^k} ∧_i vij

is true, where the conjunction ranges over the indices i of the literals appearing in the body of Cl; hence the clause Cl can be emulated by DNF over the vij's. We can thus strengthen Theorem 3 as follows:

Theorem 4 The family of languages L_k-FREE[𝒦] is predictable if and only if DNF is predictable.
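The one-free-variable DNF emulation can be checked by brute force: a body with free variable Y succeeds iff some binding of Y in {1, ..., r} satisfies every literal, and by construction that happens exactly when the term named by Y is true. A sketch in which the true_i/false_i predicates are written directly as Python tests rather than as ground facts:

```python
def true_i(i, b, y, r):
    """truei(X,Y): succeeds if X = 1, or Y in {1,...,r} with Y != i."""
    return b == 1 or (1 <= y <= r and y != i)

def false_i(i, b, y, r):
    """falsei(X,Y): succeeds if X = 0, or Y in {1,...,r} with Y != i."""
    return b == 0 or (1 <= y <= r and y != i)

def clause_covers(terms, x):
    """terms: term i is a list of (var_index, positive?) literals.
    The clause covers x iff some binding of the free variable Y
    satisfies the whole body, i.e. iff some DNF term is true on x."""
    r = len(terms)
    for y in range(1, r + 1):
        ok = True
        for i, term in enumerate(terms, start=1):
            for (k, positive) in term:
                lit = true_i if positive else false_i
                if not lit(i, x[k], y, r):
                    ok = False
        if ok:
            return True
    return False
```

For any i ≠ y the literals of Ci hold trivially, so with Y bound to y the body reduces to testing term y, which is exactly the disjunctive reading described in the text.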
This does not actually settle the question of whether indeterminate clauses are predictable, but does show that answering the question will require substantial advances in computational learning theory; the learnability and predictability of DNF has been an open problem in computational learning theory for several years.

Learning recursive clauses

Finally, we will consider learning a single recursive ij-determinate clause. A realistic analysis of recursive clauses requires a slight extension of our formalism: rather than representing an example as a single atom of arity ne, we will (in this section of the paper only) represent an example as a ground goal atom e, plus a set of up to ne ground description atoms D = {d1, ..., d_ne} of arity bounded by a. A program P now classifies an example (e, D) as true iff P ∧ K ∧ D ⊢ e. This also allows structured objects like lists to be used in examples; for example, an example of the predicate member(X,Ys) might be represented as the goal atom e = member(b, list-ab) together with the description

D = { head(list-ab,a), tail(list-ab,list-b), head(list-b,b), tail(list-b,nil) }

This formalism follows the actual use of learning systems like FOIL. Finally, in order to talk sensibly about one-clause recursive programs, we will also assume that the non-recursive "base case" of the target program is part of D or K.

Again, the key result of this section is an observation about expressive power: there is a background theory Kn such that every log-space deterministic (DLOG) Turing machine M can be emulated, on inputs of size n or less, by a single recursive ij-determinate clause. Since DLOG Turing machines are cryptographically hard to predict, this will lead to a negative predictability result for recursive clauses.

Before describing the emulation, we will begin with some basic facts about DLOG machines. First, we can assume, without loss of generality, that the tape alphabet is {0, 1}.
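The goal-atom-plus-description representation above can be exercised directly by proving the member goal against the description atoms. The two-clause member definition below is the standard one, written here only to stand in for a learned program:

```python
# Description atoms for the example member(b, list-ab):
D = {("head", "list-ab", "a"), ("tail", "list-ab", "list-b"),
     ("head", "list-b", "b"), ("tail", "list-b", "nil")}

def member(x, ys, facts):
    """Proves member(x, ys) against the ground description atoms, using
    the usual definition:
        member(X,Ys) <- head(Ys,X).
        member(X,Ys) <- tail(Ys,Zs) ∧ member(X,Zs)."""
    if ("head", ys, x) in facts:
        return True
    for (pred, a, b) in facts:
        if pred == "tail" and a == ys and b != "nil" and member(x, b, facts):
            return True
    return False
```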
Then a DLOG machine M accepting only inputs of size n can be encoded by a series of transitions of the following form:

  if xi = b and CONFIG = cj then let CONFIG := cj'

where xi denotes the value of the i-th square of the input tape, b ∈ {0, 1}, and cj and cj' are constants from a polynomial-sized alphabet CON = {c1, ..., c_p(n)} of constant symbols denoting internal configurations of M.⁷ One can also assume without loss of generality that there is a unique starting configuration c0, a unique accepting configuration c_acc, and a unique "failing" configuration c_fail, and that there is exactly one transition of the form given above for every combination of i : 1 ≤ i ≤ n, b ∈ {0,1}, and cj ∈ CON - {c_acc, c_fail}. On input x = x1...xn the machine M starts with CONFIG = c0, then executes transitions until it reaches CONFIG = c_acc or CONFIG = c_fail, at which point x is accepted or rejected (respectively).

To emulate M, we will preprocess an example x = b1...bn into the goal atom e and the description D defined below:

  e ≡ tm(news, c0)
  D ≡ { bit_i(news, b_i) : 1 ≤ i ≤ n } ∪ { tm(news, c_acc) }

Now we will define the following predicates for the background theory Kn. First, for every possible b ∈ {0,1} and j : 1 ≤ j ≤ p(n), the predicate stat_b,j(B, C, Y) will be defined so that given bindings for variables B and C, stat_b,j(B, C, Y) will fail if C = c_fail; otherwise it will succeed, binding Y to active if B = b and C = cj, and binding Y to inactive otherwise. Second, for j : 1 ≤ j ≤ p(n), the predicate next_j(Y, C) will be true if Y = active and C = cj, or if Y = inactive and C = c_acc. It is easy to show that these definitions require only polynomially many facts to be in Kn.

⁷An internal configuration encodes the contents of M's worktape, along with all other properties of its internal state relevant to the computation. Since M's worktape is of length log n it can have only n different contents; thus there are only a polynomial number p(n) of internal configurations for M.
For each combination of i, b, and j, define the conjunction

  TRANS_ibj ≡ bit_i(S, B_ibj) ∧ stat_b,j(B_ibj, C, Y_ibj) ∧ next_j'(Y_ibj, C1_ibj) ∧ tm(S, C1_ibj)

Given Kn and D, and assuming that S is bound to news and C is bound to some configuration c, this conjunction will fail if c = c_fail; otherwise, it will succeed trivially if xi ≠ b or c ≠ cj⁸; finally, if xi = b and c = cj, TRANS_ibj will succeed only if the atom tm(news, cj') is provable.⁹ From this it is clear that the one-clause logic program

  tm(S, C) ← ∧_{i ∈ {1,...,n}, b ∈ {0,1}, j ∈ {1,...,p(n)}} TRANS_ibj

will correctly emulate the machine M. Thus, letting R_ij-DET denote the language of one-clause ij-determinate recursive logic programs, and letting DLOG_n represent the class of problems on inputs of size n computable in deterministic space log n, we have the following theorem.

Theorem 5 For all n there is a background theory Kn such that DLOG_n ⊴ R_ij-DET[Kn]. Thus the family of languages R_ij-DET[𝒦] is not polynomially predictable, and hence not pac-learnable, under cryptographic assumptions.

Although this result is negative, it should be noted that (unlike the case in the previous theorems) the preprocessing step used here distorts the distribution of examples: thus this result does not preclude the possibility of distribution-specific learning algorithms for recursive clauses.

Concluding remarks

This paper presented three negative results on learning one-clause logic programs. These negative results are stronger than earlier results [Kietz, 1993] in a number of ways; most importantly, they are not based on restricting the learner to produce hypotheses in some designated language. Instead, they are obtained by showing that learning is as hard as breaking a secure cryptographic system.

⁸In this case Y_ibj will be bound to inactive and C1_ibj will be bound to c_acc: the recursive call to tm/2 succeeds because tm(news, c_acc) ∈ D.
⁹In this case, Y_ibj will be bound to active and C1_ibj will be bound to cj'.
In particular, we have shown that several extensions to the language of constant-depth determinate clauses lead to hard learning problems. First, allowing depth to grow as the log of the problem size makes learning a single determinate clause as hard as learning a log-depth circuit, which is hard under cryptographic assumptions; this shows that all learning algorithms for determinate clauses will require time worse than exponential in the depth of the target concept. Second, indeterminate clauses with k "free" variables (a restriction of the language of constant-depth indeterminate clauses) are exactly as hard to learn as DNF; the learnability of DNF is a long-standing open problem in computational learning theory. Finally, adding recursion to the language of ij-determinate clauses makes them as hard to learn as a log-space Turing machine; again, this learning problem is cryptographically hard. There are, however, learnable classes of one-clause linearly recursive clauses; a discussion of this is given in a companion paper [Cohen, 1993b].

Acknowledgements

Thanks to Mike Kearns and Rob Schapire for several helpful discussions, and to Susan Cohen for proofreading the paper.

References

(Cohen and Hirsh, 1992) William W. Cohen and Haym Hirsh. Learnability of description logics. In Proceedings of the Fourth Annual Workshop on Computational Learning Theory, Pittsburgh, Pennsylvania, 1992. ACM Press.

(Cohen, 1992) William W. Cohen. Compiling knowledge into an explicit bias. In Proceedings of the Ninth International Conference on Machine Learning, Aberdeen, Scotland, 1992. Morgan Kaufmann.

(Cohen, 1993a) William W. Cohen. Learnability of restricted logic programs. In Proceedings of the Third International Workshop on Inductive Logic Programming, Bled, Slovenia, 1993.

(Cohen, 1993b) William W. Cohen. A pac-learning algorithm for a restricted class of recursive logic programs.
In Proceedings of the Tenth National Conference on Artificial Intelligence, Washington, D.C., 1993.

(Cohen, 1993c) William W. Cohen. Rapid prototyping of ILP systems using explicit bias. In preparation, 1993.

(Kearns and Valiant, 1989) Michael Kearns and Les Valiant. Cryptographic limitations on learning Boolean formulae and finite automata. In 21st Annual Symposium on the Theory of Computing. ACM Press, 1989.

(Kharitonov, 1992) Michael Kharitonov. Cryptographic lower bounds on the learnability of boolean functions on the uniform distribution. In Proceedings of the Fourth Annual Workshop on Computational Learning Theory, Pittsburgh, Pennsylvania, 1992. ACM Press.

(Kietz, 1993) Jörg-Uwe Kietz. Some computational lower bounds for the computational complexity of inductive logic programming. In Proceedings of the 1993 European Conference on Machine Learning, Vienna, Austria, 1993.

(Lavrač and Džeroski, 1992) Nada Lavrač and Sašo Džeroski. Background knowledge and declarative bias in inductive concept learning. In K. P. Jantke, editor, Analogical and Inductive Inference: International Workshop AII'92. Springer Verlag, Dagstuhl Castle, Germany, 1992. Lecture Notes in Artificial Intelligence #642.

(Lloyd, 1987) J. W. Lloyd. Foundations of Logic Programming: Second Edition. Springer-Verlag, 1987.

(Muggleton and Feng, 1992) Steven Muggleton and Cao Feng. Efficient induction of logic programs. In Inductive Logic Programming. Academic Press, 1992.

(Muggleton, 1992a) S. H. Muggleton, editor. Inductive Logic Programming. Academic Press, 1992.

(Muggleton, 1992b) Steven Muggleton. Inductive logic programming. In Inductive Logic Programming. Academic Press, 1992.

(Pitt and Warmuth, 1990) Leonard Pitt and Manfred Warmuth. Prediction-preserving reducibility. Journal of Computer and System Sciences, 41:430-467, 1990.

(Quinlan, 1990) J. Ross Quinlan.
Learning logical definitions from relations. Machine Learning, 5(3), 1990.

(Quinlan, 1991) J. Ross Quinlan. Determinate literals in inductive logic programming. In Proceedings of the Eighth International Workshop on Machine Learning, Ithaca, New York, 1991. Morgan Kaufmann.

(Valiant, 1984) L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11), November 1984.

(Džeroski et al., 1992) Sašo Džeroski, Stephen Muggleton, and Stuart Russell. Pac-learnability of determinate logic programs. In Proceedings of the 1992 Workshop on Computational Learning Theory, Pittsburgh, Pennsylvania, 1992.
| 1993 | 13 |
AIR-SOAR: Intelligent Multi-Level Control

Douglas J. Pearson, Randolph M. Jones, and John E. Laird
Artificial Intelligence Laboratory
The University of Michigan
1101 Beal Ave.
Ann Arbor, Michigan 48109-2122
dpearson@caen.engin.umich.edu

Autonomous systems must be able to deal with dynamic, unpredictable environments in real time. Our video describes a system for intelligent control of an airplane, within a realistic flight simulator (the Silicon Graphics flight simulator). The simulator allows asynchronous control of the plane's throttle, ailerons, elevator and other control surfaces by an external system, and it provides limited asynchronous sensing of the plane's motion. The result is a highly dynamic, real time domain in which models of the plane (and, potentially, other aircraft) are updated 20 times a second. Control of flight is complex. Unexpected events such as wind or turbulence must be responded to in a timely fashion. Further, identical control movements have different effects depending on the plane's position and environmental conditions, making precise prediction of action effects difficult. The agent must also deal with delays in feedback from its actions, waiting for the plane to respond to changes in the control surfaces. The domain requires simultaneous execution of a range of tasks at different levels of complexity and granularity, from high level maneuvers like takeoff, landing and banked turns to low level tasks such as maintaining altitude, keeping the wings level and controlling the stick.

Our autonomous agent for the flight domain is Air-Soar [Pearson et al., 1993]. The agent is built within Soar [Laird et al., 1987], a general problem solving and learning architecture. Soar solves problems by successively applying operators within problem spaces. Air-Soar reasons about flight with five problem spaces, each reasoning at a different level of granularity.
In addition, the system achieves and maintains multiple goals simultaneously, both within and across levels. For example, at the highest level the system may be both climbing and turning to a new heading. Across levels, lower-level constraints may be achieved while performing higher-level goals, such as leveling the wings during a climb to a new altitude. Thus, Air-Soar supports achievement goals, where the goal is to reach a particular state (such as a new altitude), and homeostatic goals, in which constraints must be continuously maintained [Covrigaru and Lindsay, 1991; Kaelbling, 1986]. Homeostatic goals often interact with achievement goals in the flight domain. Examples include keeping the wings level while taking off, and maintaining the current altitude during a turn. Air-Soar must combine the requirements of the different types of goals to make steady progress along a flight path, without losing control by focusing only on a single aspect of the current task (such as only monitoring the altitude during a climb).

Typically, all of Air-Soar's levels are active simultaneously, trying to maintain or achieve their current goals. This hierarchical approach supports reactive behavior at multiple levels of granularity. Rather than explicitly monitoring the fact that all of the plane's flight parameters are within expected ranges, each problem space notices when the values deviate from constraints it is trying to achieve or maintain, and moves to correct them. These corrections produce changes at lower granularity levels, ultimately resulting in stick commands to control the plane.

Sensitivity at different grain sizes means that Air-Soar is able to respond to a wide range of unexpected events. For instance, after completing a turn, the plane might not be perfectly level, causing the heading to slowly change. Although the rate of change is low, after a while Air-Soar would notice the heading was no longer within range and would turn to correct it.
Alternatively, if a sudden burst of turbulence caused the plane's wings to veer suddenly, then the system would react to the sudden increase in turn rate directly, before the heading had changed enough to be noticed. Air-Soar is currently able to take off, level off and then follow a pre-set flight pattern including a series of turns and altitude changes, returning to land on (or near) the runway. We have simulated "turbulence" during Air-Soar runs by manually moving the mouse controlling the plane's stick while Air-Soar controls the plane. Air-Soar responds immediately to the "turbulence" and continually attempts to keep the plane on course. If the plane is flying level and is pulled out of level flight by the mouse, Air-Soar recovers by responding to the change in the plane's roll to reestablish level flight. Air-Soar's successful execution of the flight plan, together with our experiments with "turbulence", demonstrates the system's ability to perform robustly in a highly reactive, real-time domain. In addition, it highlights Air-Soar's ability to reason about multiple simultaneous goals at various levels of granularity.

References

[Covrigaru and Lindsay, 1991] Arie A. Covrigaru and Robert K. Lindsay. Deterministic autonomous systems. AI Magazine, 12(3):110-117, 1991.

[Kaelbling, 1986] Leslie Pack Kaelbling. An architecture for intelligent reactive systems. In Michael P. Georgeff and Amy L. Lansky, editors, Reasoning about actions and plans: Proceedings of the 1986 Workshop, pages 395-410. Morgan Kaufmann, 1986.

[Laird et al., 1987] John E. Laird, Allen Newell, and Paul S. Rosenbloom. Soar: An architecture for general intelligence. Artificial Intelligence, 33(1):1-64, 1987.

[Pearson et al., 1993] Douglas J. Pearson, Scott B. Huffman, Mark B. Willis, John E. Laird, and Randolph M. Jones.
Intelligent multi-level control in a highly reactive domain. In F.C.A. Groen, S. Hirose, and Charles E. Thorpe, editors, Proceedings of the Third International Conference on Intelligent Autonomous Systems, pages 449-458. IOS Press, 1993.

Video Abstracts 861
| 1993 | 130 |
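The two goal types described in the Air-Soar abstract can be caricatured in a single control step: an achievement goal is dropped once its target state is reached, while a homeostatic goal stays active and emits a correction whenever its constraint is violated. All names and the proportional-correction rule below are invented for illustration; Air-Soar itself realizes this behavior through Soar problem spaces, not explicit gains:

```python
# Minimal sketch of achievement vs. homeostatic goals (invented framing).

def control_step(state, goals):
    """Returns (corrections, remaining_goals). Achievement goals whose
    test succeeds are dropped; every still-active goal whose error
    exceeds its tolerance contributes a proportional correction."""
    corrections, remaining = [], []
    for g in goals:
        if g["kind"] == "achieve" and g["test"](state):
            continue                        # target state reached: drop it
        remaining.append(g)
        err = g["error"](state)
        if abs(err) > g["tolerance"]:
            corrections.append((g["name"], -g["gain"] * err))
    return corrections, remaining
```

With an "achieve altitude 1000" goal and a "maintain roll 0" goal, a state below altitude with the wings tilted yields corrections on both channels at once, mirroring the climb-while-leveling interaction described above.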
Selective Perception for Robot Driving

Douglas A. Reece
Institute for Simulation and Training
University of Central Florida
12424 Research Parkway, Suite 300
Orlando, FL 32826

Steven A. Shafer
School of Computer Science
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15213

Abstract

Robots performing complex tasks in rich environments need very good perception modules in order to understand their situation and choose the best action. Robot planning systems have typically assumed that perception was so good that it could refresh the entire world model whenever the planning system needed it, or whenever anything in the world changed. Unfortunately, this assumption is completely unrealistic in many real-world domains because perception is far too difficult. Robots in these domains cannot use the traditional planner paradigm, but instead need a new system design that integrates reasoning with perception. Our research is aimed at showing how a robot can reason about perception, how task knowledge can be used to select perceptual targets, and how this selection dramatically reduces the computational cost of perception.

The domain addressed in this videotape is driving in traffic. We have developed a microscopic traffic simulator called PHAROS that defines the street environment for our research. PHAROS contains detailed representations of streets, markings, signs, signals, and cars. It can simulate perception and implement commands for a vehicle controlled by a separate program. We have also developed a computational model of driving called Ulysses that defines the driving task. The model describes how various traffic objects in the world determine what actions the robot must take. These tools have allowed us to implement robot driving programs that request sensing actions in PHAROS, reason about right-of-way and other traffic laws, and then command acceleration and lane changing actions to control a simulated vehicle.
The videotape shows three selective perception techniques that we have implemented in driving programs. Each program builds upon the concepts in the previous programs. The first, Ulysses-1, uses perceptual routines to control visual search in the scene. These task-specific routines use known objects to guide the search for others; e.g., a routine scans "along the right side" of "the road ahead" for a sign. The second program, Ulysses-2, decides which objects are the most critical in the current situation and looks for them. It ignores objects that cannot affect the robot's actions. Ulysses-2 creates an inference tree to determine the effect of uncertain input data on action choices, and searches this tree to decide which data to sense. Finally, Ulysses-3 uses domain knowledge to reason about how dynamic objects will move or change over time. Objects that do not move enough to affect the robot can be ignored by perception. The program uses the inference tree from Ulysses-2 and a time-stamped, persistent world model to decide what to look for. When run in the PHAROS world, the techniques included in Ulysses-3 reduced the computational cost for perception by 9 to 12 orders of magnitude when compared to an uncontrolled, general perception system.

Acknowledgment

We would like to thank Jim Kocher of the Robotics Institute at Carnegie Mellon University for his work in putting together this video.

862 Reece
| 1993 | 131 |
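The Ulysses-2 criterion, sense only data that can still change the chosen action, can be sketched by brute-force enumeration over the unsensed inputs (a toy stand-in for the inference-tree search; all names below are invented):

```python
from itertools import product

def relevant_unknowns(decide, unknowns, known):
    """Return the unsensed inputs that can change the decision: those for
    which two worlds differing only in that input get different actions.
    `unknowns` maps input name -> list of possible values; `known` holds
    already-sensed values."""
    names = list(unknowns)
    worlds = [dict(known, **dict(zip(names, vals)))
              for vals in product(*(unknowns[n] for n in names))]
    relevant = set()
    for n in names:
        for w in worlds:
            for v in unknowns[n]:
                w2 = dict(w)
                w2[n] = v
                if decide(w2) != decide(w):
                    relevant.add(n)
    return relevant
```

With a toy rule such as "stop if the signal is red or a car is ahead", the signal and the car are worth sensing while an irrelevant object is ignored, which mirrors, in miniature, Ulysses-2 ignoring objects that cannot affect the robot's actions.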
Computer Vision at the University of Massachusetts

Edward M. Riseman and Allen R. Hanson
with J. Inigo Thomas and Members of the Computer Vision Laboratory
Computer Science Department
University of Massachusetts
Amherst, MA 01003

Abstract

This video first summarizes current research at the University of Massachusetts on mobile vehicle navigation using landmark recognition and a partial 3D world model. We then show how landmarks and world models might be automatically acquired and updated over time.

A fundamental goal in robot navigation is to determine the "pose" of the robot - that is, the position and orientation of the robot with respect to a 3D world model, such as a hallway. In order to determine its pose, the robot identifies modeled 3D landmarks such as doors and baseboards in a 2D image of the hallway. Identifying landmarks involves determining correspondences of extracted image line segments with predicted landmark lines projected into the image. Model matching is achieved by a combinatorial optimization technique (local search) which minimizes the error in the model to data fit. From the model-data feature correspondences thus obtained, the 3D pose of the robot is computed via a non-linear optimization procedure. The best pose requires that lines in the 2D image lie on the planes formed by the corresponding 3D landmark lines and the camera center. Robust statistical methods are employed to detect outliers.

Extension of the initial partial model (over time) is achieved by determining the camera pose over a sequence of images while simultaneously tracking new unmodeled features; triangulation is then used to determine the depth of these new features, allowing them to be incorporated into the 3D model. One of our goals is to automatically construct a 3D model of the environment, without assuming the availability of an initial partial model.
One technique identifies objects in the scene that are shallow in depth and which therefore can be represented by a frontal planar surface at some recovered depth. Using this model we automatically generate a path for the robot that avoids the obstacles. This shallow model has also been successfully used as the initial partial model for model extension from pose. Another 3D inference technique uses motion analysis over a sequence of images. This technique isolates and successfully corrects for a common major source of error (caused by error in the robot's motion) that has been traditionally neglected in other structure-from-motion algorithms. Model extension has also been achieved through a semi-automated method involving invariants from projective geometry. Given four points or lines on a plane, this technique is able to accurately project from the image to the model any number of additional features on the same plane. A test site for the UMass component of the DARPA Unmanned Ground Vehicle program has been established, and a partial 3D model of a portion of the UMass campus has been built. Experiments are underway to use these models for landmark-based autonomous navigation and automatic 3D model acquisition at this test site.

Acknowledgment

This work has been supported in part by the Advanced Research Projects Agency under contract numbers DAAE07-91-C-R035 (via TACOM) and DACA76-92-C-0041, by the Defense Advanced Research Laboratories (via Harry Diamond Labs), and by the National Science Foundation under grant numbers IRI-9208920 and CDA-8922572.

Video Abstracts 863
A Fuzzy Controller for Flakey, the Robot

Alessandro Saffiotti, Nicholas Helft, Kurt Konolige, John Lowrance, Karen Myers, Daniela Musto, Enrique Ruspini, Leonard Wesley
Artificial Intelligence Center, SRI International, Menlo Park, CA 94025

Abstract

SRI International has a long tradition in the field of qualitative analysis and control of complex systems, starting with the development of the early mobile robot Shakey. More recently, we have developed a fuzzy controller for our new platform, Flakey. Flakey's controller can pursue strategic goals while operating under conditions of uncertainty, incompleteness, and imprecision. This controller includes capabilities for:
* Robust, uncertainty-tolerating goal-directed activity.
* Real-time reactivity to unexpected contingencies (e.g., unknown obstacles).
* Blending of multiple goals (e.g., reaching a position while avoiding static and moving obstacles).
In our approach, detailed in [2, 5], each goal is associated with a function that maps each perceived situation to a measure of desirability of possible actions from the point of view of that goal. The notion of a "control structure" is used for representing and manipulating high-level goals (and the associated desirability functions) in the fuzzy controller. Typical control structures are associated with environment features such as locations to reach, walls, or doorways. Each desirability function induces a particular "behavior," one obtained by executing the actions with higher desirability. Many behaviors, induced by many simultaneous goals, can be smoothly blended together by combining their desirability functions using the inferential procedures of fuzzy logic. The fuzzy controller prefers the actions that best satisfy each behavior. Blending of behaviors is the key to combining goal-oriented activity (e.g., trying to reach a given location) and reactivity (e.g., avoiding obstacles on the way).
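The blending of desirability functions can be sketched as follows (a minimal, hypothetical illustration, not SRI's controller): each behavior rates candidate actions in [0, 1], ratings are combined by a fuzzy conjunction weighted by each behavior's context of applicability, and the most desirable action is executed.

```python
# Hypothetical sketch of behavior blending via desirability functions.
# Each behavior maps a candidate action to a desirability in [0, 1].

def blend(behaviors, actions):
    """behaviors: list of (context_weight, desirability_fn) pairs.
    A behavior with context weight w contributes max(1 - w, d(a)), so an
    inactive behavior (w = 0) does not constrain the choice; the fuzzy
    conjunction (minimum) combines the active constraints."""
    def blended(a):
        return min(max(1.0 - w, d(a)) for w, d in behaviors)
    return max(actions, key=blended)

# Toy 1-D steering example: candidate headings in degrees.
actions = [-30, -15, 0, 15, 30]
go_to_goal = lambda a: 1.0 - abs(a - 15) / 60.0    # goal is to the right
avoid_obstacle = lambda a: 0.0 if a >= 0 else 1.0  # obstacle ahead-right

print(blend([(1.0, go_to_goal), (1.0, avoid_obstacle)], actions))
```

With both behaviors fully in context, the controller turns left just enough to satisfy obstacle avoidance while staying as close as possible to the goal heading.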
Our fuzzy controller can execute a full plan, expressed as a set of control structures [3, 4]. Each control structure in a plan is associated with a fuzzy context of applicability, e.g., a corridor to follow when Flakey is near that corridor, and a door to cross when Flakey is close to that door. Sets of control structures can be generated by traditional AI planning techniques, and hence constitute a valuable link between symbolic reasoning systems and continuous control processes. We have performed experiments where Flakey planned and executed navigation tasks in an unmodified office environment during normal office activity. Thanks to the flexibility of fuzzy rules, Flakey only needs a sparse topological map of its environment, annotated with approximate metric information. The performance of our fuzzy controller was also demonstrated at the first international robotic competition of the AAAI, held in July 1992 at San Jose, California [1]. Flakey placed second, and gained special recognition for its smooth and reliable reactivity, as exemplified by one judge's comment: "Only robot I felt I could sit or lie down in front of." (He actually did!)

864 Saffiotti

References

[1] C. Congdon, M. Huber, D. Kortenkamp, K. Konolige, K. Myers, E. H. Ruspini, and A. Saffiotti. Carmel vs. Flakey: A comparison of two winners. AI Magazine, 14(1):49-57, Spring 1993.
[2] E. H. Ruspini. Fuzzy logic in the Flakey robot. Procs. of the Int. Conf. on Fuzzy Logic and Neural Networks (IIZUKA), pages 767-770, Japan, 1990.
[3] A. Saffiotti. Some notes on the integration of planning and reactivity in autonomous mobile robots. Procs. of the AAAI Spring Symposium on Foundations of Automatic Planning, pages 122-126, Stanford, CA, 1993.
[4] A. Saffiotti, K. Konolige, and E. H. Ruspini. Now, do it! Technical report, SRI Artificial Intelligence Center, Menlo Park, California, 1993.
[5] A. Saffiotti, E. H. Ruspini, and K. Konolige.
Integrating reactivity and goal-directedness in a fuzzy controller. Procs. of the 2nd Fuzzy-IEEE Conference, San Francisco, CA, 1993.
William W. Cohen
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974
wcohen@research.att.com

Abstract

A crucial problem in "inductive logic programming" is learning recursive logic programs from examples alone; current systems such as GOLEM and FOIL often achieve success only for carefully selected sets of examples. We describe a program called FORCE2 that uses the new technique of "forced simulation" to learn two-clause "closed" linear recursive ij-determinate programs; although this class of programs is fairly restricted, it does include most of the standard benchmark problems. Experimentally, FORCE2 requires fewer examples than FOIL, and is more accurate when learning from randomly chosen datasets. Formally, FORCE2 is also shown to be a pac-learning algorithm in a variant of Valiant's [1984] model, in which we assume the ability to make two types of queries: one which gives an upper bound on the depth of the proof for an example, and one which determines if an example can be proved in unit depth.

Introduction

An increasingly active area of research in machine learning is inductive logic programming (ILP). ILP systems, like conventional concept learning systems, typically learn classification knowledge from a set of randomly chosen examples; however, ILP systems use logic programs, typically some subset of function-free Prolog, to represent this learned knowledge. This representation can be much more expressive than representations (such as decision trees) that are based on propositional logic; for example, Quinlan [1990] has shown that FOIL can learn the transitive closure of a directed acyclic graph, a concept that cannot be expressed in propositional logic. A crucial problem in ILP is that of learning recursive logic programs from examples alone.
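The transitive-closure concept mentioned above corresponds to the recursive, function-free program `can_reach(X,Y) :- edge(X,Y).` / `can_reach(X,Y) :- edge(X,Z), can_reach(Z,Y).`, which can be sketched as a fixpoint computation over ground facts:

```python
# Sketch of evaluating the transitive-closure program over a ground
# background theory of edge/2 facts (illustrative, not an ILP learner).

def transitive_closure(edges):
    reach = set(edges)              # base clause: edge(X,Y) -> can_reach(X,Y)
    changed = True
    while changed:                  # apply the recursive clause to fixpoint
        changed = False
        for (x, z) in edges:
            for (z2, y) in list(reach):
                if z == z2 and (x, y) not in reach:
                    reach.add((x, y))
                    changed = True
    return reach

edges = {('a', 'b'), ('b', 'c'), ('c', 'd')}
print(sorted(transitive_closure(edges)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```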
Early work in ILP describes a number of systems that learn recursive programs, and some of them have convergence proofs [Shapiro, 1982; Banerji, 1988; Muggleton and Buntine, 1988]; however, these systems rely on fairly powerful queries (e.g., membership and subset queries) to achieve these results. More recent systems, such as FOIL [Quinlan, 1990] and GOLEM [Muggleton and Feng, 1992], have been experimentally successful in learning recursive programs from examples alone. While the results obtained with these systems are impressive, they are limited in two ways.

Non-random samples. In most published experiments in which recursive concepts are learned, the samples used in learning are not randomly selected examples of the target concept; instead, the samples are carefully constructed. For example, in using FOIL to learn the recursive concept member, Quinlan gives as examples all membership relations over the list [a, b, [c]] and its substructures; providing examples for all substructures makes it relatively easy for FOIL's information gain metric to estimate the usefulness of a recursive call. It is unclear to what extent recursive programs can be learned from a random, unprepared sample.

Lack of formal justification. Although parts of the GOLEM algorithm have been carefully analyzed, both FOIL and GOLEM make use of a number of heuristics, which makes them quite difficult to analyze. Ideally, one would like rigorous formal justification for an algorithm for learning recursive concepts, as well as experimental results on problems of practical interest.¹

In this paper, we will describe a new algorithm called FORCE2 for learning a restricted class of recursive logic programs: namely, the class of two-clause "closed" linear recursive ij-determinate programs. This class is fairly restricted, but does include many of the standard benchmark problems, including list reversal, list append, and integer multiply.
¹Recently, the pac-learnability of the class of programs learned by GOLEM has been studied; however, for recursive programs the analysis assumed membership and subset queries [Džeroski et al., 1992].

FORCE2 uses a new technique called forced simulation to choose the recursive call in the learned program. Formally, FORCE2 will be shown to be a pac-learning algorithm for this class; hence it learns against any distribution of examples in polytime, and does not require a hand-prepared sample. Experimentally, FORCE2 requires fewer examples than FOIL, has much less difficulty with samples that have not been prepared, and is fast enough to be practical.

Although FORCE2 makes no queries per se, it does require a certain amount of extra information. In particular, FORCE2 must be given both a characterization of the "depth complexity" of the program to be learned, and also a procedure for determining when an instance is an example of the base case of the recursion. For example, to learn the textbook definition of append, one might supply the following:

MAXDEPTH(append(Xs,Ys,Zs)) = length(Xs)+1.
BASECASE(append(Xs,Ys,Zs)) = if null(Xs) then true else false

The user need give only an upper bound on the depth complexity, not a precise bound, and need give only sufficient (not necessary and sufficient) conditions for membership in the base case. Later we will discuss, from both a formal and practical viewpoint, the degree to which this extra information is needed; there are circumstances under which both the depth bound and the base-case characterization can be dispensed with. First, however, we will describe the class of logic programs that FORCE2 learns, and present the FORCE2 algorithm and its accompanying analysis.
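The two pieces of user-supplied advice for append can be rendered as a minimal sketch, with Python lists standing in for Prolog list terms (hypothetical code, for illustration only):

```python
# Hypothetical rendering of the MAXDEPTH and BASECASE advice for append,
# with Python lists standing in for Prolog terms.

def maxdepth(xs, ys, zs):
    """Upper bound on the depth of the proof of append(Xs,Ys,Zs)."""
    return len(xs) + 1

def basecase(xs, ys, zs):
    """Sufficient condition for an example to be proved by the base clause."""
    return xs == []

print(maxdepth([1, 2], [], [1, 2]))   # 3
print(basecase([], [3], [3]))         # True
```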
The class of learnable programs

In a typical ILP system, the user will provide both a set of examples and a background theory K; the learning system must then find a logic program P such that P ∧ K ⊢ e⁺ for every positive example e⁺, and P ∧ K ⊬ e⁻ for every negative example e⁻. As a concrete example, if the target concept is a function-free version of the usual definition of append, the user might provide a background theory K defining the predicate null(A) to be true when A is the empty list, and defining the predicate components(A,B,C) to be true when A is a list with head B and tail C. The learned program P can then use these predicates: for example, P might be the program

append(Xs,Ys,Ys) ← null(Xs).
append(Xs,Ys,Zs) ← components(Xs,X,Xs1), components(Zs,X,Zs1), append(Xs1,Ys,Zs1).

FORCE2, like both GOLEM and FOIL, requires the background theory K to be a set of ground unit clauses (aka relations, a set of atomic facts, or a model). Like FOIL, our implementation of FORCE2 also assumes that the program to be learned contains no function symbols; however, this restriction is not necessary for our formal results.

³Briefly, a literal Bi is determinate if its output variables have at most one possible binding, given the binding of the input variables. If a variable V appears in the head of a clause, then the depth of V is zero; otherwise, if Bi is the first literal containing the variable V and d is the maximal depth of the input variables of Bi, the depth of V is d + 1. Finally, a clause is ij-determinate if its body contains only literals of arity j or less, if all literals in the body are determinate, and if the clause contains only variables of depth less than or equal to i. Muggleton and Feng argue that many common recursive logic programs are ij-determinate for small i and j.

The actual programs learned by the FORCE2 procedure must satisfy four additional restrictions.
First, a learned program must contain exactly two clauses: one recursive clause, and one clause representing the base case for the recursion. Second, the recursive clause must be linearly recursive: that is, the body of the clause must contain only a single "recursive literal," where a recursive literal is one with the same principal functor as the head of the clause. Third, the recursive literal must be closed: a literal is closed if it has no "output variables."² Finally, the program must contain no function symbols, and must be ij-determinate. The condition of ij-determinacy was first used in the GOLEM system, and variations of it have subsequently been adopted by FOIL [Quinlan, 1991], LINUS [Lavrač and Džeroski, 1992], and GRENDEL [Cohen, 1993b]. It is defined in detail by Muggleton and Feng [1992].³

The learning algorithm

The FORCE2 algorithm is summarized in Figure 1. The algorithm has two explicit inputs: a set of positive examples S⁺, and a set of negative examples S⁻. There are also some additional inputs that (for readability) we have made implicit: the user also provides a function MAXDEPTH that returns an upper bound on the depth complexity of the target program, a function BASECASE that returns "true" whenever e is an example of the base clause, a background theory K (defining only predicates of arity j or less), and a depth bound i. The output of FORCE2 is a two-clause linear and closed recursive ij-determinate logic program. The FORCE2 algorithm takes advantage of an important property of ij-determinate clauses: for such clauses, the relative least general generalization⁴ of a set of examples is of size polynomial in the number of predicates defined in K, is unique, and can be tractably computed.

⁴The rlgg of a set of examples S (with respect to a background theory K) is the least general clause C such that ∀e ∈ S, C ∧ K ⊢ e.
FORCE2 actually uses two rlgg operators: one that finds the least general clause that covers a set of examples, and one that takes an initial clause C0 and a single example e and returns the least general clause C1 ≥ C0 that covers e.

[Figure 1: Pseudocode for FORCE2(S⁺,S⁻) and its subroutine force-sim. FORCE2 computes Cbase and Crec as rlggs of the base-case and non-base-case positive examples, then for each candidate recursive literal Lr over variables(Crec) forcibly simulates every positive example, generalizing Cbase and Crec as needed; force-sim signals an error if any variable of Lr disappears from the new Crec or if it has recursed more than MAXDEPTH(e0) times. The first resulting program P consistent with S⁻ is returned; otherwise FORCE2 returns "no consistent program".]

The first step of the algorithm is to use the BASECASE function to split the positive examples into two sets: the examples of the base clause, and the examples of the recursive clause. The rlggs Cbase and Crec of these two sets of examples are then computed; informally, these rlggs will be used as initial guesses at the base clause and the recursive clause respectively.

²If Bi is a literal of the (ordered) Horn clause A ← B1,...,Bl, then the input variables of the literal Bi are those variables appearing in Bi that also appear in the clause A ← B1,...,Bi−1; all other variables appearing in Bi are called output variables.

Complexity in Machine Learning 87
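The least general generalization at the core of these rlgg operators can be sketched by anti-unifying two atoms (a toy, hypothetical version: FORCE2's operator is relative to the background theory K and, as the paper notes, introduces explicit equality literals rather than repeated variables):

```python
# Toy anti-unification (lgg) of two atoms, with terms as nested tuples:
# lists are cons pairs (head, tail) and () is the empty list. This is only
# the term-level core; the real rlgg is relative to K.

def lgg(t1, t2, table=None):
    if table is None:
        table = {}
    if t1 == t2:
        return t1
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        # structurally compatible terms: generalize argument-wise
        return tuple(lgg(a, b, table) for a, b in zip(t1, t2))
    # each distinct pair of differing terms maps to one fresh variable,
    # so repeated differences are captured by a shared variable
    key = (t1, t2)
    if key not in table:
        table[key] = "V%d" % len(table)
    return table[key]

# lgg of append([1,2],[9],[1,2,9]) and append([3],[9],[3,9]):
a1 = ('append', (1, (2, ())), (9, ()), (1, (2, (9, ()))))
a2 = ('append', (3, ()),      (9, ()), (3, (9, ())))
print(lgg(a1, a2))
# ('append', ('V0', 'V1'), (9, ()), ('V0', ('V2', 'V3')))
```

Note how the shared variable V0 records that the head of the first argument equals the head of the third argument, exactly the kind of regularity the rlgg preserves.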
(More accurately, Crec is a guess at the non-recursive portion of the recursive clause being learned.) As an example, we used FORCE2 to learn the append program (given as an example on page 2) from a small set of examples; the BASECASE and MAXDEPTH functions were as given on page 2. The rlgg of the three positive examples of the base case was the following:⁵

Cbase: append(A,B,C) ← components(B,D,E), null(A), B=C.

The rlgg of the non-base-case positive examples was the following:

Crec: append(A,B,C) ← components(A,D,E), components(C,F,G), D=F.

⁵As the examples show, our algorithm for constructing lggs is a bit nonstandard. A distinct variable is placed in every position that could contain an output variable, and equalities are then expressed by explicitly adding equality literals (like A=C and D=F in the examples above). This encoding for an lgg is not the most compact one; however, it is only polynomially larger than necessary, and simplifies the implementation. With the usual encoding the enumeration of recursive literals must be interleaved with the force-sim routine.

Notice that Cbase is over-specific, since it requires B to be a non-empty list, and also that Crec does not contain a recursive call. The remaining steps of the algorithm are designed to find an appropriate recursive call. From the way in which Cbase and Crec were constructed, one might suspect that adding a recursive literal Lr to Crec would yield the least general program (in our class) that uses that recursive call and is consistent with the positive data; however, this is false. Consider adding the (correct) recursive call Lr = append(E,B,G) to Crec, to yield the program

append(A,B,C) ← components(B,D,E), null(A), B=C.
append(A,B,C) ← components(A,D,E), components(C,F,G), D=F, append(E,B,G).

Now consider the non-basecase positive example e = append([2],[],[2]).
Example e is covered by Crec, but not by the program above; the problem is that the subgoal e′ = append([],[],[]) is not covered by the base clause. The subroutine force-sim handles this problem. Conceptually, it will simulate the hypothesized logic program on e, except that when the logic program would fail on e or any subgoal of e, the rlgg operator is used to generalize the program so that it will succeed. The effect is to construct the least general program with the recursive call Lr that covers e.

We will illustrate this crucial subroutine by example. Suppose FORCE2 has chosen the (correct) recursive literal Lr = append(E,B,G), using the Cbase and Crec given above, and consider forcibly simulating the positive example append([1,2],[],[1,2]). The subroutine force-sim determines that e is not a BASECASE; thus it will generalize Crec to cover e by replacing Crec with the rlgg of Crec and e; in this case Crec is unchanged. The next step is to continue the simulation by determining what subgoal would be generated by the proposed recursive call; in this case, the recursive call append(E,B,G) would generate the subgoal e′ = append([2],[],[2]). The subroutine force-sim is then called on e′, and determines that it is not a BASECASE; again, replacing Crec with its rlgg against e′ leaves Crec unchanged. Finally, the routine again computes the recursive subgoal e″ = append([],[],[]), and recursively calls force-sim on e″. It determines that e″ is a BASECASE, and generalizes Cbase to cover it, again using the rlgg operator. The final result is that Crec is unchanged, and that Cbase has been generalized to

append(A,B,C) ← null(A), B=C.

This is the least generalization of the initial program (in the class of closed linear recursive two-clause programs) that covers the example append([1,2],[],[1,2]).
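The control loop of this simulate-and-generalize process can be sketched as follows (a hypothetical, list-level sketch: the real force-sim generalizes Cbase and Crec by rlgg at each step, while here we only follow the chain of subgoals):

```python
# Hypothetical sketch of the forced-simulation control loop for append.
# subgoal() plays the role of the chosen recursive literal Lr: it maps an
# example to the subgoal the recursive clause would generate.

def basecase(xs, ys, zs):
    return xs == []

def maxdepth(xs, ys, zs):
    return len(xs) + 1

def force_sim(example, subgoal):
    """Follow recursive subgoals until a base case is reached; return None
    (signal an error) if the MAXDEPTH bound is violated, as happens for
    looping choices of the recursive literal."""
    chain = [example]
    bound = maxdepth(*example)
    while not basecase(*chain[-1]):
        if len(chain) > bound:
            return None                 # looping recursive literal
        chain.append(subgoal(*chain[-1]))
    return chain

# Correct literal Lr = append(E,B,G): subgoal on the tails of Xs and Zs.
good = lambda xs, ys, zs: (xs[1:], ys, zs[1:])
# A looping literal that regenerates the same subgoal forever.
bad = lambda xs, ys, zs: (xs, ys, zs)

print(force_sim(([1, 2], [], [1, 2]), good))
# [([1, 2], [], [1, 2]), ([2], [], [2]), ([], [], [])]
print(force_sim(([1, 2], [], [1, 2]), bad))
# None
```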
For an incorrect choice of Lr, forced simulation may fail: for example, if FORCE2 were testing the incorrect recursive literal Lr = append(A,A,C), then given the example append([1,2],[],[1,2]), force-sim would loop, repeatedly generating the same subgoal e′ = e″ = e‴ = .... This loop would be detected when the depth bound was violated, and an error would be signaled, indicating that no valid generalization of the program covers the example. For incorrect but non-looping recursive literals, forced simulation may lead to an over-general hypothesis: for example, given the incorrect recursive literal Lr = append(B,A,E) and the example append([1,2],[],[1,2]), force-sim would subgoal to e′ = append([],[1,2],[2]), and generalize Cbase to the clause append(A,B,C) ← null(A). With sufficient negative examples, this overgenerality would be detected.

The remainder of the algorithm is straightforward. The inner for loop of the FORCE2 algorithm uses repeated forced simulations to find a least general hypothesis that covers all the positive examples, given a particular choice for a recursive literal Lr; if this least general hypothesis exists and is consistent with the negative examples, it will be returned as the hypothesis of the learner. The outer for loop exhaustively tests all of the possible recursive literals; it is not hard to see that for a fixed maximal arity j there are polynomially many possible Lr's.⁶

⁶Since Crec is of polynomial size, it contains polynomially many variables. Let p(n) be the number of variables in Crec and t be the arity of the target predicate; there are at most p(n)^t ≤ p(n)^j recursive literals.

Formalizing the preceding discussion leads to the following theorem.

Theorem 1. If the training data is labeled according to some two-clause linear and closed recursive ij-determinate logic program, then FORCE2 will output a least general hypothesis consistent with the training data.
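The outer loop's enumeration of candidate recursive literals can be sketched directly (hypothetical code): a closed recursive literal draws each of its t arguments from the variables already in Crec, giving at most p^t candidates, polynomial for fixed arity.

```python
# Hypothetical sketch: enumerating closed recursive literals Lr for a
# target predicate of arity t over the p variables of Crec (at most p**t).
from itertools import product

def candidate_recursive_literals(pred, variables, arity):
    return [(pred,) + args for args in product(variables, repeat=arity)]

cands = candidate_recursive_literals('append',
                                     ['A', 'B', 'C', 'D', 'E', 'F', 'G'], 3)
print(len(cands))                        # 7**3 = 343
print(('append', 'E', 'B', 'G') in cands)   # the correct choice is included
```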
Furthermore, assuming that MAXDEPTH and BASECASE can be computed in polynomial time, for any fixed i and j, FORCE2 runs in time polynomial in the number of predicates defined in K, the number of training examples, and the largest value of MAXDEPTH for any training example.

It is known that ij-determinate clauses are of size polynomial in the number of background predicates; this implies that the VC-dimension [Blumer et al., 1986] of the class of 2-clause ij-determinate programs over a background theory K is polynomial in the size of K. Thus an immediate consequence of the preceding theorem is the following:

Corollary 1. For any background theory K, FORCE2 is a pac-learning algorithm [Valiant, 1984] for the concept class of two-clause linear and closed recursive ij-determinate logic programs over K.

To our knowledge, this is the first result showing the polynomial learnability of any class of recursive programs in a learning model disallowing queries. It might be argued that the MAXDEPTH and BASECASE functions act as "pseudo-queries," and hence that our learning model is not truly passive. It should be noted, however, that the MAXDEPTH bound is not needed for the formal results, as the constant depth bound of (j|K|)^j is sufficient to detect looping. (In practice, however, learning is much faster with an appropriate depth bound.) The BASECASE function is harder to dispense with, but even here a positive formal result is obtainable. If one assumes that the base clause is part of K, it is reasonable to consider the learnability of the class of one-clause linear and closed recursive ij-determinate logic programs over K. It can be shown that this class is pac-learnable, using a slightly modified version of FORCE2 that never changes the base clause.

This pac-learnability result can also be strengthened in a number of technical ways.
A variant of the FORCE2 algorithm can be shown to achieve exact identification from equivalence queries; this learning criterion is strictly stronger than pac-learnability [Blum, 1990]. Another extension is suggested by the fact that FORCE2 returns a program that is maximally specific, given a particular choice of recursive call. Instead of returning a single program, one could enumerate all (polynomially many) consistent least programs; this could be used to tractably encode the version space of all consistent programs using the [S, N] representation for version spaces [Hirsh, 1992].

  %full   FOIL error   FORCE2 error (CPU)
    2       6.10%        2.60%  (20)
    5       5.10%        1.10%  (21)
   10       4.80%        0.73%  (33)
   20       3.00%        0.14%  (63)
   40       1.70%        0.00% (121)
   60       0.84%        0.00% (167)
   80       0.19%        0.00% (216)
  100       0.00%        0.00% (294)

Table 1: FORCE2 vs. FOIL on subsets of a good sample of multiply

              "natural" samples            variant distributions
  Problem     FOIL    FORCE2    comment      FOIL    FORCE2   Straw1   Straw2
  append      1.7%     0.0%    near miss    27.7%     0.0%    27.3%    40.3%
  heapsort    3.7%     0.0%    near miss     7.5%     0.0%    19.7%     0.0%
  member     13.4%     0.0%    length<10    15.1%     0.0%    41.1%     0.0%
  reverse1    5.4%     0.0%    near miss    13.9%     0.0%     8.4%     8.0%

Table 2: FORCE2 vs. FOIL on random samples

Experimental results

To further evaluate FORCE2, the algorithm was implemented and compared to FOIL4, the most recent implementation of FOIL. The first experiment we performed used the multiply problem, which is included with Quinlan's distribution of FOIL4.⁷ We presented progressively larger random subsets of the full dataset to FOIL and FORCE2 and estimated their error rates using the usual crossvalidation techniques. The results, shown in Table 1, show that FORCE2 generalizes more quickly than FOIL. For this dataset, guessing the most prevalent class gives about a 4.5% error rate; thus FOIL's performance for datasets less than 20% complete is actually quite poor.
CPU times’ are also given for FORCE2, showing that FORCE2’s run time scales linearly with the num- ber of examples. No systematic time comparison with FOIL4 was attempted, as FORCE2 is implemented in Prolog, and FOIL4 in C. However we noted that even though FORCE2 is about 30 times slower than FOIL4 on the full hand-prepared dataset (294 seconds to 9 seconds) their speeds are were comparable on random subsamples of multiply: overall, FORCE2 averaged 91.6 seconds a run, and FOIL4 averaged 88.6 seconds a run. Run times for FORCE2 on other benchmarks (reported below) were also comparable: FORCE2 av- eraged 7 seconds for problems in Table 2 to FOIL4’s 10.9 seconds. Table 2 compares FORCE2 to FOIL on a number of other benchmarks, using training sets of 100 ex- amples; the intent of this experiment was to evalu- ate performance on randomly-selected samples, and to determine how sensitive performance is to the distri- 7The distributed dataset contains 1056 examples: all positive examples of m&(X, Y,Z) where X and Y are both less than seven, and all negative examples muZt(X, Y,Z) where X, Y, and 2 appear as arguments to some positive example. All results are averaged over five trials. 81n seconds on a Spare l+. 90 Cohen bution. First we presented FOIL and FORCE2 with samples generated by what seemed to us the most nat- ural random procedures. g These results are shown in the left-hand column. Next, we compared FORCE2 and FOIL on some harder variations of the “natural” distribution;” these results are shown in the right- hand column. FOIL’s performance degraded notice- ably on these distributions, but FORCE2’s was un- changed. There are (at least) two possible explanations for these results. First, FORCE2 has a more restricted bias for FOIL; thus the well-known generality-power tradeoff would predict that FORCE2 would outper- form FOIL. Second, FORCE2 uses a more sophisti- cated method of choosing recursive literals than FOIL. 
FORCE2 evaluates a recursive literal by using forced simulation of the positive examples, followed by a consistency test against the negative examples. FOIL uses information gain (a heuristic metric based on coverage of the positive and negative examples) to choose all literals, including recursive ones. Application of any coverage-based metric to recursive literals is not straightforward: in general one does not know if a recursive call to a predicate P will succeed or fail, because the full definition for P has not been learned. To circumvent this problem, FOIL uses the examples of P as an oracle for the predicate P: a subgoal of P is considered to succeed when there is a positive example that unifies with the subgoal.

⁹For member, a random list was generated by choosing a number L uniformly between one and four, then building a random list of length L over the atoms a,...,z; positive examples were constructed by choosing a random element of the list, and negative examples by choosing a random non-element of the list. For append, positive examples were generated by choosing two lists at random (here allowing null lists) as the first two arguments, and negative examples by choosing three lists at random. Examples for reverse1 and heapsort were generated in a similar way; reverse1 is a naive reverse program where the background theory defines a predicate that appends a single element to the tail of a list. Equal numbers of positive and negative examples were generated for all problems, and error rates were estimated using 1000 examples generated from the same distribution.
¹⁰On member, we increased the maximum length of a list from five to ten. On the remaining problems, we generated negative examples by taking a random positive example and randomly changing one element of the final argument; these "near miss" examples provide more information, but are harder to distinguish from the positive examples.
A disadvantage of FOIL's method is that for random samples, the dataset can be a rather noisy oracle. In addition to making it harder to choose the right recursive literals, this means that FOIL's assessment of which examples are covered by a clause may be inaccurate, causing FOIL to learn redundant or inaccurate clauses.

To clarify the reasons for the difference in performance, we performed a final study in which we ran two "strawmen" learning algorithms on the harder distributions. Straw1 outputs FORCE2's initial non-recursive hypothesis without adding any recursive literal; Straw2 adds a single closed recursive literal chosen using the information gain metric.

Our study indicates that both explanations are partially correct. On the member and heapsort problems, the good performance of Straw2 indicates that information gain is sufficient to choose the correct recursive literal, and hence the primary reason for FORCE2's superiority is its more restrictive bias.¹¹ However, for reverse1 and append, even the strongly-biased Straw2 does poorly, indicating that on these problems information gain is not effective at choosing an accurate recursive literal; for these problems, particularly append, FOIL's broader bias may actually be helpful.¹²

Concluding remarks

This paper has addressed the problem of learning recursive logic programs from examples alone against an arbitrary distribution. We presented FORCE2, a procedure that learns the restricted class of two-clause linear and closed recursive ij-determinate programs; this class includes many of the standard benchmark problems. Experimentally, FORCE2 requires fewer examples than FOIL, and is less sensitive to the distribution of examples.

¹¹Examining traces of FOIL4 on these problems also supports this view.
For member, FOIL4 learned the correct program twice and non-recursive approximations with high error rates three times; due to the “noisy oracle problem” FOIL4 believes the correct program to have an error rate of more than 30%, so has no good reason for preferring it. On the heupsort problem FOIL4 typically learns several re- dundant clauses. Thus, for both problems, a bias toward two-clause recursive programs would be beneficial. 12Unlike Straw2, FOIL has the option of learning a non- recursive approximation of the target concept, rather than making a forced choice among recursive literals based on inadequate statistics. of examples. More importantly, FORCE2 can be proved to pac- learn any program in this class. This result is sur- prising, as previous positive results have either con- sidered only nonrecursive concepts (e.g. [Page and Frisch, 19921) or have assumed the ability to make membership or subset queries (e.g. [Shapiro, 1982; Banerji, 1988; DZeroski et al., 19921). We make use of two additional sources of information namely the BASECASE and MAXDEPTH functions; however only one of these (the BASECASE function) is neces- sary for our formal results. Also, as noted in the dis- cussion following Theorem 1 a positive result can be obtained for a slightly more restricted language from labeled examples alone. There are a number of possible extensions to the FORCE2 algorithm. Although the BASECASE func- tion is formally necessary for pat-learning,13 it is prob- ably unnecessary in practice, given a distribution that provides a reasonable number of instances of the base case. Learners like GOLEM and FOIL can learn non- recursive clauses relatively easily; thus one might first learn the base case(s) 9 and then use a slightly modi- fied version of FORCE2 to learn the recursive clause.14 Such a learner would be more useful, albeit harder to analyze. It would also be highly desirable to extend the class of learnable programs. 
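The role of a MAXDEPTH-style bound can be illustrated with a depth-bounded evaluator: cutting recursion off at the bound makes evaluation of a candidate recursive program always terminate. The member program and the cutoff policy below are stand-ins for illustration, not FORCE2's internals.

```python
def member(x, lst, depth_bound):
    """Depth-bounded evaluation of the usual two-clause member program.
    Recursion that would exceed the bound is simply deemed to fail."""
    if depth_bound < 0:
        return False                    # exceeded the depth bound: fail
    if not lst:
        return False                    # no clause applies to the empty list
    if lst[0] == x:                     # base clause: member(X, [X|_])
        return True
    return member(x, lst[1:], depth_bound - 1)   # recursive clause

print(member("c", ["a", "b", "c"], depth_bound=5))   # -> True
print(member("c", ["a", "b", "c"], depth_bound=1))   # -> False: bound too small
```

This also makes concrete why the modified algorithm's running time can be exponential in the depth bound: an exhaustive simulation may explore every recursion path down to that depth.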
FORCE2 can be easily modified to learn 2-clause programs with k closed recursive calls: one simply enumerates all k-tuples of recursive literals L_r1, ..., L_rk, and modifies force-sim to subgoal from an example to the appropriate k subgoals e_1, ..., e_k. This learning algorithm generates a consistent clause, but can take time exponential in the depth bound given by MAXDEPTH, and hence is suitable only for problems with a logarithmic depth bound.15

Although space does not permit a full explanation, it is also possible to extend FORCE2 to learn non-closed 2-clause recursive programs. (In brief, an initial recursive clause is found by computing an rlgg and guessing a recursive call L_r, as before, and then adding to the clause all additional ij-determinate literals that use the output variables of L_r and that succeed for every non-BASECASE positive example. The forced simulation procedure is also replaced with a procedure that actually simulates the execution of the program P on each positive example e, generalizing P as necessary by deleting failing literals.)

On the other hand, several hardness results are known if the ij-determinacy condition is relaxed, or if less limited recursion is allowed; for example, learning a 2-clause ij-determinate program with a polynomial number of recursive calls is cryptographically hard, even if the base case for the recursion is known [Cohen, 1993a]. In another recent paper [Cohen, 1993c] we also show that learning an ij-determinate program with a polynomial number of linear recursive clauses is cryptographically hard.

13Without it, learning 2-clause ij-determinate programs is as hard as learning 2-term DNF, which is known to be intractable [Kearns et al., 1987].

14The modification is to disallow changes to the base clause(s).

15This is unsurprising, since the time complexity of the learned program can be exponential in its depth complexity.

Complexity in Machine Learning 91
However, the learnability of ij-determinate programs with a constant number of clauses, each with a constant number of recursive calls, remains open.

Acknowledgements

Thanks to Haym Hirsh for comments on a draft of this paper, and Susan Cohen for help in proofreading.

References

(Banerji, 1988) Ranan Banerji. Learning theories in a subset of polyadic logic. In Proceedings of the 1988 Workshop on Computational Learning Theory, Boston, Massachusetts, 1988.

(Blum, 1990) Avrim Blum. Separating PAC and mistake-bound learning models over the boolean domain. In Proceedings of the Third Annual Workshop on Computational Learning Theory, Rochester, New York, 1990. Morgan Kaufmann.

(Blumer et al., 1986) Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred Warmuth. Classifying learnable concepts with the Vapnik-Chervonenkis dimension. In 18th Annual Symposium on the Theory of Computing. ACM Press, 1986.

(Cohen, 1993a) William Cohen. Cryptographic limitations on learning one-clause logic programs. In Proceedings of the Tenth National Conference on Artificial Intelligence, Washington, D.C., 1993.

(Cohen, 1993b) William Cohen. Rapid prototyping of ILP systems using explicit bias. In preparation, 1993.

(Cohen, 1993c) William W. Cohen. Learnability of restricted logic programs. In Proceedings of the Workshop on Inductive Logic Programming, Bled, Slovenia, 1993.

(Džeroski et al., 1992) Sašo Džeroski, Stephen Muggleton, and Stuart Russell. PAC-learnability of determinate logic programs. In Proceedings of the 1992 Workshop on Computational Learning Theory, Pittsburgh, Pennsylvania, 1992.

(Hirsh, 1992) Haym Hirsh. Polynomial-time learning with version spaces. In Proceedings of the Tenth National Conference on Artificial Intelligence, San Jose, California, 1992. MIT Press.

(Kearns et al., 1987) Michael Kearns, Ming Li, Leonard Pitt, and Les Valiant. On the learnability of boolean formulae. In 19th Annual Symposium on the Theory of Computing.
ACM Press, 1987.

(Lavrač and Džeroski, 1992) Nada Lavrač and Sašo Džeroski. Background knowledge and declarative bias in inductive concept learning. In K. P. Jantke, editor, Analogical and Inductive Inference: International Workshop AII'92. Springer Verlag, Dagstuhl Castle, Germany, 1992. Lecture Notes in Artificial Intelligence #642.

(Muggleton and Buntine, 1988) Stephen Muggleton and Wray Buntine. Machine invention of first order predicates by inverting resolution. In Proceedings of the Fifth International Conference on Machine Learning, Ann Arbor, Michigan, 1988. Morgan Kaufmann.

(Muggleton and Feng, 1992) Stephen Muggleton and Cao Feng. Efficient induction of logic programs. In Inductive Logic Programming. Academic Press, 1992.

(Page and Frisch, 1992) C. D. Page and A. M. Frisch. Generalization and learnability: A study of constrained atoms. In Inductive Logic Programming. Academic Press, 1992.

(Quinlan, 1990) J. Ross Quinlan. Learning logical definitions from relations. Machine Learning, 5(3), 1990.

(Quinlan, 1991) J. Ross Quinlan. Determinate literals in inductive logic programming. In Proceedings of the Eighth International Workshop on Machine Learning, Ithaca, New York, 1991. Morgan Kaufmann.

(Shapiro, 1982) Ehud Shapiro. Algorithmic Program Debugging. MIT Press, 1982.

(Valiant, 1984) L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11), November 1984.

92 Cohen
Learnability in Inductive Logic Programming: Some Basic Results and Techniques

Michael Frazier* and C. D. Page, Jr.†
Department of Computer Science and Beckman Institute
University of Illinois
Urbana, IL 61801

Abstract

Inductive logic programming is a rapidly growing area of research that centers on the development of inductive learning algorithms for first-order definite clause theories. An obvious framework for inductive logic programming research is the study of the pac-learnability of various restricted classes of these theories. Of particular interest are theories that include recursive definite clauses. Because little work has been done within this framework, the need for initial results and techniques is great. This paper presents results about the pac-learnability of several classes of simple definite clause theories that are allowed to include a recursive clause. In so doing, the paper uses techniques that may be useful in studying the learnability of more complex classes.

1. Introduction

Inductive logic programming is a rapidly-growing area of research at the intersection of machine learning and logic programming [Muggleton, 1992]. It focuses on the design of algorithms that learn (first-order) definite clause theories from examples. A natural framework for research in inductive logic programming is the investigation of the learnability/predictability of various classes of definite clause theories, particularly in the models of pac-learnability [Valiant, 1984] and learning by equivalence queries [Angluin, 1988]. Surprisingly little work has been done within this framework, though interest is rising sharply [Džeroski et al., 1992; Cohen and Hirsh, 1992; Ling, 1992; Muggleton, 1992; Page and Frisch, 1992; Arimura et al., 1992].

This paper describes new results on the learnability of several restricted classes of simple (two-clause) definite clause theories that may contain recursive clauses; theories with recursive clauses appear to be the most difficult to learn.
The positive results are proven for learning by equivalence queries, which implies pac-learnability [Angluin, 1988]. In obtaining the results, we introduce techniques that may be useful in studying the learnability of other classes of definite clause theories with recursion.

*mfrazier@cs.uiuc.edu. Supported in part by NSF Grant IRI-9014840, and by NASA Grant NAG 1-613.

†dpage@cs.uiuc.edu. Supported in part by NASA Grant NAG 1-613.

1The classes H1 and Hk (discussed next) have one other restriction, that variables are stationary. This restriction is defined later.

The results are presented with the following organization. Section 2 describes the learning model. Section 3 shows that the class H1, whose concepts are built from unary predicates, constants, variables, and unary functions, is learnable. Section 4 shows that the class H*, an extension of H1 that allows predicates of arbitrary arity, is not learnable under a reasonable complexity-theoretic assumption.1 Nevertheless, Section 4 also shows that each subclass Hk of H*, in which predicates are restricted to arity k, is learnable in terms of a slightly more general class, and is therefore pac-predictable. The prediction algorithm is a generalization of the learning algorithm in Section 3. The results of Section 4 leave open the questions of whether (1) H* is pac-predictable and (2) Hk is pac-learnable. Section 5 relates our results to other work on the learnability of definite clause theories in the pac-learning or equivalence query models [Džeroski et al., 1992; Page and Frisch, 1992].

2. The Model

The examples provided to algorithms that learn concepts expressed in propositional logic traditionally have been truth assignments, or models. Such an example is positive if and only if it satisfies the concept. But concepts in first-order logic may (and almost always do) have infinite models.
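For instance, even a single recursive clause already has an infinite least model, so individual ground atoms are classified instead. The sketch below forward-chains a hypothetical even-number theory for a bounded number of rounds; the theory and the tuple encoding of terms are illustrative assumptions, not one of the paper's formal examples.

```python
def s(t):
    """The successor function symbol, encoded as a nested tuple."""
    return ("s", t)

def least_model(n_rounds):
    """Forward-chain the theory  even(0)  and  even(x) -> even(s(s(x)))
    for n_rounds rounds, approximating its (infinite) least Herbrand model."""
    atoms = {("even", "0")}                          # base clause
    for _ in range(n_rounds):
        atoms |= {("even", s(s(x))) for (_, x) in atoms}   # recursive clause
    return atoms

model = least_model(5)
print(("even", s(s("0"))) in model)   # even(s(s(0))): a positive example -> True
print(("even", s("0")) in model)      # even(s(0)): a negative example -> False
```

Each additional round produces a strictly larger atom, so no finite set of ground atoms exhausts the model; a ground atom is a positive example exactly when it eventually appears in this closure.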
Therefore, algorithms that learn definite clause theories typically take logical formulas, usually ground atomic formulas, as examples instead. Such an example is positive if and only if it is a logical consequence of the concept. The algorithms in this paper use ground atomic formulas (atoms) as examples in this manner. A concept is used to classify ground atoms according to the atoms' truth value in the concept's least Herbrand model,2 which is to say, according to whether the atoms logically follow from the concept. For example, the concept [p(f(g(c)))] ∧ [p(f(x)) → p(f(g(x)))], which is in H1, classifies p(f(g(c))) and p(f(g(g(g(g(c)))))) as true or positive, while it classifies p(f(c)) as false or negative. If A and B are two concepts that have the same least Herbrand model, we say they are equivalent, and we write A ≡ B.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

In a learning problem, a concept C, called the target, is chosen from some class of concepts C and is hidden from the learner. Each concept classifies each possible example element x from a set X, called the instance space, as either positive or negative. We require the existence of an algorithm that, for every C ∈ C and every x ∈ X, efficiently (polynomial-time) determines whether C classifies x as positive. (Such an algorithm exists for each concept class introduced in this paper.) The learner infers some concept C′ based on information about how the target C classifies the elements of the instance space X. For each of our learning problems, the concept class C is a class of definite clause theories, and we require that any learning algorithm A must, for any C ∈ C, produce a concept C′ ∈ C such that C′ ≡ C, that is, that C and C′ have the same least Herbrand model. (For predictability we remove the requirement that C′ belong to C, though we still must be able to efficiently determine how C′ classifies examples.)
The instance space X is the Herbrand universe of C, and the learning algorithm A is able to obtain information about the way C classifies elements of X only by asking equivalence queries, in which A conjectures some C′ and is told by an oracle whether C′ ≡ C. If C′ ≢ C, A is provided a counterexample x that C′ and C classify differently.

We close this section by observing that the union of several classes can be learned by interleaving the learning algorithms for each class.

Fact 1. Let p(n) be a polynomial in n, and let {Ci : 1 ≤ i ≤ p(n)} be concept classes with learning algorithms {Ai : 1 ≤ i ≤ p(n)} having time complexities {T_Ai : 1 ≤ i ≤ p(n)} respectively. Then the concept class ∪_{i=1}^{p(n)} Ci can be learned in time max_{1≤i≤p(n)}(p(n) · T_Ai).

3. The Class H1

The concept class H1 is the class of concepts that can be expressed as a conjunction of at most two simple clauses, where a simple clause is a positive literal (an atom) composed of unary predicates and unary or 0-ary functions, or an implication between two such positive literals. As an example, the following is a concept in H1 that we have seen already:

  [p(f(g(c)))] ∧ (∀x)[p(f(x)) → p(f(g(x)))]

Since our conjuncts are always universally quantified, we henceforth leave the quantification implicit. Thus the above concept is written

  [p(f(g(c)))] ∧ [p(f(x)) → p(f(g(x)))]

We can divide H1 into two classes: trivial concepts, which are equivalent (≡) to conjunctions of at most two atoms, and non-trivial, or recursive, concepts. The trivial concepts of H1 can be learned easily.3 We next describe an algorithm that learns the non-trivial, or recursive, concepts in H1.

2The least Herbrand model of a set of definite clauses (every such set has one) is sometimes referred to as the model or the unique minimal model of the set. The set of definite clauses entails a logical sentence if and only if the sentence is true in this model. It is often useful to think of this model as a set, namely, the set of ground atoms that it makes true.
It follows that H1 is learnable, since we can interleave this algorithm with the one that learns the trivial concepts of H1. It can be shown that the recursive concepts in H1 have the form

  [P(t1)] ∧ [P(t2(x)) → P(t3(x))]    (1)

where t1 is a term, and t2(x) and t3(x) are terms ending in the same variable, x.4 The fact that the functions and predicates are unary leads to a very concise description of a recursive concept in H1. Specifically, we can drop all parentheses in and around terms. Further, since we are discussing recursive concepts, all predicate symbols are the same and can likewise be dropped. Thus any concept having the form of concept (1) may be written [αe] ∧ [βx → γx], or

  { αe
    βx → γx }    (2)

where α, β, and γ are strings of function symbols, x is a variable, and e is either a constant or a variable. Using this notation, determining whether, for example, αe unifies with βx requires only determining whether either α is a prefix of β or β is a prefix of α. For any strings α and β, if α is a prefix of β then we write α < β.

Since we are speaking now of recursive concepts only, we refer to the two parts of the concept as the base atom and the recursive clause. The atoms in the least Herbrand model that are not instances of the base atom in the concept are generated by applying the recursive clause. A concept is equivalent to a conjunction of two atoms if its recursive clause can be applied at most once.

3The basic idea is that the learning algorithm, by using equivalence queries, is able to obtain one example for each of the (at most two) atoms in the concept. Only a few atoms are more general than (that is, have as instances) each example, and the algorithm conjectures all combinations of these atoms, one for each of the (at most two) examples.

4The proof of this is omitted for brevity, as are some other proofs; see [Frazier and Page, 1993; Frazier, 1993] for missing proofs.

94 Frazier
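In this string notation, one application of the recursive clause simply replaces a prefix β of an atom by γ, so the least Herbrand model of a recursive concept can be enumerated by prefix rewriting. A minimal sketch, with an illustrative choice of α = "fg", β = "f", γ = "fg", and e = "c" (so the concept is [P(f(g(c)))] ∧ [P(f(x)) → P(f(g(x)))] with atoms written as symbol strings):

```python
def generate(alpha, beta, gamma, e, max_atoms):
    """Enumerate atoms of { alpha e ; beta x -> gamma x } by forward
    chaining: whenever beta is a prefix of an atom, replacing that prefix
    by gamma yields another atom of the least Herbrand model."""
    atoms, frontier = set(), [alpha + e]      # start from the base atom
    while frontier and len(atoms) < max_atoms:
        atom = frontier.pop()
        if atom in atoms:
            continue
        atoms.add(atom)
        if atom.startswith(beta):             # the recursive clause applies
            frontier.append(gamma + atom[len(beta):])
    return atoms

print(sorted(generate("fg", "f", "fg", "c", 4)))
# -> ['fgc', 'fggc', 'fgggc', 'fggggc']
```

The prefix test `atom.startswith(beta)` is exactly the unification check described above: βx unifies with a ground atom precisely when β is a prefix of it.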
For the recursive clause to apply at all, α must unify with β, and for it to apply more than once, β must unify with γ. Hence a concept is non-trivial only if (1) either α < β or β < α, and (2) either β < γ or γ < β. In light of Fact 1, to show that the non-trivial concepts can be learned in polynomial time, we need only show that the class of non-trivial concepts can be partitioned into a polynomial number of concept classes, each of which can be learned in polynomial time. Therefore, we carve the class of non-trivial concepts into five different subclasses defined by the prefix relationships among α, β, and γ. The five possible sets of prefix relationships that can yield recursive concepts, based on our earlier discussion, are (1) α < β and β < γ, (2) α < β, γ < β, and α < γ, (3) α < β, γ < β, and γ < α, (4) β < α and β < γ, (5) β < α and γ < β. (We do not need to divide case (4) into two cases because the relationship between α and γ is insignificant here.) The approach for each subclass is similar: generalize the first positive example in such a way that the oracle is forced to provide a positive example containing whatever pieces of α, β, or γ are missing from the first example. For brevity, we present only the proof of the most difficult case.

β < α and γ < β    This class consists of concepts having the form

  { φωe
    φψx → φx }    (3)

Concepts of this form generate smaller and smaller atoms by deleting copies of ψ at the front of ω; if ψ is not a prefix of ω, the concept can delete only one copy of ψ. Any concept of this form has the least Herbrand model described by φψ^k ζe for 0 ≤ k ≤ n + 1, where ω = ψ^(n+1) ζ and ψ is not a prefix of ζ.

Lemma 2. This class can be learned in polynomial time.

Proof: To learn this class an algorithm needs to obtain an example that contains φ, ψ, ω, and ζ. It then must determine n. We give an algorithm that makes at most two equivalence queries to obtain φ, ψ, ω, and ζ. It then guesses larger and larger values for n until it guesses the correct value. This value of n is linearly related to the length of the base atom, so overall the algorithm takes polynomial time.

1. Conjecture the false concept to obtain counterexample p.

2. Dovetail the following algorithms.

   • Assuming p = φψ^j ζe for some j ≥ 1:
     (a) Select φ, ψ, ζ from p
     (b) Guess the value of n
     (c) Halt with output { φψ^(n+1) ζe ; φψx → φx }

   • Assuming p = φζe:
     (a) Select φ, ζ from p
     (b) Conjecture { φζe } to obtain counterexample p′. Note that p′ necessarily contains ψ as a substring.
     (c) Select ψ from p′
     (d) Guess n
     (e) Halt with output { φψ^(n+1) ζe ; φψx → φx }

When the algorithm selects substrings from a counterexample, it is in reality dovetailing all possible choices; nevertheless, we observe that there are only O(|p|^3) (respectively, O(|p′|)) choices to try. Similarly, when the algorithm guesses the value of n, it is actually making successively larger and larger guesses for n and testing whether each is correct with an equivalence query. It will obtain the correct value for n in time polynomial in the size of the target concept. At that point it outputs the necessarily correct concept and halts. □

It is worth noting that the algorithm above uses only two of the counterexamples it receives, though it typically makes more than two equivalence queries. This is the case with the algorithms for the other subclasses as well. It is also worth noting that when the algorithm above guesses the value of n, it is guessing the number of times the recursive clause is applied to generate the earliest generated atom of which either example is an instance. By the preceding arguments, we have Theorem 3.

Theorem 3. The concept class H1 is learnable.

4. Increasing Predicate Arity

It is often useful to have predicates of higher arity, but otherwise maintain the form of the concepts in H1. For example:

  { plus(x, 0, x)
    plus(x, y, z) → plus(x, s(y), s(z)) }

  { greater(s(x), 0)
    greater(x, y) → greater(s(x), s(y)) }

In this section we remove the requirement that predicates be unary.
Specifically, let H* be the result of allowing predicates of arbitrary arity but requiring functions to remain unary, with the additional restriction, which we define next, that variables be stationary. Notice that because functions are unary, each argument has at most one variable (it may have a constant instead), and that variable must be the last symbol of the argument. A concept meets the stationary variables restriction if for any variable x, if x appears in argument i of the consequent of the recursive clause then x also appears in argument i of the antecedent. This class does include the preceding arithmetic concepts built with the successor function and the constant 0, but does not include the concept [p(a, b, c)] ∧ [p(x, y, z) → p(z, x, y)], because variables "shift positions" in the recursive clause.

Unfortunately, we have the following result for the class H*.

Theorem 4. H* is not learnable, assuming R ≠ NP.5

The result follows from a proof that the consistency problem [Pitt and Valiant, 1988] for H* is NP-hard. Our conjecture is that the class is not even predictable (that is, learnable in terms of any other class), though this is an open question.

Nevertheless, we now show that if we fix the predicate arity to any integer k, then the resulting concept class Hk is learnable in terms of a slightly more general class, called Hk′, and is therefore predictable (the question of learnability of Hk in terms of Hk itself remains open). Concepts in Hk′ may be any union of a concept in Hk and two additional atoms built from variables, constants, unary functions, and predicates with arity at most k. An example is classified as positive by such a concept if and only if it is classified as positive by the concept in Hk or is an instance of one of the additional atoms. The learning algorithm is based on the learning algorithm for H1, and central to it are the following definition and lemma.
In the lemma, and afterward, we use G0 to denote the base atom and, inductively, G_{i+1} to denote the result of applying the recursive clause to G_i.

Definition 5. Let

  { p(α1 e1, ..., αk ek)
    p(β1 e1′, ..., βk ek′) → p(γ1 e1″, ..., γk ek″) }

be a concept in Hk. Then we say the subconcept at argument i, for 1 ≤ i ≤ k, of this concept is

  { αi ei
    βi ei′ → γi ei″ }

For example, the subconcepts at arguments 1 through 3, respectively, of the concept plus are:

  { x ; x → x },  { 0 ; y → s(y) },  and  { x ; z → s(z) }

Notice that an atom G_i = plus(t1, t2, t3) is the ith atom generated by the concept plus if and only if for all 1 ≤ k ≤ 3, the ith term generated by subconcept k of plus is t_k. This is the case because the argument positions in the definition of plus never disagree on the binding of a variable. Not all concepts in Hk have this property; concepts that do are said to be decoupled.

Lemma 6 (Decoupling Lemma). Let a concept in Hk be given, written as in Definition 5. For any 1 ≤ j ≤ k, if ej′ is a variable x, then for any n ≥ 2: if G_n = p(t1, ..., tk) unifies with p(β1 e1′, ..., βk ek′), the binding generated for x by this unification is the same as the binding generated for x by unifying t_j with βj x. Equivalently, for any 1 ≤ i ≠ j ≤ k, if ei′ and ej′ are the same variable x, then for any n ≥ 2: unifying t_i with βi x yields the same binding for x as does unifying t_j with βj x.

5That is, H* is not pac-learnable, and therefore is not learnable in polynomial time by equivalence queries. R is the class of problems that can be solved in random polynomial time.
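For a decoupled concept such as plus, the observation preceding the lemma can be checked mechanically: the ith generated atom's arguments coincide with the ith terms generated by the three subconcepts independently. A sketch (the string encoding of terms is an assumption of this illustration):

```python
def s(t):
    """Successor function symbol, as a string wrapper."""
    return f"s({t})"

def gen_atom(i):
    """G_i for plus: start from plus(x, 0, x) and apply the recursive
    clause plus(x, y, z) -> plus(x, s(y), s(z)) i times."""
    a1, a2, a3 = "x", "0", "x"
    for _ in range(i):
        a1, a2, a3 = a1, s(a2), s(a3)
    return (a1, a2, a3)

def gen_subconcept_term(k, i):
    """The ith term generated by subconcept k of plus, where the
    subconcepts are {x; x->x}, {0; y->s(y)}, and {x; z->s(z)}."""
    t = ["x", "0", "x"][k]
    for _ in range(i):
        t = t if k == 0 else s(t)    # subconcept 1 leaves its term unchanged
    return t

# Decoupling check: each atom is the tuple of its subconcepts' terms.
for i in range(4):
    assert gen_atom(i) == tuple(gen_subconcept_term(k, i) for k in range(3))
print(gen_atom(2))   # -> ('x', 's(s(0))', 's(s(x))')
```

For a non-decoupled concept this check would fail, because unifying the full atom with the antecedent can impose a binding that no single subconcept produces on its own.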
meeting the following condition: for all m > 0 and any variable x in the an- tecedent of the recursive clause, no two argument po- sitions impose different bindings on x when generating G73+1 from G,. In other words, c is decoupled, so the behavior of c can be understood as a simple compo- sition of the independent behaviors of the subconcepts at arguments 1 through k. As an example, the concept phs can be rewritten as the union of GO = phs(x, 0, x), 61 = plus(x, s(O), s(x)), and c = PWX, 44w, 444>> PWX, Y, 4 ---) PWX, S(Y), 44 Of course, the definition of plus is such that even without this rewriting, the concept is decoupled. Con- sider instead the concept P(Z, s(w)1 4 Pb:, 2, Y> --+ Pb +wm S(Y>> In generating G1 from Go, the first argument binds x to x while the second binds x to s(w), and the third binds y to z. Thus the concept is not decou- pled. The result is that x and y are bound to s(w), so G1 is p(s(w), s(s(s(O))), s(s(w))). Furthermore, in generating G2 from Gr, the first argument binds x to s(w), while the second binds x to s(s(s(O))), and the third binds y to s(s(w)). The result is that x is bound to s(s(s(O))), and y is bound to s(s(s(s(0)))). (But from this point on, the first and second argu- ments always agree on the binding for x.) We would like the concept, instead, to be such that the bindings 96 Frazier generated by the arguments are independent, that is, the concept is decoupled. The following, equivalent concept in ?&’ has this property: Gs = p(z, s(w), z), GI = ~(W,4449)), 44.4, and 8 = { PWWh SWW) 9 fGMs(s(O)))))) P(X, x, Y) + P(X, 2, S(Y)> These observations motivate an algorithm that learns ?& in terms of %k’ ‘and is therefore a predic- tion algorithm for xk. Theorem 7 For any constant k, the class 7-11, is pre- dictable from equivalence queries alone. Proofi (Sketch) Any target concept C is equivalent to some concept in tik,’ that consists of Go, 61 and C = that is only polynomially larger than C, where C is decoupled. 
Our algorithm will obtain such an equiv- alent concept. Go, G1, . . . C generates some sequence of atoms (this sequence may or may not be finite). Notice that the subconcepts, Sl, . . . . Sk, of 6 are, re- spectively: { we1 Plel, --) yle:’ . . . Let $i,j be the jth term generated by subconcept i. Then because 6’ is decoupled, for all j 2 0 we have Gj = P(gl,j, --*) gk,j)- At the highest level of description, the prediction al- gorithm poses equivalence queries in such a way that it obtains, as examples, instances of GO, 61, and Gi and Gj for distinct i and j. The algorithm determines Go, 61, Gi, and Gj from their examples, and it determines 6 from Gi and Gj. To determine 6 the algorithm uses the learning algorithm for 3c1 to learn the subconcepts Sl , . .., Sk of C. The only subtlety is that some sub- concept Si may be equivalent to a conjunction of two terms, and the learning algorithm for 3-11 might return such a concept. Such a concept cannot serve as a sub- concept for a member of %k. But it is straightforward to verify that the only case in which this can occur is the case in which all $i,j, j > 1, are identical. And the examples the learning algorithm receives do not include gi,o (it receives only examples extracted from 62, Gs, . ..). Therefore, if the concept returned by the learning algorithm is a conjunction of at most two atoms, it is in fact a single atom-gi,l (= $i,j , for all j 2 1)-in which case we may use the concept in ‘Ml whose base atom, recursive clause antecedent, and recursive clause consequent are all gs,l . We now fill in the details of the algorithm. The algorithm begins by conjecturing the empty the- ory, and it necessarily receives a positive counterexam- ple in response. This counterexample is an instance of some more general atom, Al, that is either Go, G1, or some Gj. The algorithm guesses Al and guesses - whether Al is Go, G1, or some Gi. 
It then conjectures A1 in an equivalence query and, if A1 is not the target, necessarily receives another positive example. (As earlier, by "guess" we mean that the algorithm dovetails the possible choices.) This example is also an instance of G0, G1, or some Ĝi, but it is not an instance of A1. Again, the algorithm guesses that atom, call it A2, and guesses whether it is an instance of G0, G1, or some Ĝi. Following the second example, and following each new example thereafter, the algorithm has at least two of G0, G1, Ĝi, and Ĝj (some i ≠ j). It conjectures the union of those that it has, with the following exception: if it has both Ĝi and Ĝj (any i ≠ j), it uses a guess of Ĉ in place of Ĝi and Ĝj. Again, in response to such a conjecture, either the algorithm is correct or it receives a positive counterexample. It remains only to show how (1) the atoms G0, G1, Ĝi, and Ĝj are "efficiently guessed", and (2) Ĉ is "efficiently guessed" from Ĝi and Ĝj.

Given any example that is an instance of G0, the number of possibilities for G0 is small (because no function has arity greater than 1). For example, if A is plus(s(s(s(0))), s(0), s(s(s(s(0))))), then the first argument of G0 must be s(s(s(0))), s(s(s(x))), s(s(x)), s(x), or x. Similarly, the other arguments of G0 must be generalizations, or prefixes, of the other arguments of A. Finally, there are at most 5 possible choices of variable co-references, or patterns of variable names, for the three arguments (all three arguments end in the same variable, all three end in different variables, etc.). By this reasoning, there are fewer than O((2|A|)^k) generalizations of A to consider, all of which can be tried in parallel. Note that, because k is fixed, the number of possible generalizations is polynomial in the size of the example. G1, Ĝi, and Ĝj are efficiently guessed in the same way.
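The counting argument above is easy to reproduce. The per-argument figures follow from the argument depths in the example atom A, and the factor of 5 for variable co-reference patterns over three arguments is the bound stated in the text; the string encoding below is an illustrative assumption.

```python
def generalizations(depth):
    """Candidate generalizations of the ground term s^depth(0): the term
    itself, or the term cut at depth j and ended in a variable, s^j(x),
    for each j = 0..depth."""
    ground = "s(" * depth + "0" + ")" * depth
    return [ground] + ["s(" * j + "x" + ")" * j for j in range(depth + 1)]

# A = plus(s(s(s(0))), s(0), s(s(s(s(0))))): the arguments have depths 3, 1, 4.
choices = [len(generalizations(d)) for d in (3, 1, 4)]
print(choices)                                     # -> [5, 3, 6]
# One choice per argument, times the (at most 5) co-reference patterns:
print(choices[0] * choices[1] * choices[2] * 5)    # -> 450 candidate base atoms
```

The 5 choices for the first argument are exactly those listed in the text: s(s(s(0))), s(s(s(x))), s(s(x)), s(x), and x. Since each argument contributes at most |A| + 1 choices, the total stays polynomial for fixed k.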
To learn Ĉ, the algorithm first determines the "high-level structure" of Ĉ; specifically, it guesses which of e1, ..., ek, e1′, ..., ek′, e1″, ..., ek″ are variables, and which of these are the same variable, e.g., that e1″ and e2″ are the same variable x. (In other words, it guesses variable co-references; since Ĉ is decoupled, these are only important for ensuring proper co-references in generated atoms that contain variables.) There are fewer than k^{3k} possibilities, where k is fixed. The algorithm then is left with the task of precisely determining the subconcepts S1, ..., Sk of Ĉ. Because it has two examples of distinct atoms generated by Ĉ, namely Ĝi and Ĝj, it has two examples of each subconcept. For example, where Ĝj is p(g_{1,j}, ..., g_{k,j}), for each 1 ≤ i ≤ k we know g_{i,j} is an example of subconcept Si. The algorithm uses the learning algorithm for H1 to learn the subconcepts from these examples, with the following slight modification. While the learning algorithm for H1 would ordinarily conjecture a concept in H1, the present learning algorithm must conjecture a concept in Hk′. Therefore, the algorithm conjectures every concept in Hk′ that results from any combination of conjectures, by the learning algorithm for H1, for the subconcepts; that is, it tries all combinations of subconcepts. Because k is fixed, the number of such combinations is polynomial in the sizes of the counterexamples seen thus far.

Finally, recall that for concepts having the form of concept (3) the learning algorithm for H1 must guess the value for n, where n is the number of times the recursive clause was applied to generate the earlier of the two examples it has. Therefore, the present learning algorithm may have to guess n. (The n sought is necessarily the same for all subconcepts. Thus only one n needs to be found.)
This is handled by initially guessing n = 1 and guessing successively higher values of n until the correct n is reached. This approach succeeds provided that the target truly contains a type 3 subconcept. In the case that the target does not contain a type 3 subconcept, the potentially non-terminating, errant search for a non-existent n halts because we are interleaving the steps from all pending guesses of all forms, including the correct (not type 3) form, of the subconcept. □

5. Related Work

The concept classes studied in this paper are incomparable to (that is, neither subsume nor are subsumed by) the other classes of definite clause theories whose learnability, in the PAC-learning and/or equivalence query models, has been investigated [Džeroski et al., 1992; Page and Frisch, 1992].⁶ Page and Frisch investigated classes of definite clauses that may have predicates and functions of arbitrary arity but explicitly do not have recursion [Page and Frisch, 1992]. In that work, a background theory was also allowed; allowing such a theory in the present work is an interesting topic for future research. Džeroski, Muggleton, and Russell [Džeroski et al., 1992] investigated the learnability of classes of function-free determinate k-clause definite clause theories under simple distributions, also in the presence of a background theory.⁷ This class includes recursive concepts; to learn recursive concepts, the algorithm requires two additional kinds of queries (existential queries and membership queries). Rewriting definite clause theories that contain functions to function-free clauses allows their algorithm to learn in the presence of functions.
⁶Theoretical work on the learnability of definite clauses in other models includes Shapiro's [Shapiro, 1983] work on learning in the limit, Angluin, Frazier, and Pitt's [Angluin et al., 1992] work with propositional definite clause theories, and Ling's [Ling, 1992] investigation of learning from good examples.

⁷The restriction to simple distributions is a small one that quite possibly can be removed.

Nevertheless, the restriction that clauses be determinate effectively limits the depth of function nesting; their algorithm takes time exponential in this depth. So, for example, while the algorithm can easily learn the concept even integer, or multiple of 2, from H1 ([even(0)] ∧ [even(x) → even(s(s(x)))]), the time it requires grows exponentially in moving to a concept such as multiple of 10 or multiple of 1000, also in H1. It is easy to show that the classes H1, Hk, and H∗, rewritten to be function-free, are not determinate.

References

Angluin, D. (1988). Queries and concept learning. Machine Learning, 2:319-342.

Angluin, D., Frazier, M., and Pitt, L. (1992). Learning conjunctions of Horn clauses. Machine Learning, 9:147-164.

Arimura, H., Ishizaka, H., and Shinohara, T. (1992). Polynomial time inference of a subclass of context-free transformations. In COLT-92, pages 136-143, New York. The Association for Computing Machinery.

Cohen, W. W. and Hirsh, H. (1992). Learnability of description logics. In COLT-92, pages 116-127, New York. The Association for Computing Machinery.

Džeroski, S., Muggleton, S., and Russell, S. (1992). PAC-learnability of logic programs. In COLT-92, pages 128-135, New York. The Association for Computing Machinery.

Frazier, M. (1993). PhD thesis (forthcoming), University of Illinois at Urbana-Champaign.

Frazier, M. and Page, C. D. (1993). Learnability of recursive, non-determinate theories: Some basic results and techniques. Submitted to the Third International Workshop on Inductive Logic Programming (ILP-93).

Ling, C. X. (1992). Logic program synthesis from good examples. In Muggleton, S. H., editor, Inductive Logic Programming, pages 113-129. Academic Press, London.

Muggleton, S. H. (1992). Inductive logic programming. In Muggleton, S. H., editor, Inductive Logic Programming, pages 3-27. Academic Press, London.

Page, C. D. and Frisch, A. M. (1992). Generalization and learnability: A study of constrained atoms. In Muggleton, S. H., editor, Inductive Logic Programming, pages 29-61. Academic Press, London.

Pitt, L. and Valiant, L. G. (1988). Computational limitations on learning from examples. Journal of the ACM, 35(4):965-984.

Shapiro, E. Y. (1983). Algorithmic Program Debugging. MIT Press, Cambridge, MA.

Valiant, L. G. (1984). A theory of the learnable. Communications of the ACM, 27(11):1134-1142.

98 Frazier
Complexity Analysis of Real-Time Reinforcement Learning

Sven Koenig and Reid G. Simmons
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213-3891
skoenig@cs.cmu.edu, reids@cs.cmu.edu

Abstract

This paper analyzes the complexity of on-line reinforcement learning algorithms, namely asynchronous real-time versions of Q-learning and value-iteration, applied to the problem of reaching a goal state in deterministic domains. Previous work had concluded that, in many cases, tabula rasa reinforcement learning was exponential for such problems, or was tractable only if the learning algorithm was augmented. We show that, to the contrary, the algorithms are tractable with only a simple change in the task representation or initialization. We provide tight bounds on the worst-case complexity, and show how the complexity is even smaller if the reinforcement learning algorithms have initial knowledge of the topology of the state space or the domain has certain special properties. We also present a novel bi-directional Q-learning algorithm to find optimal paths from all states to a goal state and show that it is no more complex than the other algorithms.

Introduction

Consider the problem for an agent of finding its way to one of a set of goal locations, where actions consist of moving from one intersection (state) to another (see Figure 1). Initially, the agent has no knowledge of the topology of the state space. We consider two different tasks: reaching any goal state and determining shortest paths from every state to a goal state.

Off-line search methods, which first derive a plan that is then executed, cannot be used to solve the path planning tasks, since the topology of the state space is initially unknown to the agent and can only be discovered by exploring: executing actions and observing their effects. Thus, the path planning tasks have to be solved on-line.
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

On-line search methods, also called real-time search methods [Korf, 1990], interleave search with action execution. The algorithms we describe here perform minimal computation between action executions, choosing only which action to execute next, and basing this decision only on information local to the current state of the agent (and perhaps its immediate successor states).

*This research was supported in part by NASA under contract NAGW-1175. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of NASA or the U.S. government.

Figure 1: Navigating on a map

In particular, we will investigate a class of algorithms which perform reinforcement learning. The application of reinforcement learning to on-line path planning problems has been studied by [Barto et al., 1991], [Benson and Prieditis, 1992], [Pemberton and Korf, 1992], [Moore and Atkeson, 1992], and others. [Whitehead, 1991] showed that reaching a goal state with reinforcement learning methods can require a number of action executions that is exponential in the size of the state space. [Thrun, 1992] has shown that by augmenting reinforcement learning algorithms, the problem can be made tractable. In fact, we will show that, contrary to prior belief, reinforcement learning algorithms are tractable without any need for augmentation, i.e., their run-time is a small polynomial in the size of the state space. All that is necessary is a change in the way the state space (task) is represented.

In this paper, we use the following notation. S denotes a finite set of states, and G ⊆ S is the non-empty set of goal states. A(s) is the finite set of actions that can be executed in s ∈ S. The size of the state space is n := |S|, and the total number of actions is e := Σ_{s∈S} |A(s)|. All actions are deterministic. succ(s, a) is the uniquely determined successor state when a ∈ A(s) is executed in state s. The state space is strongly connected, i.e., every state can be reached from every other state. gd(s) denotes the goal distance of s, i.e., the smallest number of action executions required to reach a goal state from s. We assume that the state space is totally observable¹, i.e., the agent can determine its current state with certainty, including whether it is currently in a goal state.

Formally, the results of the paper are as follows. If a good task representation or suitable initialization is chosen, the worst-case complexity of reaching a goal state has a tight bound of O(n³) action executions for Q-learning and O(n²) action executions for value-iteration. If the agent has initial knowledge of the topology of the state space or the state space has additional properties, the O(n³) bound can be decreased further. In addition, we show that reinforcement learning methods for finding shortest paths from every state to a goal state are no more complex than reinforcement learning methods that simply reach a goal state from a single state. This demonstrates that one does not need to augment reinforcement learning algorithms to make them tractable.

Reinforcement Learning

Reinforcement learning is learning from (positive and negative) rewards. Every action a ∈ A(s) has an immediate reward r(s, a) ∈ R that is obtained when the agent executes the action. If the agent starts in s ∈ S and executes actions for which it receives immediate reward r_t at step t ∈ N₀, then the total reward that the agent receives over its lifetime for this particular behavior is

U(s) := Σ_{t=0}^{∞} γ^t r_t     (1)

where γ ∈ (0, 1] is called the discount factor. If γ < 1, we say that discounting is used; otherwise no discounting is used.

Reinforcement learning algorithms find a behavior for the agent that maximizes the total reward for every possible start state. We analyze two reinforcement learning algorithms that are widely used, namely Q-learning [Watkins, 1989] and value-iteration [Bellman, 1957]. One can interleave them with action execution to construct asynchronous real-time forms that use actual state transitions rather than systematic or asynchronous sweeps over the state space. In the following, we investigate these on-line versions: 1-step Q-learning and 1-step value-iteration.

¹[Papadimitriou and Tsitsiklis, 1987] state results about the worst-case complexity of every algorithm for cases where the states are partially observable or unobservable.
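For concreteness, the total-reward formula for a finite lifetime can be evaluated directly. This is a toy illustration of ours, not code from the paper:

```python
# Total (possibly discounted) reward of a finite reward sequence,
# U(s) = sum over t of gamma**t * r_t.

def total_reward(rewards, gamma=1.0):
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Four steps to the goal, each with reward -1 (action-penalty style):
penalty = total_reward([-1, -1, -1, -1])        # undiscounted: -4
# Reward 1 on entering the goal at step 4, gamma = 0.5:
reward = total_reward([0, 0, 0, 1], gamma=0.5)  # 0.5 ** 3 = 0.125
```

Note how the discounted case yields γ^{gd(s)-1} for a state at goal distance 4, matching the analysis of the goal-reward representation below.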
Reinforcement learning algorithms find a behavior for the agent that maximizes the total reward for ev- ery possible start state. We analyze two reinforcement learning algorithms that are widely used, namely Q- learning [Watkins, 19891 and value-iteration [Bellman, 19571. One can interleave them with action execution to construct asynchronous real-time forms that use ac- tual state transitions rather than systematic or asyn- chronous sweeps over the state space. In the following, we investigate these on-line versions: l-step Q-learning and l-step value-iteration. ‘[Papadimitriou and Tsitsikhs, 19871 state results about the worst-case complexity of every algorithm for cases where the states are partially observable or unobservable. 1. Set s := the current state. 2. If s E G, then stop. 3. Select an action a E A(s). 4. Execute action a. 5. 6. /* As a consequence, the agent receives reward r(s, a) and is in state SUCC(S, a). Increment the num- ber of steps taken, i.e. set t := t + 1. */ Set Q(s, a) := r(.s, u) + yU(succ(3, a)). Go to 1. where U(s) := max@=A(s) Q(s, a) at every point in time. Figure 2: The (l-step) Q-learning algorithm Q-Learning The l-step Q-learning algorithm2 [Whitehead, 19911 (Figure 2) stores information about the relative good- ness of the actions in the states. This is done by maintaining a value Q(s, a) in state s for every action a E A(s). Q(s, a) approximates the optimal total re- ward received if the agent starts in s, executes a, and then behaves optimally. The action selection step (line 3) implements the ex- ploration rule (which state to go to next). It is allowed to look only at information local to the current state s. This includes the Q-values for all a E A(s). The actual selection strategy is left open: It could, for ex- ample, select an action randomly, select the action that it has executed the least number of times, or select the action with the largest Q-value. 
Exploration is termed undirected [Thrun, 1992] if it uses only the Q-values; otherwise it is termed directed. After the action execution step (line 4) has executed the selected action a, the value update step (line 5) adjusts Q(s, a) (and, if needed, other information local to the former state). The 1-step look-ahead value r(s, a) + γ U(succ(s, a)) is more accurate than, and therefore replaces, Q(s, a).

Value-Iteration

The 1-step value-iteration algorithm is similar to the 1-step Q-learning algorithm. The difference is that the action selection step can access r(s, a) and U(succ(s, a)) for every action a ∈ A(s) in the current state s, whereas Q-learning has to estimate them with the Q-values. The value update step becomes "Set U(s) := max_{a∈A(s)}(r(s, a) + γ U(succ(s, a)))".

Whereas Q-learning does not know the effect of an action before it has executed it at least once, value-iteration only needs to enter a state at least once to discover all of its successor states. Since value-iteration is more powerful than Q-learning, we expect it to have a smaller complexity.

²Since the actions have deterministic outcomes, we state the Q-learning algorithm with the learning rate α set to one.

100 Koenig

Task Representation

To represent the task of finding shortest paths as a reinforcement learning problem, we have to specify the reward function r. We let the lifetime of the agent in formula (1) end when it reaches a goal state. Then, the only constraint on r is that it must guarantee that a state with a smaller goal distance has a larger optimal total reward and vice versa. We consider two possible reward functions with this property.

In the goal-reward representation, the agent is rewarded for entering a goal state, but not rewarded or penalized otherwise. This representation has been used by [Whitehead, 1991], [Thrun, 1992], [Peng and Williams, 1992], and [Sutton, 1990], among others.
r(s, a) := 1 if succ(s, a) ∈ G, and 0 otherwise.

The optimal total discounted reward of s ∈ S − G := {s ∈ S : s ∉ G} is γ^{gd(s)−1}. If no discounting is used, then the optimal total reward is 1 for every s ∈ S − G, independent of its goal distance. Thus, discounting is necessary so that larger optimal total rewards equate with shorter goal distances.

In the action-penalty representation, the agent is penalized for every action that it executes, i.e., r(s, a) = −1. This representation has a more dense reward structure than the goal-reward representation (i.e., the agent receives non-zero rewards more often) if goals are relatively sparse. It has been used by [Barto et al., 1989], [Barto et al., 1991], and [Koenig, 1991], among others. The optimal total discounted reward of s ∈ S is (1 − γ^{gd(s)})/(γ − 1). Its optimal total undiscounted reward is −gd(s). Note that discounting can be used with the action-penalty representation, but is not necessary.

Complexity of Reaching a Goal State

We can now analyze the complexity of reinforcement learning algorithms for the path planning tasks. We first analyze the complexity of reaching a goal state for the first time. The worst-case complexity of reaching a goal state with reinforcement learning (and stopping there) provides a lower bound on the complexity of finding all shortest paths, since this cannot be done without knowing where the goal states are. By "worst case" we mean an upper bound on the total number of steps for a tabula rasa (initially uninformed) algorithm that holds for all possible topologies of the state space, start and goal states, and tie-breaking rules among actions that have the same Q-values.

Assume that a Q-learning algorithm is zero-initialized (all Q-values are zero initially) and operates on the goal-reward representation. Note that the first Q-value that changes is the Q-value of the action that leads the agent to a goal state.
For all other actions, no information about the topology of the state space is remembered and all Q-values remain zero. Since the action selection step has no information on which to base its decision if it performs undirected exploration, the agent has to choose actions according to a uniform distribution and thus performs a random walk. Then, the agent reaches a goal state eventually, but the average number of steps required can be exponential in n, the number of states [Whitehead, 1991].

This observation motivated [Whitehead, 1991] to explore cooperative reinforcement learning algorithms in order to decrease the worst-case complexity. [Thrun, 1992] showed that even non-cooperative algorithms have polynomial worst-case complexity if reinforcement learning is augmented with a directed exploration mechanism ("counter-based Q-learning"). We will show that one does not need to augment Q-learning: it is tractable if one uses either the action-penalty representation or different initial Q-values.

Using a Different Task Representation

Assume we are still using a zero-initialized Q-learning algorithm, but let it now operate on the action-penalty representation. Although the algorithm is still tabula rasa, the Q-values change immediately, starting with the first action execution, since the reward structure is dense. In this way, the agent remembers the effects of previous action executions. We address the case in which no discounting is used, but the theorems can easily be adapted to the discounted case. [Koenig and Simmons, 1992] contains the proofs, additional theoretical and empirical results, and examples.

Definition 1 Q-values are called consistent iff, for all s ∈ G and a ∈ A(s), Q(s, a) = 0, and, for all s ∈ S − G and a ∈ A(s), −1 + U(succ(s, a)) ≤ Q(s, a) ≤ 0.
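Definition 1 is easy to restate operationally. The following check is our own encoding (succ maps each state's actions to successor states); it is not part of the paper:

```python
# Checking consistency of a Q-table in the sense of Definition 1:
# Q(s, a) = 0 at goal states, and -1 + U(succ(s, a)) <= Q(s, a) <= 0
# elsewhere, where U(s) = max over a of Q(s, a).

def is_consistent(Q, succ, goals):
    U = {s: max(Q[s].values()) for s in Q}
    for s in Q:
        for a, q in Q[s].items():
            if s in goals:
                if q != 0:
                    return False
            elif not (-1 + U[succ[s][a]] <= q <= 0):
                return False
    return True
```

Zero-initialized Q-values are trivially consistent, which is why the admissibility results below apply to the tabula rasa algorithm.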
Definition 2 An undiscounted Q-learning algorithm with action-penalty representation is called admissible³ iff its initial Q-values are consistent and its action selection step is "a := argmax_{a′∈A(s)} Q(s, a′)".

If a Q-learning algorithm is admissible, then its Q-values remain consistent and are monotonically decreasing. Lemma 1 contains the central invariant for all proofs. It states that the number of steps executed so far is always bounded by an expression that depends only on the initial and current Q-values and, moreover, that "the sum of all Q-values decreases (on average) by one for every step taken" (this paraphrase is grossly simplified). A time superscript of t in Lemmas 1 and 2 refers to the values of the variables immediately before executing the action during step t.

Lemma 1 For all steps t ∈ N₀ (until termination) of an undiscounted, admissible Q-learning algorithm with action-penalty representation,

U^t(s^t) + Σ_{s∈S} Σ_{a∈A(s)} Q^0(s, a) − t ≥ Σ_{s∈S} Σ_{a∈A(s)} Q^t(s, a) + U^0(s^0) − loop^t

and loop^t ≤ Σ_{s∈S} Σ_{a∈A(s)} (Q^0(s, a) − Q^t(s, a)), where loop^t := |{t′ ∈ {0, ..., t−1} : s^{t′} = s^{t′+1}}| (the number of actions executed before t that do not change the state).

³If the value update step is changed to "Set Q(s, a) := min(Q(s, a), −1 + γ U(succ(s, a)))", then the initial Q-values need only satisfy that, for all s ∈ G and a ∈ A(s), Q(s, a) = 0, and, for all s ∈ S − G and a ∈ A(s), −1 − gd(succ(s, a)) ≤ Q(s, a) ≤ 0. Note that Q(s, a) = −1 − h(succ(s, a)) has this property, where h is an admissible heuristic for the goal distance.

Lemma 2 An undiscounted, admissible Q-learning algorithm with action-penalty representation reaches a goal state and terminates after at most

2 × Σ_{s∈S−G} Σ_{a∈A(s)} (Q^0(s, a) + gd(succ(s, a)) + 1) − U^0(s^0)

steps.

Theorem 1 An admissible Q-learning algorithm with action-penalty representation reaches a goal state and terminates after at most O(en) steps.
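For the zero-initialized case (Q⁰ = 0, hence U⁰(s⁰) = 0), the Lemma 2 bound reduces to 2 × Σ_{s∈S−G} Σ_{a∈A(s)} (gd(succ(s, a)) + 1), which can be computed directly from the graph. The sketch below is ours; goal distances are obtained by breadth-first search over reversed edges:

```python
from collections import deque

def goal_distances(succ, goals):
    """gd(s) for every state, by breadth-first search backwards from the goals."""
    pred = {s: set() for s in succ}
    for s in succ:
        for t in succ[s].values():
            pred[t].add(s)
    gd, frontier = {g: 0 for g in goals}, deque(goals)
    while frontier:
        t = frontier.popleft()
        for s in pred[t]:
            if s not in gd:
                gd[s] = gd[t] + 1
                frontier.append(s)
    return gd

def lemma2_bound_zero_init(succ, goals):
    """Step bound of Lemma 2 for zero-initialized Q-values."""
    gd = goal_distances(succ, goals)
    return 2 * sum(gd[succ[s][a]] + 1
                   for s in succ if s not in goals
                   for a in succ[s])

chain = {"s0": {"right": "s1"},
         "s1": {"left": "s0", "right": "s2"},
         "s2": {"left": "s1", "right": "s3"},
         "s3": {"left": "s2"}}
bound = lemma2_bound_zero_init(chain, {"s3"})   # 2 * (3 + 4 + 2 + 3 + 1) = 26
```

Since every gd value is at most n − 1, the bound is at most 2en, which is exactly how Theorem 1 follows.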
Lemma 2 utilizes the invariant and the fact that each of the e different Q-values is bounded by an expression that depends only on the goal distances to derive a bound on t. Since the sum of the Q-values decreases with every step, but is bounded from below, the algorithm must terminate. Because the shortest distance between any two different states (in a strongly connected graph) is bounded by n − 1, the result of Theorem 1 follows directly. Note that Lemma 2 also shows how prior knowledge of the topology of the state space (in the form of suitable initial Q-values) makes the Q-learning algorithm better informed and decreases its run-time.

If a state space has no duplicate actions, then e ≤ n² and the worst-case complexity becomes O(n³). This provides an upper bound on the complexity of the Q-learning algorithm. To demonstrate that this bound is tight for a zero-initialized Q-learning algorithm, we show that O(n³) is also a lower bound: Figure 3 shows a state space where at least 1/6 n³ − 1/6 n steps may be needed to reach the goal state. To summarize, although Q-learning performs undirected exploration, its worst-case complexity is polynomial in n. Note that Figure 3 also shows that every algorithm that does not know the effect of an action before it has executed it at least once has the same big-O worst-case complexity as zero-initialized Q-learning.

Using Different Initial Q-values

We now analyze Q-learning algorithms that operate on the goal-reward representation, but where all Q-values are initially set to one. A similar initialization has been used before in experiments conducted by [Kaelbling, 1990].
Figure 3: A worst-case example

If the action selection strategy is to execute the action with the largest Q-value, then a discounted, one-initialized Q-learning algorithm with goal-reward representation behaves identically to a zero-initialized Q-learning algorithm with action-penalty representation if all ties are broken in the same way.⁴ Thus, the complexity result of the previous section applies and a discounted, one-initialized Q-learning algorithm with goal-reward representation reaches a goal state and terminates after at most O(en) steps.

Gridworlds

We have seen that we can decrease the complexity of Q-learning dramatically by choosing a good task representation or suitable initial Q-values. Many domains studied in the context of reinforcement learning have additional properties that can decrease the worst-case complexity even further. For example, a state space topology has a linear upper action bound b ∈ N₀ iff e ≤ bn for all n ∈ N₀. Then, the worst-case complexity becomes O(bn²) = O(n²). Gridworlds, which have often been used in studying reinforcement learning [Barto et al., 1989] [Sutton, 1990] [Peng and Williams, 1992] [Thrun, 1992], have this property. Therefore, exploration in unknown gridworlds actually has very low complexity.

Gridworlds often have another special property. A state space is called 1-step invertible [Whitehead, 1991] iff it has no duplicate actions and, for all s ∈ S and a ∈ A(s), there exists an a′ ∈ A(succ(s, a)) such that succ(succ(s, a), a′) = s. (We do not assume that the agent knows that the state space is 1-step invertible.) Even a zero-initialized Q-learning algorithm with goal-reward representation (i.e., a random walk) is tractable for 1-step invertible state spaces, as the following theorem states.

Theorem 2 A zero-initialized Q-learning algorithm with goal-reward representation reaches a goal state and terminates in at most O(en) steps on average if the state space is 1-step invertible.
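Theorem 2 can be made concrete on the canonical 1-step invertible space, a one-dimensional gridworld. The expected number of random-walk steps h(s) satisfies h(s) = 1 + (mean of h over the neighbours), a linear system that a simple fixed-point iteration solves. The solver choice is ours; the paper measures these averages empirically:

```python
# Expected steps of a random walk (i.e., zero-initialized goal-reward
# Q-learning) in a one-dimensional gridworld with states 0..n-1 and
# goal n-1, starting at state 0. Iterates h <- 1 + P h to the fixed
# point; the answer is (n-1)**2, well below the O(en) bound of
# Theorem 2 (here e = 2(n-1)).

def gridworld_random_walk_steps(n, iters=50000):
    h = [0.0] * n                        # h[n-1] stays 0 at the goal
    for _ in range(iters):
        new = [0.0] * n
        for i in range(n - 1):
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
            new[i] = 1.0 + sum(h[j] for j in nbrs) / len(nbrs)
        h = new
    return h[0]
```

For n = 6 this converges to 25 = (n − 1)², the figure quoted for one-dimensional gridworlds in the experimental section.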
This theorem is an immediate corollary to [Aleliunas et al., 1979]. If the state space has no duplicate actions, then the worst-case complexity becomes O(n³). This bound is tight. Thus, the average-case complexity of a random walk in 1-step invertible state spaces is polynomial (and no longer exponential) in n. For 1-step invertible state spaces, however, there are tabula rasa on-line algorithms that have a smaller big-O worst-case complexity than Q-learning [Deng and Papadimitriou, 1990].

⁴This is true only for the task of reaching a goal state. In general, a discounted, "one-initialized" Q-learning algorithm with goal-reward representation behaves identically to a "(minus one)-initialized" Q-learning algorithm with action-penalty representation if in both cases the Q-values of actions in goal states are initialized to zero.

Determining Optimal Policies

We now consider the problem of finding shortest paths from all states to a goal state.

Initially, Qf(s, a) = Qb(s, a) = 0 and done(s, a) = false for all s ∈ S and a ∈ A(s).

1. Set s := the current state.
2. If s ∈ G, then set done(s, a) := true for all a ∈ A(s).
3. If done(s) = true, then go to 8.
4. /* forward step */ Set a := argmax_{a′∈A(s)} Qf(s, a′).
5. Execute action a. (As a consequence, the agent receives reward −1 and is in state succ(s, a).)
6. Set Qf(s, a) := −1 + Uf(succ(s, a)) and done(s, a) := done(succ(s, a)).
7. Go to 1.
8. /* backward step */ Set a := argmax_{a′∈A(s)} Qb(s, a′).
9. Execute action a. (As a consequence, the agent receives reward −1 and is in state succ(s, a).)
10. Set Qb(s, a) := −1 + Ub(succ(s, a)).
11. If Ub(s) ≤ −n, then stop.
12. Go to 1.

where, at every point in time, Uf(s) := max_{a∈A(s)} Qf(s, a), Ub(s) := max_{a∈A(s)} Qb(s, a), and done(s) := ∃_{a∈A(s)} (Qf(s, a) = Uf(s) ∧ done(s, a)).

Figure 4: The bi-directional Q-learning algorithm
We present a novel extension of the Q-learning algorithm that determines the goal distance of every state and has the same big-O worst-case complexity as the algorithm for finding a single path to a goal state. This produces an optimal deterministic policy, in which the optimal behavior is obtained by always executing the action that decreases the goal distance.

The algorithm, which we term the bi-directional Q-learning algorithm, is presented in Figure 4. While the complexity results presented here are for the undiscounted, zero-initialized version with action-penalty representation, we have derived similar results for all of the previously described alternatives. The bi-directional Q-learning algorithm iterates over two independent Q-learning searches: a forward phase that uses Qf-values to search for a state s with done(s) = true from a state s′ with done(s′) = false, followed by a backward phase that uses Qb-values to search for a state s with done(s) = false from a state s′ with done(s′) = true. The forward and backward phases are implemented using the Q-learning algorithm from Figure 2.

The variables done(s) have the following semantics: if done(s) = true, then Uf(s) = −gd(s) (but not necessarily the other way around). Similarly for the variables done(s, a) for s ∈ S − G: if done(s, a) = true, then Qf(s, a) = −1 − gd(succ(s, a)). If the agent executes a in s and done(succ(s, a)) = true, then it can set done(s, a) to true. Every forward phase sets at least one additional done(s, a) value to true and then transfers control to the backward phase, which continues until a state s with done(s) = false is reached, so that the next forward phase can start. After at most e forward phases, done(s) = true for all s ∈ S. Then, the backward phase can no longer find a state s with done(s) = false and decreases the Ub-values beyond every limit. When a Ub-value reaches or drops below −n, the agent can infer that an optimal policy has been found and may terminate.
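Figure 4 can be turned into a compact sketch. The dict-based encoding, greedy tie-breaking by action order, and the safety step cap are our assumptions, not the paper's:

```python
# Sketch of the bi-directional Q-learning algorithm of Figure 4
# (undiscounted, zero-initialized, action-penalty representation).

def bidirectional_q(succ, goals, start, max_steps=100000):
    Qf = {s: {a: 0.0 for a in succ[s]} for s in succ}
    Qb = {s: {a: 0.0 for a in succ[s]} for s in succ}
    done_sa = {s: {a: False for a in succ[s]} for s in succ}
    n = len(succ)

    def Uf(s): return max(Qf[s].values())
    def Ub(s): return max(Qb[s].values())
    def done(s):  # done(s) := exists a with Qf(s,a) = Uf(s) and done(s,a)
        return any(Qf[s][a] == Uf(s) and done_sa[s][a] for a in succ[s])

    s, steps = start, 0
    while steps < max_steps:
        if s in goals:                       # step 2
            for a in succ[s]:
                done_sa[s][a] = True
        if not done(s):                      # forward step (lines 4-7)
            a = max(Qf[s], key=Qf[s].get)
            t = succ[s][a]
            Qf[s][a] = -1.0 + Uf(t)
            done_sa[s][a] = done(t)
        else:                                # backward step (lines 8-12)
            a = max(Qb[s], key=Qb[s].get)
            t = succ[s][a]
            Qb[s][a] = -1.0 + Ub(t)
            if Ub(s) <= -n:                  # step 11: stop
                return Qf, steps + 1
        s, steps = t, steps + 1
    return Qf, steps

# A three-state chain s0 <-> s1 <-> s2 with goal s2:
chain = {"s0": {"right": "s1"},
         "s1": {"left": "s0", "right": "s2"},
         "s2": {"left": "s1"}}
Qf, steps = bidirectional_q(chain, {"s2"}, "s0")
```

At termination, Uf(s) = −gd(s) for every state, so the greedy policy on the Qf-values is optimal.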
See [Koenig and Simmons, 1992] for a longer description and a similar algorithm that does not need to know n in advance, always terminates no later than the algorithm stated here, and usually terminates shortly after done(s) = true for all s ∈ S.

Theorem 3 The bi-directional Q-learning algorithm finds an optimal policy and terminates after at most O(en) steps.

The proof of Theorem 3 is similar to that of Theorem 1. The theorem states that the bi-directional Q-learning algorithm has exactly the same big-O worst-case complexity as the Q-learning algorithm for finding a path to a goal state. The complexity becomes O(n³) if the state space has no duplicate actions.⁵ That this bound is tight follows from Figure 3, since determining an optimal policy cannot be easier than reaching a goal state for the first time. It is surprising, however, that the big-O worst-case complexities for both tasks are the same.

⁵The bi-directional Q-learning algorithm can be made more efficient, for example by breaking ties intelligently, but this does not change its big-O worst-case complexity.

Figures 5 and 6 show the run-times of various reinforcement learning algorithms in reset state spaces (i.e., state spaces in which all states have an action that leads back to the start state) and one-dimensional gridworlds of sizes n ∈ [2, 50], in both cases averaged over 5000 runs. The x-axes show the complexity of the state space (measured as en) and the y-axes the number of steps needed to complete the tasks. We use zero-initialized algorithms with action-penalty representation, with ties broken randomly. For determining optimal policies, we distinguish two performance measures: the number of steps until an optimal policy is found (i.e., until U(s) = −gd(s) for every state s), and the number of steps until the algorithm realizes that and terminates.

Figure 5: Reset state space
These graphs confirm our expectations about the various algorithms. For each algorithm, Q-learning and value-iteration, we expect its run-time (i.e., the number of steps needed) for reaching a goal state to be smaller than the run-time for finding an optimal policy, which we expect, in turn, to be smaller than the run-time for terminating with an optimal policy. We also expect the run-time of the efficient⁶ value-iteration algorithm to be smaller than the run-time of the efficient Q-learning algorithm, which we expect to be smaller than the run-time of a random walk, given the same task to be solved.

In addition to these relationships, the graphs show that random walks are inefficient in reset state spaces (where they need 3 × 2^{n−2} − 2 steps on average to reach a goal state), but perform much better in one-dimensional gridworlds (where they only need (n − 1)² steps on average), since the latter are 1-step invertible. But even for gridworlds, the efficient Q-learning algorithms continue to perform better than random walks, since only the former algorithms immediately remember information about the topology of the state space.

Figure 6: One-dimensional gridworld

Extensions

The complexities presented here can also be stated in terms of e and the depth of the state space d (instead of n), allowing one to take advantage of the fact that the depth often grows sublinearly in n. The depth of a state space is the maximum over all pairs of different states of the length of the shortest path between them. All of our results can easily be extended to cases where the actions do not have the same reward.

⁶"Efficient" means to use either the action-penalty representation or one-initialized Q-values (U-values).
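The 3 × 2^{n−2} − 2 average for reset state spaces can be reproduced exactly with the same kind of linear-system calculation used for gridworlds. The topology here is our reconstruction from the text (state 0 has a single action to state 1; every other non-goal state i has one action to i + 1 and one action back to state 0), and the fixed-point solver is our choice:

```python
# Expected steps of a random walk in a reset state space with states
# 0..n-1 and goal n-1, starting at state 0; the fixed point of
# h = 1 + P h gives 3 * 2**(n-2) - 2.

def reset_random_walk_steps(n, iters=100000):
    h = [0.0] * n                        # h[n-1] stays 0 at the goal
    for _ in range(iters):
        new = [0.0] * n
        new[0] = 1.0 + h[1]              # state 0: single action forward
        for i in range(1, n - 1):        # other states: forward or reset
            new[i] = 1.0 + (h[0] + h[i + 1]) / 2.0
        h = new
    return h[0]
```

For n = 5 this converges to 22 = 3 × 2³ − 2, illustrating the exponential growth that makes random walks impractical in reset state spaces.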
The result about 1-step invertible state spaces also holds for the more general case of state spaces that have the following property: for every state, the number of actions entering the state equals the number of actions leaving it.

Reinforcement learning algorithms can be used in state spaces with probabilistic action outcomes. Although the results presented here provide guidance for modeling probabilistic domains, more research is required to transfer the results. [Koenig and Simmons, 1992] contains a discussion of additional challenges encountered in probabilistic domains.

Tight bounds on the number of steps required in the worst case for reaching a goal state, using a zero-initialized algorithm with action-penalty representation or a one-initialized algorithm with goal-reward representation (the same results apply to determining optimal policies):

State Space               | Q-Learning | Value-Iteration
general case              | O(en)      | O(n²)
no duplicate actions      | O(n³)      | O(n²)
linear upper action bound | O(n²)      | O(n²)

Figure 7: Complexities of Reinforcement Learning

Conclusion

Many real-world domains have the characteristic of the task presented here: the agent must reach one of a number of goal states by taking actions, but the initial topology of the state space is unknown. Prior results, which indicated that reinforcement learning algorithms were exponential in n, the size of the state space, seemed to limit their usefulness for such tasks. This paper has shown, however, that such algorithms are tractable when using either the appropriate task representation or suitable initial Q-values. Both changes produce a dense reward structure, which facilitates learning. In particular, we showed that the task of reaching a goal state for the first time is reduced from exponential to O(en), or O(n³) if there are no duplicate actions. Furthermore, the complexity is further reduced if the domain has additional properties, such as a linear
In 1-step invertible state spaces, even the original, inefficient algorithms have a polynomial average-case complexity.

We have introduced the novel bi-directional Q-learning algorithm for finding shortest paths from all states to a goal and have shown, somewhat surprisingly, that its complexity is O(en) as well. This provides an efficient algorithm to learn optimal policies. While not all reinforcement learning tasks can be reformulated as shortest path problems, the theorems still provide guidance: the run-times can be improved by making the reward structure dense, for instance, by subtracting some constant from all immediate rewards.

The results derived for Q-learning can be transferred to value-iteration [Koenig, 1992; Koenig and Simmons, 1992]. The important results are summarized in Figure 7. Note that a value-iteration algorithm that always executes the action that leads to the state with the largest U-value is equivalent to the LRTA* algorithm [Korf, 1990] with a search horizon of one if the state space is deterministic and the action-penalty representation is used [Barto et al., 1991].

In summary, reinforcement learning algorithms are useful for enabling agents to explore unknown state spaces and learn information relevant to performing tasks. The results in this paper add to that research by showing that reinforcement learning is tractable, and therefore can scale up to handle real-world problems.

Acknowledgements

Avrim Blum, Long-Ji Lin, Michael Littman, Joseph O'Sullivan, Martha Pollack, Sebastian Thrun, and especially Lonnie Chrisman (who also commented on the proofs) provided helpful comments on the ideas presented in this paper.

References

Aleliunas, R.; Karp, R.M.; Lipton, R.J.; Lovász, L.; and Rackoff, C. 1979. Random walks, universal traversal sequences, and the complexity of maze problems. In 20th Annual Symposium on Foundations of Computer Science, San Juan, Puerto Rico. 218-223.

Barto, A.G.; Sutton, R.S.; and Watkins, C.J.
1989. Learning and sequential decision making. Technical Report 89-95, Department of Computer Science, University of Massachusetts at Amherst.

Barto, A.G.; Bradtke, S.J.; and Singh, S.P. 1991. Real-time learning and control using asynchronous dynamic programming. Technical Report 91-57, Department of Computer Science, University of Massachusetts at Amherst.

Bellman, R. 1957. Dynamic Programming. Princeton University Press, Princeton.

Benson, G.D. and Prieditis, A. 1992. Learning continuous-space navigation heuristics in real-time. In Proceedings of the Second International Conference on Simulation of Adaptive Behavior: From Animals to Animats.

Deng, X. and Papadimitriou, C.H. 1990. Exploring an unknown graph. In Proceedings of the FOCS.

Kaelbling, L.P. 1990. Learning in Embedded Systems. Ph.D. Dissertation, Computer Science Department, Stanford University.

Koenig, S. and Simmons, R.G. 1992. Complexity analysis of real-time reinforcement learning applied to finding shortest paths in deterministic domains. Technical Report CMU-CS-93-106, School of Computer Science, Carnegie Mellon University.

Koenig, S. 1991. Optimal probabilistic and decision-theoretic planning using Markovian decision theory. Master's thesis, Computer Science Department, University of California at Berkeley. (Available as Technical Report UCB/CSD 92/685).

Koenig, S. 1992. The complexity of real-time search. Technical Report CMU-CS-92-145, School of Computer Science, Carnegie Mellon University.

Korf, R.E. 1990. Real-time heuristic search. Artificial Intelligence 42(2-3):189-211.

Moore, A.W. and Atkeson, C.G. 1992. Memory-based reinforcement learning: Efficient computation with prioritized sweeping. In Proceedings of the NIPS.

Papadimitriou, C.H. and Tsitsiklis, J.N. 1987. The complexity of Markov decision processes. Mathematics of Operations Research 12(3):441-450.

Pemberton, J.C. and Korf, R.E. 1992. Incremental path planning on graphs with cycles.
In Proceedings of the First Annual AI Planning Systems Conference. 179-188.

Peng, J. and Williams, R.J. 1992. Efficient learning and planning within the Dyna framework. In Proceedings of the Second International Conference on Simulation of Adaptive Behavior: From Animals to Animats.

Sutton, R.S. 1990. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning.

Thrun, S.B. 1992. The role of exploration in learning control with neural networks. In White, David A. and Sofge, Donald A., editors 1992, Handbook of Intelligent Control: Neural, Fuzzy and Adaptive Approaches. Van Nostrand Reinhold, Florence, Kentucky.

Watkins, C.J. 1989. Learning from Delayed Rewards. Ph.D. Dissertation, King's College, Cambridge University.

Whitehead, S.D. 1991. A complexity analysis of cooperative mechanisms in reinforcement learning. In Proceedings of the AAAI. 607-613.

Complexity in Machine Learning 105
Colin P. Williams and Tad Hogg
Xerox Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, CA 94304, U.S.A.
CWilliams@parc.xerox.com, Hogg@parc.xerox.com

Abstract

In a previous paper we defined the "deep structure" of a constraint satisfaction problem to be that set system produced by collecting the nogood ground instances of each constraint and keeping only those that are not supersets of any other. We then showed how to use such deep structure to predict where, in a space of problem instances, an abrupt transition in computational cost is to be expected. This paper explains how to augment this model with enough extra details to make more accurate estimates of the location of these phase transitions. We also show that the phase transition phenomenon exists for a much wider class of search algorithms than had hitherto been thought and explain theoretically why this is the case.

1. Introduction

In a previous paper (Williams & Hogg 1992b) we defined the "deep structure" of a constraint satisfaction problem (CSP) to be that set system produced by collecting the nogood ground instances of each constraint and keeping only those that are not supersets of any other. We use the term "deep" because two problems that are superficially different in the constraint graph representation might in fact induce identical sets of minimized nogoods. Hence their equivalence might only become apparent at this lower level of representation. We then showed how to use such deep structure to predict where, in a space of problem instances, the hardest problems are to be found. Typically, this model led to predictions that were within about 15% of the empirically determined correct values. Whilst this model allowed us to understand the observed abrupt change in difficulty (in fact a phase transition) in general terms, in this paper we identify which additional aspects of real problems account for most of the outstanding numerical discrepancy.
This is particularly important because, as larger problems are considered, the phase transition region becomes increasingly spiked. Hence, an acceptable error for small problems could become unacceptable for larger ones.

To this end we have identified two types of error: modelling approximations (such as the assumption that the values assigned to different variables are uncorrelated or that the solutions are not clustered in some special way) and mathematical approximations (such as the assumption that, for a function f(x), ⟨f(x)⟩ = f(⟨x⟩), known as a mean-field approximation). In addition we also widen the domain of applicability of our theory to CSPs solved using algorithms such as heuristic repair (Minton et al. 1990), simulated annealing (Johnson et al. 1991; Kirkpatrick et al. 1983) and GSAT (Selman, Levesque & Mitchell 1992), that work with sets of complete assignments at all times.

152 Williams

In the next section we summarize our basic deep structure theory. Following this, we shall show how to make quantitatively accurate predictions of the phase transition points for 3-COL, 4-COL (graph colouring) and 3-SAT. Our results are summarized in Table 1, where the best approximations are highlighted. Finally, in Section 4 we present experimental evidence for a phase transition effect in heuristic repair and adapt our deep structure theory to account for these observations.

2. Basic Deep Structure Model

Our interest lies in predicting where, in a space of CSP instances, the harder problems typically occur, more or less regardless of the algorithm used. Because the exact difficulty of solving each instance can vary considerably from case to case, it makes more sense to talk about the average difficulty of solving CSPs that are drawn from some pool (or ensemble) of similar problems. This means we need to know something about how the difficulty of solving CSPs changes as small modifications are made to the structure of the constraint problem.
There are many possible types of ensemble that one could choose to study. For example, one might restrict consideration to an ensemble of problems each of whose member instances are guaranteed to have at least one solution. Alternatively, one could study an ensemble in which this requirement is relaxed and each instance may or may not have any solutions. Similarly, one could choose whether the domain sizes of each variable should or should not be the same, or whether the constraints are all of the same size, etc. The possibilities are endless. The best choice of ensemble cannot be determined by mere cogitation but depends on what the CSPs arising in the "real world" happen to be like, and that will inevitably vary from field to field. Lacking any compelling reason to choose one ensemble over another, we made the simplest choice of using an ensemble of CSPs whose instances are not guaranteed to be soluble and having variables with a uniform domain size, b.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Given an ensemble of CSPs, then, the deep structure model allows us to predict which members will typically be harder to solve than others. The steps required to do this can be broken down into:

1. CSP → Deep Structure
2. Deep Structure → Estimate of Difficulty

The first step consists of mapping a given CSP instance into its corresponding deep structure. We chose to think of CSPs that could be represented as a set of constraints over µ variables, each of which can take on one of b values. Each constraint determines whether a particular combination of assignments of values to the variables is consistent ("good") or inconsistent ("nogood"). Collecting the nogoods of all the constraints and discarding any that are supersets of any other, we arrive at a set of "minimized nogoods" which completely characterize the particular CSP. By "deep structure" we mean exactly this set of minimized nogoods.
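The minimization step just described — keep only those nogoods that are not supersets of any other — is straightforward to state in code. A minimal sketch (the representation of nogoods as sets of variable-value pairs and the function name are ours):

```python
def minimize_nogoods(nogoods):
    """Deep structure of a CSP: from the collected nogood ground
    instances, keep only those that are not supersets of another
    nogood.  Each nogood is a set of (variable, value) pairs."""
    distinct = sorted({frozenset(ng) for ng in nogoods}, key=len)
    minimal = []
    for ng in distinct:      # shorter nogoods come first, so any
        if not any(kept <= ng for kept in minimal):  # subset is kept already
            minimal.append(ng)
    return minimal

# Two graph-colouring edge nogoods plus a redundant superset:
nogoods = [{("u", 1), ("v", 1)},
           {("u", 1), ("v", 1), ("w", 2)},   # superset: discarded
           {("u", 2), ("v", 2)}]
```

Sorting by size guarantees that every potential subset of a nogood has already been examined before the nogood itself is considered.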
Unfortunately, reasoning with the explicit sets of minimized nogoods does not promote understanding of generic phenomena or assist theoretical analysis. We therefore attempt to summarize the minimized nogoods with as few parameters as possible and yet still make reasonably accurate quantitative predictions of quantities of interest such as phase transition points and computational costs. As we shall see, such a crude summarization can sometimes throw away important information, e.g. regarding the correlation between values assigned to tuples of variables. Nevertheless, it does allow us to identify which parameters have the most important influence on the quantities of interest. Moreover, one is always free to build a more accurate model, as in fact we do in Section 3.

In our basic model, we found that the minimized nogoods could be adequately summarized in terms of their number, m, and average size, k. Thus we crudely characterize a CSP by just 4 numbers, (µ, b, m, k).

Parameter  Meaning
µ          number of variables
b          number of values per variable
m          number of minimized nogoods
k          average size of minimized nogoods
Fig. 1. A coarse description of a CSP.

Deep Structure → Estimate of Difficulty

Having obtained the crude description of deep structure, we need to estimate how hard it would be to solve such a CSP. The actual value of this cost will depend on the particular algorithm used to solve the CSP. In our original model we assumed a search algorithm that works by extending partial solutions (either in a tree or a lattice) until a solution is found. However, the important point is not so much the actual value of the cost but in predicting where it will attain a maximum, as this corresponds to the point of greatest difficulty. In Section 4 we extend our model to cover the possibility of solving the CSP using an algorithm that works with complete states, e.g. heuristic repair, simulated annealing or GSAT, which requires a different cost measure (still related to the minimized nogoods) to be used.
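The coarse description of Fig. 1 can be extracted mechanically from a set of minimized nogoods. A sketch (names and the nogood representation are ours):

```python
def coarse_description(num_vars, domain_size, minimized_nogoods):
    """Summarize a CSP by the four numbers of Fig. 1:
    mu (number of variables), b (values per variable),
    m (number of minimized nogoods) and k (their average size)."""
    m = len(minimized_nogoods)
    k = sum(len(ng) for ng in minimized_nogoods) / m if m else 0.0
    return num_vars, domain_size, m, k

# Three binary nogoods over 4 variables with 3 values each:
ngs = [{("x", 0), ("y", 0)}, {("y", 1), ("z", 1)}, {("z", 2), ("w", 2)}]
```

For this instance the description is (µ, b, m, k) = (4, 3, 3, 2.0).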
To obtain a definite prediction, we defined "difficulty" to be the cost to find the first solution or to determine there are no solutions. Analytically, this is a hard function to derive and, in the interests of a more tractable analysis, we opted to use a proxy instead: the cost to find all solutions divided by the number of solutions (if there were any), or else the cost to determine there were no solutions, which we approximated as¹

⟨C⟩/⟨N_soln⟩ if there are solutions, ⟨C⟩ otherwise.   (1)

We analyzed what happens to this cost, on average, as the number of minimized nogood ground instances, m = βµ, is increased. Note that we merely write m like this to emphasize that the number of minimized nogoods will grow as larger problems are considered (i.e. as µ increases). The upshot of this analysis was the prediction that, as µ → ∞, the transition occurs where ⟨N_soln⟩ = 1, and so the hardest problems are to be found at a critical value of β given by:

β_crit = ln b / -ln(1 - b^(-k)).   (2)

In other words, if all we are told about a class of CSPs is that there are µ variables (with µ >> 1), each variable takes one of b values and each minimized nogood is of size k, then we expect the hardest examples of this class to be when there are m_crit = β_crit µ nogoods.

¹N.B. this approximation will fail if the solutions are tightly clustered.

Constraint-Based Reasoning 153

3. More Accurate Predictions

We have tested this formula on two kinds of CSPs: graph colouring and k-SAT, and compared its predictions against experimental data obtained by independent authors. Typically this formula gave predictions that were within about 15% of the empirically observed values. The remaining discrepancy can be attributed to one of two basic kinds of error: First, there can be errors in the model (e.g. due to assuming that the values assigned to different variables are uncorrelated). Second, there can be errors due to various mathematical approximations (e.g.
the mean-field approximation that ⟨C/N_soln⟩ ≈ ⟨C⟩/⟨N_soln⟩). Interestingly, graph colouring is more affected by errors in the model whereas k-SAT is more affected by errors in the mean-field approximation. These two CSPs then will serve as convenient examples of how to augment our basic deep structure model with sufficient extra details to permit a more accurate estimation of the phase transition points.

Graph Colouring

A graph colouring problem consists of a graph containing µ nodes (i.e. variables) that have to be assigned certain colours (i.e. values) such that no two nodes at either end of an edge have the same colour. Thus the edges provide implicit constraints between the values assigned to the pair of nodes they connect. Therefore, if we are only allowed to use b colours, then each edge would contribute exactly b nogoods and every nogood would be of size 2, so k = 2. Plugging these values into equation 2 gives the prediction that the hardest to colour graphs occur when β_crit = 9.3 (3-COL) and β_crit = 21.5 (4-COL), in contrast to the experimentally measured values of 8.1 ± 0.3 and 18 ± 1 respectively. This approximation isn't too bad; nevertheless, we will now show how to make it even better by taking more careful account of the structure of the nogoods that arise in graph colouring.

Imprecision due to Model

The key insight is to realize that in our derivation of formula 2 we assume the nogoods are selected independently. Thus each set of m nogoods is equally likely. However, in the context of graph colouring this is not the case because each edge introduces nogoods with a rather special structure. Specifically, each edge between nodes u and v introduces b minimal nogoods of the form {u = i, v = i} for i from 1 to b, which changes, for a given number of minimized nogoods, the expected number of solutions, as follows. Consider a state at the solution level, i.e., an assigned value for each of µ variables, in which the value i is used c_i times, with Σ_{i=1}^{b} c_i = µ. In order for this state to be a
In order for this state to be a i=l solution, none of its subsets must be among the selected nogoods. This requires that the graph not contain an edge between any variables with the same assignment. This b excludes a total of c ( “2 ) edges. With random graphs with e edges, the p&bability that this given state will be a solution is just P(fcil) = ((3 -~,:.,) (3) ( 1 (z) 2 e By summing over all states at the solution level, this gives the expected number of solutions: (4) where the multinomial coefficient counts the number of states with specified numbers of assigned values. For the asymptotic behaviour, note that the multinomial becomes sharply peaked around states with an equal num- ber of each value, i.e., cd = p/b. This also minimizes the number of excluded edges c ( “2) giving a maximum in p( { ci }) as well. Thus the sum for ( Nsoln) will be domi- nated by these states and Stirling’s approximation2 can be used to give (5) because the number of minimal nogoods is related to the number of edges by m = ,Bp = eb. With this replacement for In (N,,l,) our derivation of the phase transition point proceeds as before by determin- ing the point where the leading term of this asymptotic behaviour is zero, corresponding to ( Ns oln) = I, hence: P blnb crit = - In (1 - +) (6) which is different from the prediction of our basic model as given in equation 2. This result can also be obtained more directly by assuming conditional independence among the nogoods introduced by each edge (Cheeseman, Kanefsky & Taylor 1992). For the cases of 3 and rt-colouring, equation 6 now allows US to predict Pcrit = 8.1 and 19.3, respectively, close to the empirical values given by Cheeseman et al. 2i.e. Inx!wxlnx-xasxt.00. 
k-SAT

Empirical studies by Mitchell, Selman & Levesque (Mitchell, Selman & Levesque 1992) on the cost of solving k-SAT problems using the Davis-Putnam procedure (Franco & Paull 1983) allow us to compare the predictions of our basic model against a second type of CSP. In k-SAT, each of the µ variables appearing in the formula can take on one of two values, true or false. Thus there are b = 2 values for each variable. Each clause appearing in the given formula is a disjunction of (possibly negated) variables. Hence the clause will fail to be true for exactly one assignment of values to the k variables appearing in it. This in turn gives rise to a single nogood, of size k. Distinct clauses will give rise to distinct nogoods, so the number of these nogoods is just the number of distinct clauses in the formula. Thus, using equation 2, our basic model, with b = 2, k = 3, predicts the 3-SAT transition to be at β_crit = 5.2, which is above the empirically observed value of 4.3. However, as we show below, the outstanding error is largely attributable to the inaccuracy of the mean-field approximation, and there is a simple remedy for this.

Imprecision due to Mean-field Approximation

Cheeseman et al. observed that the phase transition for graph colouring occurred at the point when the probability of having at least one solution fell abruptly to zero. In a longer version of this paper (Williams & Hogg 1992a) we explain why this is to be expected. One way of casting this result, which happens to be particularly amenable to mathematical analysis, is to hypothesize that the phase transition in cost should occur when ⟨1/(1 + N_soln)⟩ transitions from being near zero to being near 1. In order to estimate this point, we consider the Taylor series approximation (Papoulis 1990, p129):

⟨1/(1 + N_soln)⟩ ≈ 1/(1 + ⟨N_soln⟩) + var(N_soln)/(1 + ⟨N_soln⟩)^3.   (7)

In figure 2 we plot measured values of ⟨1/(1 + N_soln)⟩ together with its truncated Taylor series approximation versus β for increasing values of µ.
This proxy sharpens to a step function as µ → ∞, apparently at the same point as that reported by Cheeseman et al. Fortunately, although the truncated Taylor series approximation overshoots the true value of ⟨1/(1 + N_soln)⟩ before finally returning to a value of 1 at high β, it nevertheless is accurate in the vicinity of the phase transition point, as required, and may therefore be used. Hence, the true transition point can be estimated as the value of β at which the right hand side of equation 7 equals 1/2.

[Fig. 2. Behaviour of ⟨1/(1 + N_soln)⟩ vs β for b = 2, k = 3 (3-SAT). The dark curves show empirical data for µ = 10 (dashed) and µ = 20 (solid). The light curves show the corresponding two-term Taylor series approximation to ⟨1/(1 + N_soln)⟩.]

As the true transition point precedes the old one (predicted using equation 2), i.e. β_crit^new < β_crit, and as there are exponentially many solutions for β < β_crit, the first term in the above approximation must be negligible at the true transition for large enough values of µ. In this case we can estimate β_crit^new as the value of β at which

var(N_soln) = (1/2) (1 + ⟨N_soln⟩)^3.   (8)

By the same argument as that in (Williams & Hogg 1992b) we can show

var(N_soln) = ⟨N_soln^2⟩ - ⟨N_soln⟩^2   (9)

with

⟨N_soln^2⟩ = b^µ Σ_{r=0}^{µ} C(µ, r) (b - 1)^(µ-r) p_r   (10)

where p_r denotes the weight contributed by pairs of states with overlap r. The ⟨N_soln^2⟩ term is obtained by counting how many ways there are of picking m = βµ nogoods of size k such that a given pair of nodes at the solution level are both good and have a prescribed overlap r, weighted by the number of ways sets can be picked such that they have this overlap. Finally, this is averaged over all possible overlaps. With these formulae the phase transition can be located as the fixed point solution (in β) to equation 7. For µ = 10 or 20 this gives the transition point at β = 4.4.
Asymptotically, one can obtain an explicit formula for the new critical point by applying Stirling's formula to equations 9 and 10, approximating equation 10 as an integral with a single dominant term and factoring a coefficient as a numerical integral. This gives a slightly higher critical point of β_crit = 4.546, and predicts the new functional form for the critical number of minimized nogoods as m_crit = β_crit µ + const + O(1/µ) with const = 3.966. The results for our basic model, the correlation model (for graph colouring) and the correction to mean field model (for k-SAT) are collected together in Table 1, where the best results are highlighted.

[Table 1. Comparisons of our basic theory and various refinements thereof with empirical data obtained by other authors. The numbers in the column headings refer to the equations used to calculate that column's entries.]

[Fig. 3. Median search cost for heuristic repair as a function of β for the case b = 3, k = 2. The solid curve, for µ = 20, has a maximum at β = 9. The dashed curve, for µ = 10, has a broad peak in the same region.]

4. Heuristic Repair Has a Phase Transition Too

The above results show that the addition of a few extra details to the basic deep structure model allows us to make quantitatively accurate estimates of the location of phase transition points. However, the question of the applicability of these results to other search methods, in particular those that operate on complete states, such as heuristic repair, simulated annealing and GSAT, remains open. In this section we investigate the behaviour of such methods and show theoretically and empirically that they also exhibit a phase transition in search cost at about the same point as the tree-based searches.
In figure 3 we plot the median search cost for solving random CSPs with b = 3, k = 2 versus our order parameter β (the ratio of the number of minimized nogoods to the number of variables) using the heuristic repair algorithm. As for other search algorithms, we see a characteristic easy-hard-easy pattern, with the peak sharpening as larger problem instances are considered.

To understand this, recall that heuristic repair, simulated annealing and GSAT all attempt to improve a complete state through a series of incremental changes. These methods differ on the particular changes allowed and how decisions are made amongst them. In general they all guide the search toward promising regions of the search space by emphasizing local changes that decrease a cost function such as the number of remaining conflicting constraints. In our model, the number of conflicting constraints for a given state is equal to the number of nogoods of which it is a superset. A complete state is minimal when every possible change in value assignment would increase or leave unchanged the number of conflicts.

These heuristics provide useful guidance until a state is reached for which none of the local changes considered give any further reduction in cost. To the extent that many of these local minimal or equilibrium states are not solutions, they provide points where these search methods can get stuck. In such situations, practical implementations often restart the search from a new initial state, or perform a limited number of local changes that leave the cost unchanged in the hope of finding a better state before restarting. Thus the search cost for difficult problems will be dominated by the number of minimal points, N_minimal, encountered relative to the number of solutions, N_soln. Thus our proxy is ⟨N_minimal / N_soln⟩, with ⟨N_minimal⟩ = b^µ p_minimal, where p_minimal is the probability that a given state (at the solution level) is minimal.
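For tiny instances the two counts in this proxy can be obtained by brute force over all b^µ complete states. A sketch (an illustrative census, not the authors' estimator; note that under the definition above every solution is itself trivially minimal):

```python
from itertools import product

def census(num_vars, b, nogoods):
    """Count, over all b**num_vars complete states, the solutions and
    the minimal states: states where no single-variable change strictly
    reduces the number of conflicting (matched) nogoods."""
    def conflicts(state):
        # number of nogoods of which the state is a superset
        return sum(all(state[v] == val for v, val in ng) for ng in nogoods)
    n_soln = n_minimal = 0
    for values in product(range(b), repeat=num_vars):
        state = dict(enumerate(values))
        c = conflicts(state)
        if c == 0:
            n_soln += 1
        # minimal: every one-variable change leaves conflicts >= c
        if all(conflicts({**state, v: w}) >= c
               for v in range(num_vars) for w in range(b) if w != state[v]):
            n_minimal += 1
    return n_minimal, n_soln

# One binary nogood over two Boolean variables: 3 solutions, all minimal,
# and the single conflicting state can escape in one move.
```

The ratio of the two returned counts is the cost proxy discussed above.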
This in turn is just given by the ratio of the number of ways to pick m nogoods such that the given state is minimal to the total number of ways to pick m nogoods. Of course, we should be aware that the mean-field approximation will again introduce some quantitative error. In figure 4 we plot this cost proxy and the mean-field approximation to it, for µ = 10, b = 3, k = 2. This predicts that the hardest problems occur around β = 9.5. Compare this with the empirical data in figure 3. We see that heuristic repair does indeed find certain problems harder than others, and the numerical agreement between predicted and observed critical points is quite good, suggesting that ⟨N_minimal/N_soln⟩ is an adequate proxy for the true cost. Thus our deep structure theory applies to sophisticated search methods beyond the tree search algorithms considered previously. Moreover, our experience suggests the exact form for the proxy is not that critical, provided it tracks the actual cost measure faithfully.

[Fig. 4. Ratio of number of minimal points to number of solutions vs. β for the case of µ = 10, b = 3, k = 2 (dashed curve, with maximum at β = 9.5) and its mean-field approximation (grey, with maximum at 12).]

5. Conclusions

The basic deep structure model (Williams & Hogg 1992b) typically led to predictions of phase transition points that were within about 15% of the empirically determined values. However, both empirical observations and theory suggest that the phase transition becomes sharper the larger the problem considered, making it important to determine the location of transition points more precisely. To this end, we identified modelling approximations (such as neglecting correlations in the values assigned to different variables) and mathematical approximations (such as the mean field approximation) as the principal factors impeding proper estimation of the phase transition points.
We then showed how to incorporate such influences into the model, resulting in the predictions reported in Table 1. This shows that the deep structure model is capable of making quantitatively accurate estimates of the location of phase transition points for all the problems we have considered. However, we again reiterate that the more important result is that our model predicts the qualitative existence of the phase transition at all, as this shows that fairly simple computational models can shed light on generic computational phenomena. A further advantage of our model is that it is capable of identifying the coarse functional dependencies between problem parameters. This allows actual data to be fitted to credible functional forms from which numerical coefficients can be determined, allowing scaling behaviour to be anticipated.

Our belief that phase transitions are generic is buoyed by the results we report for heuristic repair. This is an entirely different kind of search algorithm than the tree or lattice-like methods considered previously, and yet it too exhibits a phase transition at roughly the same place as the tree search methods. We identified the ratio of the number of minimal states to the number of solutions as an adequate cost proxy which can be calculated from the minimized nogoods.

References

Cheeseman, P.; Kanefsky, B.; and Taylor, W. M. 1991. Where the Really Hard Problems Are. In Proc. of the Twelfth International Joint Conference on Artificial Intelligence, 331-337, Morgan Kaufmann.

Cheeseman, P.; Kanefsky, B.; and Taylor, W. M. 1992. Computational Complexity and Phase Transitions. In Proc. of the Physics of Computation Workshop, IEEE Computer Society.

Franco and Paull 1983. Probabilistic Analysis of the Davis-Putnam Procedure for Solving Satisfiability Problems, Discrete Applied Mathematics 5:77-87.

Huberman, B.A. and Hogg, T. 1987. Phase Transitions in Artificial Intelligence Systems, Artificial Intelligence, 33:155-171.

Johnson, D., Aragon, C., McGeoch, L., Schevon, C., 1991.
Optimization by Simulated Annealing: An experimental evaluation; part II, graph coloring and number partitioning, Operations Research, 39(3):378-406.

Kirkpatrick, S., Gelatt, C., Vecchi, M., 1983. Optimization by Simulated Annealing. Science 220:671-680.

Minton, S., Johnston, M., Philips, A., Laird, P. 1990. Solving Large-scale Constraint Satisfaction and Scheduling Problems using a Heuristic Repair Method. In Proc. AAAI-90, pp 17-24.

Mitchell, D., Selman, B., Levesque, H., 1992. Hard and Easy Distributions of SAT Problems. In Proceedings of the 10th National Conference on Artificial Intelligence, AAAI-92, pp 459-465, San Jose, CA.

Morris, P., 1992. On the Density of Solutions in Equilibrium Points for the N-Queens Problem. In Proceedings of the 10th National Conference on Artificial Intelligence, AAAI-92, pp 428-433, San Jose, CA.

Papoulis, A., 1990. Probability & Statistics, p129, Prentice Hall.

Selman, B., Levesque, H., Mitchell, D., 1992. A New Method for Solving Hard Satisfiability Problems. In Proceedings of the 10th National Conference on Artificial Intelligence, AAAI-92, pp 440-446, San Jose, CA.

Williams, C. P. and Hogg, T. 1991. Typicality of Phase Transitions in Search, Tech. Rep. SSL-91-04, Xerox Palo Alto Research Center, Palo Alto, California (to appear in Computational Intelligence 1993).

Williams, C. P. and Hogg, T. 1992a. Exploiting the Deep Structure of Constraint Problems. Tech. Rep. SSL-92-24, Xerox Palo Alto Research Center, Palo Alto, CA.

Williams, C. P. and Hogg, T. 1992b. Using Deep Structure to Locate Hard Problems. In Proc. 10th National Conf. on Artificial Intelligence, AAAI-92, pp 472-477, San Jose, CA.
Christian Bessière
LIRMM, University of Montpellier II
161, rue Ada
34392 Montpellier Cedex 5, FRANCE
Email: bessiere@lirmm.fr

Marie-Odile Cordier
IRISA, University of Rennes I
Campus de Beaulieu
35042 Rennes, FRANCE
Email: cordier@irisa.fr

Abstract

Constraint networks are known as a useful way to formulate problems such as design, scene labeling, temporal reasoning, and more recently natural language parsing. The problem of the existence of solutions in a constraint network is NP-complete. Hence, consistency techniques have been widely studied to simplify constraint networks before or during the search of solutions. Arc-consistency is the most used of them. Mohr and Henderson [Moh&Hen86] have proposed AC-4, an algorithm having an optimal worst-case time complexity. But it has two drawbacks: its space complexity and its average time complexity. In problems with many solutions, where the size of the constraints is large, these drawbacks become so important that users often replace AC-4 by AC-3 [Mac&Fre85], a non-optimal algorithm. In this paper, we propose a new algorithm, AC-6, which keeps the optimal worst-case time complexity of AC-4 while working out the drawback of space complexity. Moreover, the average time complexity of AC-6 is optimal for constraint networks where nothing is known about the semantics of the constraints. At the end of the paper, experimental results show how much AC-6 outperforms AC-3 and AC-4.

1. Introduction

There is no need to show the importance of arc-consistency in Constraint Networks. Originating from Waltz [Waltz72], who developed it for vision problems, it has been studied by Mackworth and Freuder [Mackworth77], [Mac&Fre85], and by Mohr and Henderson [Moh&Hen86], who have proposed an algorithm having an optimal worst-case time complexity: O(ed^2), where e is the number of constraints (or relations) and d the size of the largest domain. In [Bessiere91] its use has been extended to dynamic constraint networks.
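For later contrast, the non-optimal AC-3 [Mac&Fre85] mentioned above can be sketched in a few lines: it revises whole arcs repeatedly, re-examining every remaining support each time a domain shrinks. (An illustrative reimplementation; the data representation and names are ours.)

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3: make every arc consistent by repeatedly revising domains.
    `domains` maps a variable to a list of values; `constraints` maps an
    ordered pair (i, j) to a predicate true iff that pair of values is
    allowed.  Returns the arc-consistent domains, or None on a wipe-out."""
    D = {i: list(vals) for i, vals in domains.items()}
    def rel(i, j, a, b):
        if (i, j) in constraints:
            return constraints[(i, j)](a, b)
        return constraints[(j, i)](b, a)
    arcs = {(i, j) for (i, j) in constraints} | {(j, i) for (i, j) in constraints}
    queue = deque(arcs)
    while queue:
        i, j = queue.popleft()
        # keep only the values of Di that still have a support in Dj
        revised = [a for a in D[i] if any(rel(i, j, a, b) for b in D[j])]
        if len(revised) < len(D[i]):
            D[i] = revised
            if not revised:
                return None                     # domain wipe-out
            queue.extend((k, l) for (k, l) in arcs if l == i and k != j)
    return D
```

For example, on x, y in {1, 2, 3} with the constraint x < y, AC-3 prunes the domains to x in {1, 2} and y in {2, 3}.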
Marie-Odile Cordier
IRISA, University of Rennes I
Campus de Beaulieu
35042 Rennes, FRANCE
Email: cordier@irisa.fr

Recently, Van Hentenryck, Deville and Teng [Dev&VanH91], [VanH&al92] have proposed a generic algorithm which can be implemented with all known techniques, and have extracted classes of networks on which there exist algorithms running arc-consistency in O(ed). In 1992, Perlin [Perlin92] has given properties of arc-consistency on factorable relations. Everybody now looks for the arc-consistency complexity of particular classes of constraint networks, because AC-4 [Moh&Hen86] has an optimal worst-case complexity and it is supposed that we cannot do better. But AC-4's drawbacks are its average time complexity, which is too close to its worst-case time complexity, and, moreover, its space complexity, which is O(ed²). In applications with a large number of values in the variables' domains and with weak constraints, AC-3 is often used instead of AC-4 because of its space complexity. Such situations appear, for example, when domains encode discrete intervals and constraints are defined as arithmetic relations (≥, <, ≠, ...). Constraint logic programming (CLP) languages [Din&al88], which are big consumers of arc-consistency (arc-consistency has some good properties in CLP), are concerned by these problems. In problems with many solutions, where the constraints are weak, AC-4's initialization step is very long because it requires considering the relations in their entirety to construct its data structure. In those cases, AC-3 [Mac&Fre85] runs faster than AC-4 in spite of its non-optimal time complexity. In this paper we propose a new algorithm, AC-6, which, while keeping the O(ed²) optimal worst-case time complexity of AC-4, discards the problem of space complexity (AC-6's space complexity is O(ed)) and checks just enough data in the constraints to compute the arc-consistent domain.
AC-4 looks for all the reasons for a value to be in the arc-consistent domain: it checks, for each value, all the values compatible with it (called its supports) to prove this value is viable. AC-6 only looks for one reason per constraint to prove that a value is viable: it checks, for each value, one support per constraint, looking for another one only when the current support is removed from the domain. The rest of the paper is organized as follows. Section 2 gives some preliminaries on constraint networks and arc-consistency. Section 3 presents the algorithm AC-6. In section 4, experimental results show how much AC-6 outperforms the algorithms AC-3 and AC-4¹. A conclusion is given in section 5.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

2. Preliminaries
A network of binary constraints (CN) is defined as a set of n variables {i, j, ...}, a set of domains D = {Di, Dj, ...} where Di is the set of possible values for variable i, and a set of binary constraints between variables. A binary constraint (or relation) Rij between variables i and j is a subset of the Cartesian product Di × Dj that specifies the allowed pairs of values for i and j. Following Montanari [Montanari74], a binary relation Rij between variables i and j is usually represented, by imposing an ordering on the domains of the variables, as a (0,1)-matrix (or a matrix of booleans) with |Di| rows and |Dj| columns. Value true at row a, column b, denoted Rij(a, b), means that the pair consisting of the ath element of Di and the bth element of Dj is permitted; value false means the pair is not permitted. In all the networks of interest here, Rij(a, b) = Rji(b, a). In some applications (constraint logic programming, temporal reasoning, ...), Rij is defined as an arithmetic relation (=, ≠, <, ≥, ...) without giving the matrix of allowed and not allowed pairs of values.
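The (0,1)-matrix representation just described can be made concrete with a small sketch (the variable and dictionary names here are illustrative, not from the paper):

```python
# Two variables 0 and 1 with ordered domains; R[(i, j)][a][b] is True iff the
# pair (a-th value of Di, b-th value of Dj) is allowed by the constraint.
domains = {0: [1, 2, 3], 1: [1, 2, 3]}

# The constraint "X0 < X1", given extensionally as a matrix of booleans.
R = {(0, 1): [[domains[0][a] < domains[1][b] for b in range(3)]
              for a in range(3)]}

# The reverse arc stores the transpose, so that Rij(a, b) = Rji(b, a).
R[(1, 0)] = [[R[(0, 1)][a][b] for a in range(3)] for b in range(3)]

assert R[(0, 1)][0][1]      # (X0 = 1, X1 = 2) is allowed: 1 < 2
assert not R[(1, 0)][0][1]  # (X1 = 1, X0 = 2) is forbidden: not 2 < 1
```

Arithmetic constraints such as ≥ or ≠ would instead be kept intensionally, as the last sentence above notes, and tested on demand.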
A graph G can be associated to a constraint network, where nodes correspond to variables in the CN and an edge links nodes i and j every time there is a relation Rij on variables i and j in the CN. For the purpose of this paper, we consider G as a symmetric directed graph with arcs (i, j) and (j, i) in place of the edge {i, j}. A solution of a constraint network is an instantiation of the variables such that all the constraints are satisfied.

Definition. Given the constraint Rij, value b in Dj is called a support for value a in Di if the pair (a, b) is allowed by Rij (i.e. Rij(a, b) is true). A value a for a variable i is viable if, for every variable j such that Rij exists, a has a support in Dj. The domain D of a CN is arc-consistent if, for every variable i in the CN, all the values in Di are viable.

3. Arc-consistency with AC-6
3.1. Preamble
As Mohr and Henderson underlined in [Moh&Hen86], arc-consistency is based on the notion of support. As long as a value a for a variable i (denoted (i, a)) has supporting values on each of the other variables j linked to i in the constraint graph, a is considered a viable value for i. But once there exists a variable on which no remaining value satisfies the relation with (i, a), then a must be eliminated from Di. The algorithm proposed in [Moh&Hen86] makes this support explicit by assigning a counter counter[(i, j), a] to each arc-value pair involving the arc (i, j) and the value a on the variable i. This counter records the number of supports of (i, a) in Dj. For each value (j, b), a set Sjb is constructed, where Sjb = {(i, a) / (j, b) supports (i, a)}. Then, if (j, b) is eliminated from Dj, counter[(i, j), a] must be decremented for each (i, a) in Sjb. This data structure is at the origin of AC-4's optimal worst-case time complexity.
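The counter and support-set construction just described might look as follows in Python; this is only the initialization phase of AC-4 (propagation would then decrement counters through S), and the function and variable names are ours, with relations given intensionally as predicates:

```python
from collections import defaultdict

def ac4_init(domains, R):
    # domains: {var: set of values}; R: {(i, j): predicate(a, b)}
    counter = {}          # counter[(i, j), a]: number of supports of (i, a) in Dj
    S = defaultdict(set)  # S[(j, b)]: the values that (j, b) supports
    delete_list = []      # values left without support, awaiting propagation
    for (i, j), allowed in R.items():
        for a in list(domains[i]):
            total = 0
            for b in domains[j]:
                if allowed(a, b):
                    total += 1
                    S[(j, b)].add((i, a))
            counter[(i, j), a] = total
            if total == 0:
                domains[i].discard(a)
                delete_list.append((i, a))
    return counter, S, delete_list

# "X0 = X1" over {1, 2} and {2, 3}: (0, 1) and (1, 3) start with no support.
doms = {0: {1, 2}, 1: {2, 3}}
rel = {(0, 1): lambda a, b: a == b, (1, 0): lambda a, b: b == a}
counter, S, deleted = ac4_init(doms, rel)
assert set(deleted) == {(0, 1), (1, 3)} and doms == {0: {2}, 1: {2}}
```

Note how every allowed pair is visited once, which is exactly the O(ed²) space and initialization cost the paper criticizes.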
But computing the number of supports for each value (i, a) on each constraint Rij, and recording all the values (i, a) supported by each value (j, b), implies an expensive space complexity of O(ed²) (the size of the support sets Sjb), and an average time complexity increasing with the number of allowed pairs in the relations, since the number of supports is proportional to the number of allowed pairs. The purpose of AC-6 is then to avoid the expensive checking of the relations to find all the supports for all the values. AC-6 keeps the same principle as AC-4, but instead of checking all the supports for a value, it only checks one support (the first one) for each value (i, a) on each constraint Rij to prove that (i, a) is currently viable. When (j, b) is found as the smallest support of (i, a) on Rij, (i, a) is added to Sjb, the list of values currently having (j, b) as smallest support. If (j, b) is removed from Dj, then AC-6 looks for the next support in Dj for each value (i, a) in Sjb. The only requirement in the use of AC-6 is to have a total ordering on each domain Dj. But this is not a restriction, since in any implementation a total ordering is imposed on the domains. This ordering is independent of any ordering computed in a rearrangement strategy for searching solutions.

¹AC-5 [VanH&al92] is not discussed here, since it is not an improvement but a generic framework in which all previous algorithms can be written.

3.2. The algorithm
The algorithm proposed here works with the following data structure:
• A table M of booleans keeps track of which values of the initial domain are in the current domain or not (M(i, a) = true ⟺ a ∈ Di). In this table, each initial Di is considered as the integer range 1..|Di|, but it can be a set of values of any type with a total ordering on these values. We use the following constant-time functions to handle the Di sets, considered as lists:
- first(Di) returns the smallest value in Di.
- last(Di) returns the largest value in Di.
- next(a, Di) returns the value a' in Di such that every value larger than a and smaller than a' is out of Di.
• Sjb = {(i, a) / (j, b) is the smallest value in Dj supporting (i, a) on Rij}, while in AC-4 it contained all the values supported by (j, b).
• The counters for each arc-value pair used in AC-4 are not used in AC-6.
• A list List contains values deleted from the domain but for which the propagation of the deletion has not been processed yet. In AC-4, when a value (j, b) was deleted, it was added to List, waiting for the propagation of the consequences of its deletion. These consequences were to decrement counter[(i, j), a] for every (i, a) in Sjb and to delete (i, a) when counter[(i, j), a] became equal to zero. In AC-6, the use of List is unchanged, but the consequence of the deletion of (j, b) is now to find another support for every (i, a) in Sjb. Having an ordering on Dj, we look after b (the old support) for another value c in Dj supporting (i, a) on Rij (we know there is no such value before b). When such a value c is found, (i, a) is added to Sjc, since (j, c) is the new smallest support for (i, a) in Dj. If no such value exists, (i, a) is removed and put in List.
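Putting the pieces together, here is a runnable Python sketch of AC-6 as just described. The names (`ac6`, `next_support`, `work`) are ours, and for brevity `next_support` rescans the ordered domain from its start instead of resuming from the old support with next(), so this sketch does not preserve the O(ed²) bound of the real algorithm:

```python
from collections import defaultdict

def ac6(domains, R):
    # domains: {var: iterable of values}; R: {(i, j): predicate(a, b)}
    D = {i: sorted(d) for i, d in domains.items()}
    member = {(i, a): True for i in D for a in D[i]}  # the paper's table M
    S = defaultdict(list)  # S[(j, b)]: values whose smallest support is (j, b)
    work = []              # the paper's List of deleted, unpropagated values

    def next_support(i, j, a, b):
        # smallest value c >= b still in Dj supporting (i, a) on Rij, or None
        for c in D[j]:
            if c >= b and member[(j, c)] and R[(i, j)](a, c):
                return c
        return None

    def remove(i, a):
        member[(i, a)] = False
        work.append((i, a))

    # initialization: find one (the smallest) support per value and per arc
    for (i, j) in R:
        for a in D[i]:
            if not member[(i, a)]:
                continue
            b = next_support(i, j, a, D[j][0]) if D[j] else None
            if b is None:
                remove(i, a)
            else:
                S[(j, b)].append((i, a))

    # propagation: re-support every value that a deleted (j, b) was supporting
    while work:
        j, b = work.pop()
        for (i, a) in S.pop((j, b), []):
            if member[(i, a)]:
                c = next_support(i, j, a, b)
                if c is None:
                    remove(i, a)
                else:
                    S[(j, c)].append((i, a))

    return {i: [a for a in D[i] if member[(i, a)]] for i in D}

# X0 < X1 < X2 over {1,2,3}: arc-consistency prunes to {1}, {2}, {3}.
doms = {0: [1, 2, 3], 1: [1, 2, 3], 2: [1, 2, 3]}
rel = {(0, 1): lambda a, b: a < b, (1, 0): lambda a, b: b < a,
       (1, 2): lambda a, b: a < b, (2, 1): lambda a, b: b < a}
assert ac6(doms, rel) == {0: [1], 1: [2], 2: [3]}
```

At most one S entry exists per value and per arc at any time, which is where the O(ed) space bound comes from.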
AC-6 uses the following procedure to find the smallest value in Dj not smaller than b and supporting (i, a) on Rij:

procedure nextsupport(in i, j, a : integer; in out b : integer;
                      out emptysupport : boolean);
begin
  {search for the smallest value as large as b that belongs to Dj; this part
   is not needed in the call of the procedure done in the initialization
   step, since b already belongs to Dj}
  while not M(j, b) and b < last(Dj) do b ← b + 1;
  emptysupport ← not M(j, b);
  {search for the smallest support for (i, a) in Dj}
  while not Rij(a, b) and not emptysupport do
    if b < last(Dj) then b ← next(b, Dj)
    else emptysupport ← true
end;

The algorithm AC-6 has the same framework as AC-4. In the initialization step, we look for a support for every value (i, a) on each constraint Rij to prove that (i, a) is viable. If there exists a constraint Rij on which (i, a) has no support, it is removed from Di and put in List. In the propagation step, values (j, b) are taken from List to propagate the consequences of their deletion: finding another support (j, c) for the values (i, a) they were supporting (the values (i, a) in Sjb). When such a value c in Dj is not found, (i, a) is removed from Di and put in List in its turn.

{initialization}
for (i, a) ∈ D do Sia ← ∅; M(i, a) ← true;
for (i, j) ∈ arcs(G) do
  for a ∈ Di do
  begin
    if Dj = ∅ then emptysupport ← true
    else begin b ← first(Dj); nextsupport(i, j, a, b, emptysupport) end;
    if emptysupport then
      begin Di ← Di \ {a}; M(i, a) ← false; Append(List, (i, a)) end
    else Append(Sjb, (i, a))
  end

{propagation}
while List ≠ ∅ do
begin
  choose (j, b) from List and remove (j, b) from List;
  for (i, a) ∈ Sjb do  {before its deletion, (j, b) was the smallest
                        support in Dj for (i, a) on Rij}
  begin
    remove (i, a) from Sjb;
    if M(i, a) then
    begin
      c ← b; nextsupport(i, j, a, c, emptysupport);
      if emptysupport then
        begin Di ← Di \ {a}; M(i, a) ← false; Append(List, (i, a)) end
      else Append(Sjc, (i, a))
    end
  end
end

3.3. Correctness of AC-6
Here are the key steps of a complete proof of the correctness of AC-6.
In this section we denote by maxAC the maximal arc-consistent domain which is expected to be computed by an arc-consistency algorithm.
• In AC-6, a value (i, a) is removed from Di only when it has no support in Dj on a constraint Rij. If all previously removed values are out of maxAC, then (i, a) is out of maxAC. maxAC was trivially included in D when AC-6 started. Then, by induction, (i, a) is out of maxAC. Thus, maxAC ⊆ D is an invariant property of AC-6.
• Every time a value (j, b) is removed, it is put in List until the values it was supporting are checked for new supports. Every time a value (i, a) is found without support on a constraint, it is removed from D. Thus, every value (i, a) in D has at least one support in D ∪ List on each constraint Rij. AC-6 terminates with List empty. Hence, after AC-6, every value in D has at least one support in D on each constraint.

[...] corresponding to a house position (e.g. assigning the value 2 to the variable horse means that the horse owner lives in the second house) [Dechter88].

References
Bessiere, C. 1991. "Arc-Consistency in Dynamic Constraint Satisfaction Problems"; Proceedings 9th National Conference on Artificial Intelligence, Anaheim CA, 221-226
Dechter, R. 1988. "Constraint Processing Incorporating Backjumping, Learning, and Cutset-Decomposition"; Proceedings 4th IEEE Conference on AI for Applications, San Diego CA, 312-319
Deville, Y. and Van Hentenryck, P. 1991. "An Efficient Arc Consistency Algorithm for a Class of CSP Problems"; Proceedings 12th International Joint Conference on Artificial Intelligence, Sydney, Australia, 325-330
Dincbas, M., Van Hentenryck, P., Simonis, H., Aggoun, A., Graf, T. and Berthier, F. 1988. "The constraint logic programming language CHIP"; Proceedings International Conference on Fifth Generation Computer Systems, Tokyo, Japan
Janssen, P., Jegou, P., Nouguier, B., Vilarem, M.C. and Castro, B. 1990. "SYNTIA: Assisted Design of Peptide Synthesis Plans"; New Journal of Chemistry, 14-12, 969-976
Mackworth, A.K. 1977.
"Consistency in Networks of Relations"; Artificial Intelligence 8, 99-118
Mackworth, A.K. and Freuder, E.C. 1985. "The Complexity of Some Polynomial Network Consistency Algorithms for Constraint Satisfaction Problems"; Artificial Intelligence 25, 65-74
Mohr, R. and Henderson, T.C. 1986. "Arc and Path Consistency Revisited"; Artificial Intelligence 28, 225-233
Montanari, U. 1974. "Networks of Constraints: Fundamental Properties and Applications to Picture Processing"; Information Science 7, 95-132
Perlin, M. 1992. "Arc-consistency for factorable relations"; Artificial Intelligence 53, 329-342
Van Hentenryck, P., Deville, Y. and Teng, C.M. 1992. "A generic arc-consistency algorithm and its specializations"; Artificial Intelligence 57, 291-321
Waltz, D.L. 1972. "Understanding Line Drawings of Scenes with Shadows"; in: The Psychology of Computer Vision, McGraw Hill, 1975, 19-91 (first published in: Tech. Rep. AI271, MIT, MA, 1972)
On the Consistency of General Constraint-Satisfaction Problems

Philippe Jégou
L.I.U.P. - Universite de Provence
3, Place Victor Hugo
F13331 Marseille cedex 3, France
jegou@gyptis.univ-mrs.fr

Abstract
The problem of checking for consistency of Constraint-Satisfaction Problems (CSPs) is a fundamental problem in the field of constraint-based reasoning. Moreover, it is a hard problem, since satisfiability of CSPs belongs to the class of NP-complete problems. In (Freuder 1982), Freuder gave theoretical results concerning consistency of binary CSPs (two variables per constraint). In this paper, we propose an extension of these results to general CSPs (n-ary constraints). On the one hand, we define a partial consistency well adjusted to general CSPs, called hyper-k-consistency. On the other hand, we propose a measure of the connectivity of hypergraphs, called the width of hypergraphs. Using the width of hypergraphs and hyper-k-consistency, we derive a theorem defining a sufficient condition for consistency of general CSPs.

Introduction
Constraint-satisfaction problems (CSPs) involve the assignment of values to variables which are subject to a set of constraints. Examples of CSPs are map coloring, conjunctive queries in relational databases, line drawing understanding, pattern matching in production rule systems, combinatorial puzzles... In the general case, checking for the satisfiability (i.e. consistency) of a CSP is an NP-complete problem. A well known method for solving CSPs is the Backtrack procedure. The complexity of this procedure is exponential in the size of the CSP, and consequently this approach frequently induces "combinatorial explosion". So, many works try to improve the search efficiency. Three important classes of methods have been proposed:
1. Improving Backtrack search: e.g. dependency-directed backtracking, Forward Checking (Haralick & Elliot 1980), etc.
2. Improving the representation of the problem before search: e.g.
techniques of achieving local consistencies using arc-consistent filtering (Mohr & Henderson 1986).
3. Decomposition methods: these techniques are based on an analysis of topological features of the constraint network related to a given CSP; these methods generally have better complexity upper bounds than Backtrack methods.
The first two classes of methods do not improve the theoretical complexity of solving CSPs, but give good practical results on many problems. The methods of the third class are based on theoretical results due to Freuder (Freuder 1982) (e.g. the cycle-cutset method (Dechter 1990)) or on research in the field of relational database theory (Beeri et al. 1983) (e.g. tree-clustering (Dechter & Pearl 1989)). These theoretical results associate a structural property of a given constraint network (e.g. an acyclic network) with a semantic property related to a partial consistency (e.g. arc-consistency). These two properties permit the derivation of a theorem concerning global consistency of the CSP and its tractability. Intuitively, the more connected the network is, the larger the consistency the CSP must satisfy, and consequently the harder the problem is to solve. These theoretical results have two practical benefits: on the one hand, to define polynomial classes of CSPs, and on the other hand, to elaborate decomposition methods. In this paper, we propose a theoretical result that is a generalization of the results given in (Freuder 1982) and in relational database theory (Beeri et al. 1983). Indeed, the theorem given by Freuder concerns binary CSPs (only two variables per constraint), and this limitation induces practical problems for its application. On the contrary, the property given in the field of relational databases concerns n-ary CSPs (no limitation on the number of variables per constraint), but only CSPs with no cycle. The theorem given in this paper concerns binary and n-ary CSPs, and cyclic constraint networks.
It permits us to define a sufficient condition for the global consistency of general CSPs. This property associates a structural measure of the connectivity of the network, called the width of the hypergraph (in the spirit of Freuder), with a semantic property of CSPs related to partial consistency of n-ary CSPs, called hyper-k-consistency.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

It is known that any non-binary CSP can be treated as a binary CSP if one looks at the dual representation or join-graph (this representation has been defined in the field of relational databases: constraints are variables and binary constraints impose equality on common variables). But this approach is of limited interest: it does not allow extending all theorems and algorithms to non-binary CSPs. For example, the width of an n-ary CSP cannot be defined exactly as the width of its join graph (see the example in figure 3). So, original definitions are introduced in this paper. The second section presents definitions and preliminaries. In the third section, we define hyper-k-consistency, while in the next section we introduce the notion of width of hypergraphs. The last section exposes the consistency theorem and gives comments about its usability.

Definitions and preliminaries
Finite Constraint-Satisfaction Problems
A General Constraint-Satisfaction Problem involves a set X of n variables X1, X2, ... Xn, each defined by its finite domain of values D1, D2, ... Dn (d denotes the maximum cardinality over all the Di). D is the set of all domains. C is the set of constraints C1, C2, ... Cm. A constraint Ci is defined as a set of variables (Xi1, Xi2, ... Xiji). To any constraint Ci, we associate a subset of the Cartesian product Di1 × ... × Diji that is denoted Ri (Ri specifies which values of the variables are compatible with each other; Ri is a relation, so it is a set of tuples). R is the set of all Ri.
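The quadruple P = (X, D, C, R) can be transcribed directly; the following sketch (our names, not the paper's) shows a CSP mixing binary and ternary constraints, with relations stored as sets of allowed tuples:

```python
# X: variables; D: domains; C: constraints (variable tuples); R: relations
X = ["X1", "X2", "X3"]
D = {"X1": {0, 1}, "X2": {0, 1}, "X3": {0, 1}}
C = [("X1", "X2"), ("X2", "X3"), ("X1", "X2", "X3")]
R = {("X1", "X2"): {(0, 1), (1, 0)},              # X1 != X2
     ("X2", "X3"): {(0, 1), (1, 0)},              # X2 != X3
     ("X1", "X2", "X3"): {(0, 1, 0), (1, 0, 1)}}  # a ternary relation

def is_solution(assignment):
    # a solution assigns every variable and satisfies every Ri
    return all(tuple(assignment[v] for v in Ci) in R[Ci] for Ci in C)

assert is_solution({"X1": 0, "X2": 1, "X3": 0})
assert not is_solution({"X1": 0, "X2": 0, "X3": 1})
```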
So, we denote a CSP P = (X, D, C, R). A solution is an assignment of values to all variables satisfying all the constraints. Given a CSP P = (X, D, C, R), the hypergraph (X, C) is called the constraint hypergraph (nodes are variables and hyper-edges are defined by constraints). A binary CSP is one in which all the constraints are binary, i.e. only pairs of variables are possible, so (X, C) is a graph, called the constraint graph. For a given CSP, the problem is either to find all solutions or one solution, or to know if there exists any solution. The decision problem (existence of a solution) is known to be NP-complete. We use two relational operators: projection of relations (if X' ⊆ Ci, the projection of Ri on X' is denoted Ri[X']) and join of relations, denoted Ri ⋈ Rj; see formal definitions in (Maier 1983).

Partial consistencies in CSPs
Different levels of consistency have been introduced in the field of CSPs. The methods to achieve these local consistencies are considered as filtering algorithms: they may lead to problem simplifications, without changing the solution set. They have been used both to improve the representation prior to the search and to avoid backtracking during the search (Haralick & Elliot 1980). Historically, the first partial consistency proposed was arc-consistency. Its generalization was given in (Freuder 1978).

Definition 1 (Freuder 1978). A CSP is k-consistent iff for every set of k - 1 variables and every consistent assignment of these variables (one that satisfies all the constraints among them), for every kth variable Xk, there exists a value in the domain Dk that satisfies all the constraints among the k variables. A CSP is strongly k-consistent iff the CSP is j-consistent for j = 1, ... k.

Given a CSP and a value k, the complexity of the algorithm achieving k-consistency is O(n^k d^k) (Cooper 1989). But achieving k-consistency on a binary CSP generally induces new constraints, with arity up to k - 1.
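The two relational operators can be sketched as follows for relations stored as sets of tuples tagged by their variable list (our names; Maier gives the formal definitions):

```python
def project(scope, rel, onto):
    # Ri[X']: keep the columns of `onto` (a sublist of `scope`)
    idx = [scope.index(v) for v in onto]
    return {tuple(t[k] for k in idx) for t in rel}

def join(s1, r1, s2, r2):
    # R1 |><| R2: concatenate the tuples that agree on the shared variables
    shared = [v for v in s1 if v in s2]
    scope = list(s1) + [v for v in s2 if v not in s1]
    out = set()
    for t1 in r1:
        for t2 in r2:
            if all(t1[s1.index(v)] == t2[s2.index(v)] for v in shared):
                out.add(t1 + tuple(t2[s2.index(v)] for v in s2 if v not in s1))
    return scope, out

# R1 on (X1, X2) joined with R2 on (X2, X3):
s, r = join(["X1", "X2"], {(0, 1), (1, 0)}, ["X2", "X3"], {(1, 1), (0, 2)})
assert s == ["X1", "X2", "X3"] and r == {(0, 1, 1), (1, 0, 2)}
assert project(s, r, ["X3"]) == {(1,), (2,)}
```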
Consequently, a binary CSP can be transformed into an n-ary CSP using this method (e.g. achieving 4-consistency). Another partial consistency has been defined particularly for n-ary CSPs: pairwise-consistency (Janssen et al. 1989), also called inter-consistency (Jegou 1991). This consistency is based on works concerning relational databases (Beeri et al. 1983). Whereas k-consistency is a local consistency between variables, domains and constraints, inter-consistency defines a consistency between constraints and relations. Contrary to k-consistency, which does not consider structural features of the constraint network, inter-consistency is particularly adjusted to the connections in n-ary CSPs, because connections correspond to intersections between constraints.

Definition 2 (Beeri et al. 1983) (Janssen et al. 1989). We say that P = (X, D, C, R) is inter-consistent iff ∀Ci, Cj, Ri[Ci ∩ Cj] = Rj[Ci ∩ Cj], and ∀Ri, Ri ≠ ∅.

In (Janssen et al. 1989) a polynomial algorithm achieving this consistency is given. This algorithm is based on an equivalent binary representation given in the next section.

Binary representation for n-ary CSPs
In this representation, the vertices of the constraint graph are the n-ary constraints Ci, their domains are the associated relations Ri, and the edges, which are new constraints, are given by the intersections between the Ci. The compatibility relations are then given by the equality constraints between the connected Ri. This binary representation is called the constraint intergraph associated to a constraint hypergraph (Jegou 1991).

Definition 3. A hypergraph H is a pair (X, C) where X is a finite set of vertices and C a set of hyper-edges, i.e. subsets of X. When the cardinality of every hyper-edge is two, the hypergraph is a graph (necessarily undirected). Given a CSP (X, D, C, R), we consider its associated hypergraph, denoted (X, C).

Definition 4 (Bernstein & Goodman 1981).
Given a hypergraph H = (X, C), an intergraph of H is a graph G(H) = (C, E) such that:
• E ⊆ {{Ci, Cj} ⊆ C / i ≠ j and Ci ∩ Cj ≠ ∅}
• ∀Ci, Cj ∈ C, if Ci ∩ Cj ≠ ∅, there is a chain (Ci = C1, C2, ... Cq = Cj) in G(H) such that ∀k, 1 ≤ k < q, Ci ∩ Cj ⊆ Ck ∩ Ck+1.

Intergraphs are also called line-graphs, join-graphs (Maier 1983) and dual-graphs (Dechter & Pearl 1989).

Definition 5 (Jegou 1991). Given a CSP (X, D, C, R), we define an equivalent (with an equivalent set of solutions) binary CSP (C, R, E, Q):
• (C, E) is an intergraph of the hypergraph (X, C).
• C = {C1, ... Cm} is a set of variables defined on the domains R = {R1, ... Rm}.
• if {Ci, Cj} ∈ E, then we have an equality constraint: Qij = {(ri, rj) ∈ Ri × Rj / ri[Ci ∩ Cj] = rj[Ci ∩ Cj]}.

Given a hypergraph, there can exist several associated intergraphs. Some of them can contain redundant edges that can be deleted to obtain another intergraph. The maximal one is called the representative graph. In the field of CSPs, we are naturally interested in minimal intergraphs: all edges are necessary, i.e. no edge can be deleted while conserving the chain property of intergraphs. An algorithm has been proposed to find minimal intergraphs in (Janssen et al. 1989). A study of combinatorial properties of minimal intergraphs is given in (Jegou & Vilarem 1993). In the next example, two minimal intergraphs are given:

Figure 1. Hypergraph (a) and two minimal intergraphs.

A sufficient condition for CSP consistency
Freuder has identified sufficient conditions for a binary CSP to satisfy consistency, i.e. satisfiability. These conditions associate the topology of the constraint graph with partial consistency.

Definition 6 (Freuder 1982). An ordered constraint graph is a constraint graph in which nodes are linearly ordered. The width of a node is the number of edges that link that node to previous nodes. The width of an order is the maximum width of all nodes. The width of a graph is the minimum width of all orderings of that graph.
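Definition 6 can be transcribed directly; the sketch below (our names) checks every ordering, which is exponential and only suitable for tiny graphs — Freuder gives an efficient algorithm for the width:

```python
from itertools import permutations

def order_width(order, edges):
    # width of a node = number of edges linking it to its predecessors;
    # width of the order = maximum node width
    pos = {v: k for k, v in enumerate(order)}
    return max(sum(1 for u in order if pos[u] < pos[v] and {u, v} in edges)
               for v in order)

def graph_width(nodes, edges):
    # width of the graph = minimum width over all orderings (brute force)
    return min(order_width(p, edges) for p in permutations(nodes))

# A tree has width 1; a cycle (here a triangle) has width 2.
assert graph_width([1, 2, 3], [{1, 2}, {1, 3}]) == 1
assert graph_width([1, 2, 3], [{1, 2}, {1, 3}, {2, 3}]) == 2
```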
This definition is illustrated in figure 1: the width of the graph (b) is 2 and the width of the graph (c) is 3. On this example, we can remark that the width of a hypergraph cannot be defined as the width of its minimal intergraphs, because all minimal intergraphs do not have the same width.

Theorem 7 (Freuder 1982). Given a CSP, if the level of strong consistency is greater than the width of the constraint graph, then the CSP is consistent and it is possible to find solutions without backtracking (in polynomial time).

Freuder also gave an algorithm to compute the width of any graph (in O(n²)). So, given a CSP, it is sufficient to know the width of the constraint graph, denoted k - 1, and then to achieve k-consistency. But a problem appears: this approach is possible only for acyclic constraint graphs (width equal to one) and a subclass of graphs whose width is two (called regular graphs of width two in (Dechter & Pearl 1988)). The cause of this problem: achieving k-consistency generally induces n-ary constraints (arity can be equal to k - 1), so the corresponding problem is a constraint hypergraph, and the theorem cannot be applied. Nevertheless, the result concerning acyclic CSPs is applied in the cycle-cutset method (Dechter 1990), and we can consider Freuder's theorem as a vehicle to give a lower bound for the complexity of a binary CSP: on the order of d^k if its width is k - 1, because the complexity of achieving k-consistency is O(n^k d^k) (Cooper 1989).

A result of relational database theory
A similar property has been derived in the field of relational databases. This property is related to acyclic hypergraphs.

Definition 8 (Beeri et al. 1983). A hypergraph is acyclic iff there exists a linear order (C1, C2, ... Cm) such that ∀i, 1 < i ≤ m, ∃ji < i / (Ci ∩ (C1 ∪ ... ∪ Ci-1)) ⊆ Cji (this property is called the running intersection property).

Figure 2. Cyclic (a) and acyclic (b) hypergraphs.

(Beeri et al.
1983) gave a fundamental property of acyclic database schemes that concerns the consistency of such databases, namely global consistency. This result is presented below using CSP terminology:

Definition 9. Let P = (X, D, C, R) be a CSP. We say that P is globally consistent if there is a relation S over the variables X (S is the set of solutions) such that ∀i, 1 ≤ i ≤ m, Ri = S[Ci]. It is equivalent to ∀i, 1 ≤ i ≤ m, (R1 ⋈ ... ⋈ Rm)[Ci] = Ri, since S = R1 ⋈ ... ⋈ Rm.

Note that global consistency of CSPs implies satisfiability of CSPs. Indeed, a CSP is globally consistent iff every tuple of the relations appears in at least one solution. Furthermore, it is clear that global consistency implies inter-consistency, but the converse is false. We give the interpretation in the field of CSPs of the property given by (Beeri et al. 1983):

Theorem 10 (Beeri et al. 1983). If P is such that its constraint hypergraph is acyclic, then P is inter-consistent ⟺ P is globally consistent.

An immediate application of this theorem concerns the consistency checking of CSPs. We know polynomial algorithms to achieve inter-consistency, while checking global consistency is an NP-complete problem. So, knowing this theorem, if the database scheme is acyclic, it is possible to check global consistency in polynomial time by achieving inter-consistency. This result is applied in the tree-clustering method (Dechter & Pearl 1989).

Some remarks
Theorems 7 and 10 are significant. First, they can be used to solve CSPs, immediately if the considered CSP is acyclic, or for all CSPs, using decomposition methods such as the cycle-cutset method or the tree-clustering scheme. Second, because they define polynomial subclasses of CSPs. Nevertheless, there are two significant limitations to these theoretical results. On the one hand, theorem 7 is only defined on binary CSPs, and so it can be applied neither to n-ary CSPs nor to constraint graphs with width greater than 2. On the other hand, theorem 10 concerns only acyclic n-ary CSPs. So, a generalization to cyclic n-ary CSPs is necessary to extend this kind of theoretical approach to all CSPs.

A new consistency for n-ary CSPs: hyper-k-consistency
When Freuder defined k-consistency, the definition was related to assignments: "given a consistent assignment of variables X1, X2, ... Xk-1, it is possible to extend this assignment to any kth variable". To generalize k-consistency to n-ary CSPs, we consider the same approach but with constraints and relations: we consider "assignments" of constraints C1, C2, ... Ck-1, and their extension to any kth constraint. Our definition of hyper-k-consistency is given in this spirit:

Definition 11. A CSP P = (X, D, C, R) is hyper-k-consistent iff ∀Ri, Ri ≠ ∅ and ∀C1, C2, ... Ck-1, Ck ∈ C,
(R1 ⋈ ... ⋈ Rk-1)[(C1 ∪ ... ∪ Ck-1) ∩ Ck] ⊆ Rk[(C1 ∪ ... ∪ Ck-1) ∩ Ck].
P is strongly hyper-k-consistent iff ∀i, 1 ≤ i ≤ k, P is hyper-i-consistent.

We can note that hyper-2-consistency is equivalent to inter-consistency. So, hyper-k-consistency really constitutes a generalization of inter-consistency to greater levels. Actually, this definition can be considered as a formulation of k-consistency on the constraint intergraph: (r1, r2, ... rk-1) ∈ (R1 ⋈ ... ⋈ Rk-1) signifies that (r1, r2, ... rk-1) is a consistent assignment of the constraints C1, C2, ... Ck-1, and if there is rk ∈ Rk agreeing with it on (C1 ∪ ... ∪ Ck-1) ∩ Ck, then (r1, r2, ... rk-1, rk) is a consistent assignment of the constraints C1, C2, ... Ck-1, Ck, i.e. of k variables of the constraint intergraph. This particularity induces a method to achieve hyper-k-consistency, based on the same approach as achieving k-consistency on binary CSPs. So, we have the same kind of problems: achieving hyper-k-consistency can modify the constraint hypergraph. A second problem concerns the complexity of achieving hyper-k-consistency: the complexity is on the order of r^k if r is the maximum size of the Ri. These problems are discussed in (Jegou 1991). Another remark about hyper-k-consistency concerns its links with global consistency; we easily verify that if a CSP P is hyper-m-consistent, then P is globally consistent, while the converse is generally false. The definition of hyper-k-consistency for n-ary CSPs concerns the connections in hypergraphs, i.e. the intersections between hyper-edges. So the definition of the width of hypergraphs is based on the same principles.

Width of hypergraphs
Connections in hypergraphs concern intersections between hyper-edges. So, the consistency of an n-ary CSP is intimately connected to the intersections between hyper-edges. The definition of the width of a hypergraph allows us to define a degree of cyclicity of hypergraphs. Using this width, we shall work out links between structural properties of hypergraphs and the global consistency of n-ary CSPs. Before giving our definition of the width of a hypergraph, we must explain why this definition is not immediately related to intergraphs. A first reason has already been given: all minimal intergraphs of a hypergraph do not necessarily have the same width (see figure 1). Another reason is the following: if we define the width of a hypergraph as the width of one of its intergraphs (not necessarily a minimal one), we cannot obtain the same properties as we have with Freuder's theorem, which is based on a good order for the assignment of the variables:
Another remark about hyper-k-consistency concerns its links with global consistency; we easily verify that if a CSP P is hyper-m-consistent, then P is globally consistent while the converse is generally false. The definition of hyper-k-consistency in n-ary CSP concerns connections in hypergraphs, i.e. intersection- s between hyper-edges. So the definition of width of hypergraph is based on the same principles. Connections in hypergraphs concerns intersections be- tween hyper-edges. So, the consistency of a n-ary CSP is intimately connected to the intersections between hyper-edges. The definition of width of hypergraph allows us to define a degree of cyclicity of hypergraph- s. Using this width, we shall work out links between structural properties of hypergraphs and the global consistency of n-ary CSPs. Before to give our definition of the width of an hy- pergraph, we must explain why this definition is not immediatly related to intergraphs. A first reason has already been given: all minimal intergraphs of a. hyper- graph have not necessary the same width (see figure 1). An other rea.son is the next one: if we define the width of an hypergraph as the width of one of its intergraph (not necessary a minimal one), we cannot obtain the same properties than we have with Freuder’s theorem that is based on a good order for the assignment of the variables: Constraint-Based Reasoning 117 Figure 3. Problem of order on variables. The hypergraph considered in the figure 4 is the one of the figure 1. We consider two possible orders on this intergraph. The first one is not a possible order for the assignment of the variables. Indeed, when the variable corresponding to C’s is assigned, the variable 1 of the hypergra.ph has already been assigned and so, it is pos- sible to assign Ca with an other value for 1. This is possible because C2 is given before C’s in the order a.nd because there is no edge between C2 and Ca. 
So, we can obtain two different assignments for variable 1, and finally a consistent assignment on C1, C2, ... C5, and consequently an assignment on variables X1, X2, ... X6 that is not a solution of the problem. The second order, of width 3, does not induce such problems.

Definition 12. Given a hypergraph H = (X, C), O the set of linear orders on C, and a linear order τ = (C1, ... Cm) ∈ O:
- the width of Ci in order τ on H is the number of maximal intersections with predecessors of Ci in τ; L_τ(Ci) denotes the width of Ci in order τ:
  L_τ(Ci) = |{Ci ∩ Cj : j < i ∧ ¬∃k, k < i, Ci ∩ Cj ⊊ Ci ∩ Ck}|
- the width of τ is L_H(τ) = max {L_τ(Ci) : Ci ∈ C}
- the width of H is L(H) = min {L_H(τ) : τ ∈ O}

Figure 4. Width of the hypergraph (order C1, C3, C4, C5, C2).

In figure 4, we see the width of the hypergraph H given in figure 1-a. Here, L_H(τ) = 3 since L_τ({3,5,6}) = 3; we can verify that ∀τ ∈ O, L_H(τ) = 3, and consequently that L(H) = 3.

A property relates width and cyclicity:

Proposition 13. H is acyclic ⟺ L(H) = 1

Proof. H acyclic satisfies the running intersection property
⟺ ∃ order (C1, C2, ... Cm) such that ∀i, 1 < i ≤ m, ∃ji < i, (Ci ∩ (∪_{k<i} Ck)) ⊆ C_{ji}
⟺ ∃ order (C1, C2, ... Cm) such that ∀i, 1 < i ≤ m, |{Ci ∩ Cj : j < i ∧ ¬∃k, k < i, Ci ∩ Cj ⊊ Ci ∩ Ck}| ≤ 1
⟺ ∃ order τ = (C1, C2, ... Cm), L_H(τ) ≤ 1
⟺ L(H) ≤ 1.
Moreover, it is clear that if H is connected and C possesses more than one hyper-edge, the inequality L(H) ≥ 1 necessarily holds.

Given a hypergraph (X, C) and an order on C, it is not hard to find the width of this order: it suffices to compute, for each Ci, the number of maximal intersections with predecessors, and to select the greatest. On the contrary, finding an order that gives the minimal width of a hypergraph is an optimization problem; its complexity seems to us to be open: does it belong to the NP-hard problems? This question is at present an open question. In (Jégou 1991), a heuristic is proposed to find small widths, i.e.
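The width of an order, and (by exhaustive minimization over orders) of a small hypergraph, can be computed directly from Definition 12. This Python sketch is our own illustration; we discard empty intersections, which makes no difference for connected hypergraphs.

```python
from itertools import permutations

def width_of_order(order):
    """Width of a linear order of hyper-edges: the maximum, over the
    edges Ci, of the number of maximal intersections of Ci with its
    predecessors in the order (Definition 12)."""
    width = 0
    for i, ci in enumerate(order):
        inters = {frozenset(ci) & frozenset(cj) for cj in order[:i]}
        inters.discard(frozenset())  # ignore empty intersections
        maximal = [s for s in inters if not any(s < t for t in inters)]
        width = max(width, len(maximal))
    return width

def hypergraph_width(edges):
    """L(H): the minimum width over all linear orders on the edges
    (feasible only for small hypergraphs)."""
    return min(width_of_order(list(o)) for o in permutations(edges))
```

A chain of edges such as {1,2}, {2,3}, {3,4} has width 1 (it is acyclic, matching Proposition 13), whereas the triangle {1,2}, {2,3}, {1,3} has width 2.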
just an approximation of the width of a given hypergraph.

Consistency Theorem

In this section, we derive a sufficient condition for the consistency of general CSPs. This condition concerns the width of the hypergraph associated to the CSP, and the hyper-k-consistency that the relations satisfy (i.e. the value k).

Theorem 14. Let P = (X, D, C, R) be a CSP and H = (X, C). If P is strongly hyper-k-consistent, and if L(H) ≤ k - 1, then P is consistent.

Proof. We must show that P is consistent, i.e. ∀i, 1 ≤ i ≤ m, ∃ri ∈ Ri such that (⋈_{i=1..m} ri) ∈ (⋈_{i=1..m} Ri). That is, (r1, r2, ... rm) can be considered as a consistent assignment on (C1, C2, ... Cm): ∀i, j, 1 ≤ i, j ≤ m, ri[Ci ∩ Cj] = rj[Ci ∩ Cj]. We prove this property by induction on p, with 1 ≤ p ≤ m.

If p = 1, the property trivially holds. We consider now a linear order (C1, C2, ... Cm) associated to the width L(H) ≤ k - 1. Suppose that the property holds for p - 1, with 1 < p ≤ m. That is, we have (r1, r2, ... r_{p-1}) such that ∀i, 1 ≤ i ≤ p - 1, ri ∈ Ri, and ∀i, j, 1 ≤ i, j ≤ p - 1, ri[Ci ∩ Cj] = rj[Ci ∩ Cj].

By definition of the width, Cp possesses at most k - 1 maximal intersections with predecessors in the order. Let C_{i1}, C_{i2}, ... C_{iq} be the corresponding Ci, with necessarily q ≤ k - 1, considering only one Ci for every maximal intersection. P being strongly hyper-k-consistent, and since q ≤ k - 1, we have
  (⋈_{j=1..q} R_{ij})[(∪_{j=1..q} C_{ij}) ∩ Cp] ⊆ Rp[(∪_{j=1..q} C_{ij}) ∩ Cp]
and for the r_{ij}'s appearing in (r1, r2, ... r_{p-1}),
  (⋈_{j=1..q} r_{ij})[(∪_{j=1..q} C_{ij}) ∩ Cp] ∈ Rp[(∪_{j=1..q} C_{ij}) ∩ Cp]
so ∃rp ∈ Rp such that rp is consistent with (r_{i1}, r_{i2}, ... r_{iq}), that is: ∀j, 1 ≤ j ≤ q, r_{ij}[C_{ij} ∩ Cp] = rp[C_{ij} ∩ Cp].

We show now that rp is also consistent with all the ri's, i.e. ri[Ci ∩ Cp] = rp[Ci ∩ Cp]. Consider Ci with 1 ≤ i < p and Ci ∩ Cp ≠ ∅. By the definition of the width, ∃j, 1 ≤ j ≤ q, such that Ci ∩ Cp ⊆ C_{ij} ∩ Cp, because the C_{ij}'s are maximal for the intersection with Cp. Consequently, we have Ci ∩ Cp ⊆ Ci ∩ C_{ij}.
By hypothesis, we have ri[Ci ∩ C_{ij}] = r_{ij}[Ci ∩ C_{ij}]; a fortiori, we have ri[Ci ∩ Cp] = r_{ij}[Ci ∩ Cp]. We have seen that r_{ij}[C_{ij} ∩ Cp] = rp[C_{ij} ∩ Cp]; this implies that r_{ij}[Ci ∩ Cp] = rp[Ci ∩ Cp]. Consequently, ri[Ci ∩ Cp] = r_{ij}[Ci ∩ Cp] = rp[Ci ∩ Cp]. So, (r1, r2, ... r_{p-1}, rp) is a consistent assignment on (C1, C2, ... C_{p-1}, Cp). So the property holds for p, and consequently, P is consistent.

If we recall the property given about acyclic database schemes (Beeri et al. 1983), it is clear that theorem 10 is a corollary of theorem 14 (because k = 2). A more interesting result is the next corollary:

Corollary 15. Let P = (X, D, C, R) be a CSP such that (X, C) is a graph (i.e. all hyper-edges have cardinality 2). If P is strongly hyper-3-consistent, then P is consistent.

Proof. It is sufficient to remark that if (X, C) is a graph, its width is at most 2, because all hyper-edges of (X, C) are edges, and an edge has no more than 2 maximal intersections.

Nevertheless, this surprising corollary is not always usable, because achieving hyper-k-consistency can modify the hypergraph associated to an n-ary CSP. Freuder's theorem has the same kind of problem: trying to obtain its preconditions can modify these preconditions. So, concerning the practical use of the theorem, a problem is given by the verification of hyper-k-consistency in a CSP. On one hand, the theorem gives a sufficient condition for consistency, not a necessary condition; on the other hand, given a value k, it is possible to obtain hyper-k-consistency using filtering mechanisms (Jégou 1991) in time polynomial in the size of the CSP for fixed k. But this process can modify the hypergraph by adding new hyper-edges, and so modify the width. Nevertheless, contrary to Freuder's theorem, the consistency theorem can still be applied after modification of the width, because it is directly defined on n-ary CSPs.
Consequently, the theorem must for now be considered primarily as a theoretical result, with, at this moment, only one practical application (the corollary given in Beeri et al. 1983), and not as a directly usable result. Future research must aim to exploit the theorem, on one hand to try to find new polynomial classes of CSPs, and on the other hand to propose new practical methods to solve n-ary CSPs.

References

Beeri, C., Fagin, R., Maier, D. and Yannakakis, M. 1983. On the desirability of acyclic database schemes. Journal of the ACM 30:479-513.

Dechter, R. 1990. Enhancement Schemes for Constraint Processing: Backjumping, Learning and Cutset Decomposition. Artificial Intelligence 41:273-312.

Dechter, R. and Pearl, J. 1988. Network-based heuristics for constraint satisfaction problems. Artificial Intelligence 34:1-38.

Dechter, R. and Pearl, J. 1989. Tree Clustering for Constraint Networks. Artificial Intelligence 38:353-366.

Freuder, E.C. 1978. Synthesizing constraint expressions. Communications of the ACM 21:958-967.

Freuder, E.C. 1982. A sufficient condition for backtrack-free search. Journal of the ACM 29(1):24-32.

Haralick, R.M. and Elliot, G.L. 1980. Increasing tree search efficiency for constraint-satisfaction problems. Artificial Intelligence 14:263-313.

Janssen, P., Jégou, P., Nouguier, B. and Vilarem, M.C. 1989. A filtering process for general constraint satisfaction problems: achieving pairwise-consistency using an associated binary representation. In Proceedings of the IEEE Workshop on Tools for Artificial Intelligence, 420-427. Fairfax, USA.

Jégou, P. 1991. Contribution à l'étude des problèmes de satisfaction de contraintes... Thèse de Doctorat, Université Montpellier II, France.

Jégou, P. and Vilarem, M.C. 1993. On some partial line graphs of a hypergraph and the associated matroid. Discrete Mathematics. To appear.

Maier, D. 1983. The Theory of Relational Databases. Computer Science Press.

Mohr, R. and Henderson, T.C. 1986.
Arc and path consistency revisited. Artificial Intelligence 28(2):225-233.
Towards an Understanding of Hill-climbing Procedures for SAT

Ian P. Gent and Toby Walsh
Department of Artificial Intelligence, University of Edinburgh
80 South Bridge, Edinburgh EH1 1HN, United Kingdom
Email: I.P.Gent@edinburgh.ac.uk, T.Walsh@edinburgh.ac.uk

Abstract

Recently several local hill-climbing procedures for propositional satisfiability have been proposed which are able to solve large and difficult problems beyond the reach of conventional algorithms like Davis-Putnam. By the introduction of some new variants of these procedures, we provide strong experimental evidence to support our conjecture that neither greediness nor randomness is important in these procedures. One of the variants introduced seems to offer significant improvements over earlier procedures. In addition, we investigate experimentally how performance depends on their parameters. Our results suggest that run-time scales less than simply exponentially in the problem size.

Introduction

Recently several local hill-climbing procedures for propositional satisfiability have been proposed [Gent and Walsh, 1992; Gu, 1992; Selman et al., 1992]. Propositional satisfiability (or SAT) is the problem of deciding if there is an assignment for the variables in a propositional formula that makes the formula true. SAT was one of the first problems shown to be NP-hard [Cook, 1971]. SAT is of considerable practical interest as many AI tasks can be encoded quite naturally into it (eg. planning [Kautz and Selman, 1992], constraint satisfaction, vision interpretation [Reiter and Mackworth, 1989], refutational theorem proving). Much of the interest in these procedures is because they scale well and can solve large and difficult SAT problems beyond the reach of conventional algorithms like Davis-Putnam. These hill-climbing procedures share three common features.
First, they attempt to determine the satisfiability of a formula in conjunctive normal form (a conjunction of clauses, where a clause is a disjunction of literals). Second, they hill-climb on the number of satisfied clauses. Third, their local neighbourhood (which they search for a better truth assignment) is the set of truth assignments with the assignment to one variable changed. Typical of such procedures is GSAT [Selman et al., 1992], a greedy random hill-climbing procedure. GSAT starts with a randomly generated truth assignment, and hill-climbs by changing (or "flipping") the variable assignment which gives the largest increase in the number of clauses satisfied. Given the choice between equally good flips, it picks one at random.

In [Gent and Walsh, 1992] we investigated three features of GSAT. Is greediness important? Is randomness important? Is hill-climbing important? One of our aims is to provide stronger and more complete answers to these questions. We will show that neither greediness nor randomness is important. We will also propose some new procedures which show considerably improved performance over GSAT on certain classes of problems. Finally, we will explore how these procedures scale, and how to set their parameters. As there is nothing very special about GSAT or the other procedures we analyse, we expect that our results will translate to any procedure which performs local hill-climbing on the number of satisfied clauses (for example SATI. and SAT&O [Gu, 1992]). To perform these experiments, we use a generalisation of GSAT called "GenSAT" [Gent and Walsh, 1992].

*This research was supported by SERC Postdoctoral Fellowships to the authors. We thank Alan Bundy, Bob Constable, Judith Underwood and the members of the Mathematical Reasoning Group for their constructive comments and their estimated 100 trillion CPU cycles.
procedure GenSAT(C)
  for i := 1 to Max-tries
    T := initial(C)                     ; generate a truth assignment
    for j := 1 to Max-flips
      if T satisfies C then return T
      else
        Poss-flips := hill-climb(C, T)  ; compute best local neighbours
        V := pick(Poss-flips)           ; pick one to flip
        T := T with V's assignment flipped
    end
  end
  return "no satisfying assignment found"

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

GSAT is an instance of GenSAT in which initial generates a random truth assignment, hill-climb returns those variables whose truth assignment, if flipped, gives the greatest increase in the number of clauses satisfied (called the "score" from now on), and pick chooses one of these variables at random. An important feature of GSAT's hill-climbing is sideways flips: if there is no flip which increases the score, a variable is flipped which does not change the score. GSAT's performance degrades greatly without sideways flips.

Greediness

To study the importance of greediness, we introduced CSAT [Gent and Walsh, 1992], a cautious variant of GenSAT. In CSAT, hill-climb returns all variables which increase the score when flipped, or if there are no such variables, all variables which make no change to the score, or if there are none of these, all variables. Since we found no problem sets on which CSAT performed significantly worse than GSAT, we conjectured that greediness is not important [Gent and Walsh, 1992]. To test this conjecture, we introduce three new variants of GenSAT: TSAT, ISAT, and SSAT.

TSAT is timid, since hill-climb returns those variables which increase the score the least when flipped, or if there are no variables which increase the score, all variables which make no change, or if there are none of these, all variables. ISAT is indifferent to upwards and sideways flips, since hill-climb returns those variables which do not decrease the score when flipped, or if there are none of these, all variables.
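The GSAT instance of the GenSAT schema can be sketched in a few lines of Python. This is an illustrative reimplementation under our own conventions (DIMACS-style signed-integer literals), not the authors' code; the naive rescoring of every candidate flip is O(variables × clauses) per flip, whereas a practical implementation would maintain scores incrementally.

```python
import random

def gsat(clauses, n_vars, max_tries=10, max_flips=250):
    """GSAT: greedy random hill-climbing on the number of satisfied
    clauses. A clause is a list of non-zero ints: literal v means
    variable v is true, -v means variable v is false."""
    def score(assign):
        return sum(any(assign[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses)

    for _ in range(max_tries):
        # initial: a random truth assignment
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if score(assign) == len(clauses):
                return assign
            # hill-climb: variables whose flip gives the largest score
            # (this permits sideways flips when nothing improves)
            best_score, best_vars = -1, []
            for v in assign:
                assign[v] = not assign[v]
                s = score(assign)
                assign[v] = not assign[v]
                if s > best_score:
                    best_score, best_vars = s, [v]
                elif s == best_score:
                    best_vars.append(v)
            # pick: choose at random among the equally good flips
            v = random.choice(best_vars)
            assign[v] = not assign[v]
    return None
```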
SSAT, however, is a sideways-moving procedure, since hill-climb returns those variables which make no change to the score when flipped, or if there are no such variables, all those variables which increase the score, or if there are none of these, all variables.

We test these procedures on two types of problems: satisfiability encodings of the n-queens problem and random k-SAT. The n-queens problem is to place n queens on an n x n chessboard so that no two queens attack each other. Its encoding uses n^2 variables, each true iff a particular square is occupied by a queen. Problems in random k-SAT with N variables and L clauses are generated as follows: a random subset of size k of the N variables is selected for each clause, and each variable is made positive or negative with probability 1/2. For random 3-SAT the ratio L/N = 4.3 has been identified as giving problems which are particularly hard for Davis-Putnam and many other algorithms [Mitchell et al., 1992; Larrabee and Tsuji, 1992]. This ratio was also used in an earlier study of GSAT [Selman et al., 1992]. Since GenSAT variants typically do not determine unsatisfiability, unsatisfiable formulas were filtered out by the Davis-Putnam procedure.

In every experiment (unless explicitly mentioned otherwise) Max-flips was set to 5 times the number of variables and Max-tries to infinity. In Table 1, the figures for "Tries" are the average number of tries taken until success, while the figures for "Flips" give the average number of flips in successful tries only. The final two columns record the total number of flips (including unsuccessful tries) and their standard deviations. 1000 experiments were performed in each case, all of which were successful. To reduce variance, all experiments used the same randomly generated problems.

Table 1: Comparison of GSAT, TSAT, and ISAT

Problem          Proc  Tries   Flips   Total    s.d.
Random 50 vars   GSAT    —      93.8    1310    2200
                 TSAT    —      96.4    1180    2090
                 ISAT   6.35   127      1460    2560
Random 70 vars   GSAT  10.7    158      3550    6090
                 TSAT  10.2    161      3390    5980
                 ISAT  11.9    208      4030    7890
Random 100 vars  GSAT  25.7    261     12600   22800
                 TSAT  26.1    272     12800   22000
                 ISAT  34.6    327     17100   43200
6 queens         GSAT   2.14    65.0    271      267
                 TSAT   2.26    74.1    301      296
                 ISAT   2.22    78.8    298      310
8 queens         GSAT   1.18    84.5    141      170
                 TSAT   1.20   101      165      171
                 ISAT   1.21   112      178      173
16 queens        GSAT   1.03   253      288      251
                 TSAT   1.04   282      326      295
                 ISAT   1.02   339      365      226

The results in Table 1 confirm our conjecture that greediness is not important. Like cautious hill-climbing [Gent and Walsh, 1992], timid hill-climbing gives very similar performance to greedy hill-climbing. The differences between GSAT and TSAT are less than variances we have observed on problem sets of this size. ISAT does, however, perform significantly worse than GSAT. ISAT's performance falls off much more quickly as the problem size increases. We conjecture that as the problem size increases, the number of sideways flips offered increases, and these are typically poor moves compared to upwards flips. Combined with other heuristics, however, some of these sideways flips can be good flips to make, as we show in a later section where a variant of ISAT gives improved performance over GSAT.

As well as SSAT, we tried a variant of ISAT which is indifferent to flips which increase the score, leave it constant or decrease it by 1. Both this variant and SSAT failed to solve any of 25 random 3-SAT 50 variable problems in 999 tries. We therefore conclude that you need to perform some sort of hill-climbing.

Greediness has also been used in several local search procedures for the generation of start positions (eg. in a constraint satisfaction procedure [Minton et al., 1990], and in various algorithms for the n-queens problem [Sosić and Gu, 1991]). To investigate whether such initial greediness would be useful for satisfiability, we introduce a new variant of GenSAT called OSAT which is opportunistic in its generation of an initial truth assignment.
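The random k-SAT generation described above is straightforward to reproduce. The sketch below is our own code, using the same signed-literal convention as before; the filtering of unsatisfiable formulas by Davis-Putnam is not shown.

```python
import random

def random_ksat(n_vars, n_clauses, k=3):
    """Each clause: a random size-k subset of the N variables, each
    variable made positive or negative with probability 1/2."""
    return [[v if random.random() < 0.5 else -v
             for v in random.sample(range(1, n_vars + 1), k)]
            for _ in range(n_clauses)]

# the hard region for random 3-SAT: L/N = 4.3
problem = random_ksat(50, 215)
```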
In OSAT, the score function (number of satisfied clauses) is extended to partial truth assignments by ignoring unassigned variables. OSAT incrementally builds an initial truth assignment by considering the variables in a random order and picking those truth values which maximize the score; the use of a random order helps prevent any variable from dominating. In addition, if the score is identical for the assignment of a variable to true and false, a truth assignment is chosen at random. OSAT is identical to GSAT in all other respects. A comparison of OSAT and GSAT is given in Table 2. In this and subsequent tables, percentages give the total flips as a percentage of the comparable figure for GSAT, and standard deviations are omitted for reasons of space.

Table 2: Comparison of GSAT and OSAT

OSAT always takes fewer flips on average than GSAT on a successful try. OSAT also takes the same or slightly more tries than GSAT. The total number of flips performed by OSAT can therefore be slightly less than GSAT on the same problems. However, if we include the O(N) computation necessary to perform the greedy start, OSAT is nearly always slower than GSAT.

To conclude, our results confirm that greediness is neither important in hill-climbing nor in the generation of the initial start position. Any form of hill-climbing which prefers up or sideways moves over downwards moves (and does not prefer sideways over up moves) appears to work.

Randomness

GSAT uses randomness in generating the initial truth assignment and in picking which variable to flip when offered more than one. To explore the importance of such randomness, we introduced in [Gent and Walsh, 1992] three variants of GenSAT: FSAT, DSAT, and USAT. FSAT uses a fixed initial truth assignment but is otherwise identical to GSAT. DSAT picks between equally good variables to flip in a deterministic but fair way, whilst USAT picks between equally good variables to flip in a deterministic but unfair way.¹
On random k-SAT problems both USAT and FSAT performed poorly. DSAT, however, performed considerably better than GSAT (as well as [Gent and Walsh, 1992], see Figures 4 & 5). We therefore concluded that there is nothing essential about the randomness of picking in GSAT (although fairness is important) and that the initial truth assignment must vary from try to try.

¹A procedure is fair if it eventually picks any variable that is offered continually. USAT picks the least variable in a fixed ordering. DSAT picks variables in a cyclical order.

To explore whether the initial truth assignment can be varied deterministically, and to determine if randomness can be eliminated simultaneously from all parts of GenSAT, we introduce three new variants: NSAT, VSAT, and VDSAT. NSAT generates initial truth assignments in "numerical" order. That is, on the n-th try, the m-th variable in a truth assignment is set to true iff the m-th bit of the binary representation of n is 1. VSAT, by comparison, generates initial truth assignments to maximize the variability between successive assignments. On the first try, all variables are set to false. On the second try, all variables are set to true. On the third try, half the variables are set to true and half to false, and so on. See [Gent and Walsh, 1993] for details. Since this algorithm cycles through all possible truth assignments, VSAT is a complete decision procedure for SAT when Max-tries is set to 2^N. NSAT and VSAT are identical to GSAT in all other respects. VDSAT uses the same start function as VSAT and is identical to DSAT in all other respects. Unlike all previous variants, VDSAT is entirely deterministic.

As Table 3 demonstrates, NSAT's performance was very poor on 50 variable problems. Its performance on larger problems was even worse. We conjecture that this poor performance is a consequence of the lack of variability between successive initial truth assignments.
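NSAT's "numerical" start function is completely specified by the text and can be written directly; which end of the bit string counts as the first bit is our own choice (least significant bit here).

```python
def nsat_initial(n_vars, try_number):
    """NSAT's start: on the n-th try, variable m is true iff the m-th
    bit of the binary representation of n is 1 (bits taken from the
    least significant end, an assumption of this sketch)."""
    return [bool((try_number >> m) & 1) for m in range(n_vars)]
```

Successive tries typically differ only in the lowest-order bits (for even n, tries n and n + 1 differ in a single variable), which illustrates the lack of variability between successive initial assignments that the text blames for NSAT's poor performance.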
VSAT and VDSAT have initial truth assignments which vary much more than initial truth assignments in NSAT. VSAT's performance is very close to GSAT's. VDSAT performs very similarly to DSAT, and better than GSAT, on random problems. VDSAT's performance on the 16 queens problem is poor because VDSAT is entirely deterministic and the first try happens to fail.

Table 3: Comparison of NSAT, VSAT, and VDSAT (16 queens, VDSAT: Tries 2, Flips 296, Total 1576, 550%).

To conclude, randomness is neither important in the initial start position nor in picking between equally good variables. However, it is important that successive initial start positions vary on a large number of variables.

Memory

Information gathered during a run of GenSAT can be used to guide future search. For example, [Selman and Kautz, 1993] introduced a variant of GSAT in which a failed try is used to weight the emphasis given to clauses by the score function in future tries. They report that this technique enables GSAT to solve problems that it otherwise cannot solve.

In [Gent and Walsh, 1992] we introduced MSAT, which is like GSAT except that it uses memory to avoid making the same flip twice in a row except when given no other choice. MSAT showed improved performance over GSAT, particularly on the n-queens problem, although the improvement declines as problems grow larger. This is, of course, not the only way we can use memory of the earlier search. In this section we introduce HSAT and IHSAT. These variants of GenSAT use historical information to choose deterministically which variable to pick. When offered a choice of variables, HSAT always picks the one that was flipped longest ago (in the current try); if two variables are offered which have never been flipped in this try, an arbitrary (but fixed) ordering is used to choose between them. HSAT is otherwise like GSAT. IHSAT uses the same pick as HSAT but is indifferent like ISAT. Results for HSAT and IHSAT are summarised in Table 4.
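HSAT's pick function needs only a record of when each variable was last flipped in the current try. A minimal sketch, under our own data representation (a dict from variable to the index of its last flip, absent if never flipped):

```python
def hsat_pick(poss_flips, last_flip):
    """Pick the offered variable flipped longest ago in this try;
    variables never flipped yet come first, with ties broken by a
    fixed (numerical) ordering of the variables."""
    return min(poss_flips, key=lambda v: (last_flip.get(v, -1), v))

# inside the flip loop one would then record, after flipping v:
#   last_flip[v] = flip_counter
```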
Table 4: Comparison of HSAT and IHSAT

Problem          Proc    Tries   Flips   Total     %
Random 50 vars   HSAT     3.82    58.7    763     58%
                 IHSAT    3.38    96.0    690     53%
Random 70 vars   HSAT     4.93   101     1480     42%
                 IHSAT    3.84   165     1160     33%
Random 100 vars  HSAT     8.11   184     3740     30%
                 IHSAT    6.95   274     3250     26%
6 queens         HSAT     1.11    43.3     62.9   23%
                 IHSAT    1.08    55.8     70.6   26%
8 queens         HSAT     1.09    44.1     73.9   52%
                 IHSAT    1.08    66.2     90.5   64%
16 queens        HSAT     1.02   156      183     64%
                 IHSAT    1.02   220      245     85%

Both HSAT and IHSAT perform considerably better than GSAT. Indeed, both perform better than any previous variant of GenSAT. Many other variants of HSAT also perform very well (eg. HSAT with cautious hill-climbing, with timid hill-climbing, with VSAT's start function). Note also that, unlike MSAT, the improvement in performance does not appear to decline as the number of variables increases.

To conclude, memory of the current try can significantly improve the performance of many variants. In particular, picking variables based on the history of the try rather than randomly is one such improvement.

Running GenSAT

We have studied the behaviour of GenSAT as the functions initial, hill-climb, and pick are varied. However, we have not discussed the behaviour of GenSAT as we vary its explicit parameters, Max-tries and Max-flips. The setting of Max-tries is quite simple: it depends only on one's patience. Increasing Max-tries will increase one's chance of success.

The situation for Max-flips is different to that for Max-tries. Although increasing Max-flips increases the probability of success on a given try, it can decrease the probability of success in a given run time. To understand this fully it is helpful to review some features of GenSAT's search identified in [Gent and Walsh, 1992]. GenSAT's hill-climbing is initially dominated by increasing the number of satisfied clauses.
GSAT, for example, on random 3-SAT problems is typically able to climb for about 0.25N flips, where N is the number of variables in the problem, increasing the percentage of satisfied clauses from 87.5% (7/8 of the clauses are initially satisfied by a random assignment) to about 97%. From then on, there is little climbing; the vast majority of flips are sideways, neither increasing nor decreasing the score. Occasionally a flip can increase the score; on some tries, this happens often enough before Max-flips is reached that all the clauses are satisfied.

In Figure 1, we have plotted the percentage of problems solved against the total number of flips used by HSAT for 50 variable random problems, with Max-flips set to 150. The dotted lines represent the start of new tries. During the initial climbing phase almost no problems are solved: in fact, no problems were solved in less than 10 flips on the first try. Note that 10 is 0.2N, approximately the length of the initial climbing phase. This behaviour is repeated during each try: very few problems are solved during the first 10 flips of a try. After about 10 flips, there is a dramatic change in the gradient of the graph. There is now a significant chance of solving a problem with each flip. Again, this behaviour is repeated on each try. Finally, after about 100 flips of a given try, the gradient declines noticeably. From then on, there is a very small chance of solving a problem during the current try if it has not been solved already.

Figure 1: HSAT, Max-flips = 150 (percentage of 50 var / 215 clause problems solved against total flips).

Different values of Max-flips offer a trade-off between the unproductive initial phase at the start of a try and
To determine the optimal value, we have plotted in Figure 2 the average total number of flips used on 50 variable problems against integer values of Max-flips from 25 to 300 for HSAT, DSAT, and GSAT. For small values of Max-flips, not enough flips remain after the hill-climbing phase to give a high chance of success on each try. Each variant performs much the same. This is to be expected as each is performing the same (greedy) hill-climbing. The optimum value for Max- flips is about 60. Since this minimum is not very sharp, it is not, however, too important to find the exact op- timal value. For Max-flips larger than about 100, the later flips of most tries are unsuccessful and hence lead to wasted work. As Max-flips increases, the amount of wasted work increases almost linearly. For everything but small values of Max-flips, HSAT takes fewer flips than DSAT, which in turn takes fewer than GSAT. The type of picking performed thus seems to have a signi- ficant effect on the chance of success in a try if more than a few flips are needed. Average Total Flips Ih 2000 1600 1200 800 400 n Similar results are observed when the problem size is varied. In Figure 3, we have plotted the average tot’al number of flips used by HSAT on random problems against integer values of Max-flips for differing num- bers of variables N. The optimal value of Max-flips ap- pears to increase approximately as N2. Even with 100 variable random problems, the optimal value is only about 2N flips. Figure 3 also supports the claim made in [Selman et al., 19921 and [Gent and Walsh, 19921 that these hill-climbing procedures appear to scale bet- ter than conventional procedures like Davis-Putnam. To investigate more precisely how various GenSAT variants scale, Figure 4 gives the average total num- ber of flips used by GSAT, DSAT and HSAT on ran- dom problems against the number of variables N at 10 variable intervals. 
Although the average total flips increases rapidly with N, the rate of growth seems to be less than a simple exponential. In addition, the improvement in performance offered by HSAT over DSAT, and by DSAT over GSAT, increases greatly with N. One cause of variability in these results is that Max-flips is set to 5N and not its optimal value. In Figure 5 we have therefore plotted the optimal values of the average total flips against the number of variables at 10 variable intervals, using a log scale for clarity. The performances of GSAT, DSAT and HSAT in Figure 5 are consistent with a small (less than linear) exponential dependence on N. Note that the data does not rule out a polynomial dependency on N of about order 3. Further experimentation and a more complete theoretical understanding are needed to choose between these two interpretations. We can, however, observe (as do [Selman et al., 1992]) that these hill-climbing procedures have solved some large and difficult random 3-SAT problems well beyond the reach of conventional procedures. At worst, their behaviour appears to be exponential with a small exponent. Note again that HSAT offers a real performance advantage over GSAT and DSAT, not just a constant factor speed-up.

Figure 2: Varying Max-flips (average total flips for GSAT, DSAT, and HSAT on 50 var / 215 clause problems, Max-flips from 25 to 300).

Figure 3: Varying Max-flips and N (average total flips for HSAT, Max-flips from 0 to 5N, N = 30 to 70, L/N = 4.3).

Figure 4: Max-flips = 5N (average total flips for GSAT, DSAT, and HSAT against N).

Figure 5: Max-flips Optimal (optimal average total flips against N, log scale).

Related and Future Work

Hill-climbing search has been used in many different domains, both practical (eg. scheduling) and artificial (eg. 8-puzzle). Only recently, however, has hill-climbing been applied to SAT. Some of the first procedures to hill-climb on the number of satisfied clauses were proposed in [Gu, 1992].
Unfortunately, it is difficult to compare these procedures with GenSAT directly as they use different control structures.

These experiments have been performed with just two types of SAT problems: random k-SAT for k = 3 and L/N = 4.3, and an encoding of the n-queens problem. Although we expect that similar results would be obtained with other random and structured problem sets, we intend to confirm this conjecture experimentally. In particular, we would like to try other values of k and L/N, and other non-random problems (eg. blocks world planning encoded as SAT [Kautz and Selman, 1992], boolean induction, standard graph colouring problems encoded as SAT). To test problem sets with large numbers of variables, we intend to implement GenSAT on a Connection Machine. This will be an interesting exercise as GenSAT appears to be highly parallelizable.

One aspect of GenSAT that we have not probed in detail is the scoring function. The score function has always been the number of clauses satisfied. Since much of the search consists of sideways flips, this is perhaps a little insensitive. We therefore intend to investigate alternative score functions. Finally, we would like to develop a better theoretical understanding of these experimental results. Unfortunately, as with simulated annealing, we fear that such a theoretical analysis may be rather difficult to construct.

Conclusions

Recently, several local hill-climbing procedures for propositional satisfiability have been proposed [Selman et al., 1992; Gent and Walsh, 1992]. In [Gent and Walsh, 1992], we conjectured that neither greediness nor randomness was essential for the effectiveness of the hill-climbing in these procedures. By the introduction of some new variants, we have confirmed this conjecture. Any (random or fair deterministic) hill-climbing procedure which prefers up or sideways moves over downwards moves (and does not prefer sideways over up moves) appears to work.
In addition, we have shown that randomness is not essential for generating the initial start position, and that greediness here is actually counter-productive. We have also proposed a new variant, HSAT, which performs much better than previous procedures on our problem sets. Finally, we have studied in detail how the performance of these procedures depends on their parameters. At worst, our experimental evidence suggests that they scale with a small (less than linear) exponential dependence on the problem size. This supports the conjecture made in [Selman et al., 1992] that such procedures scale well and can be used to solve large and difficult SAT problems beyond the reach of conventional algorithms.

References

Cook, S.A. 1971. The complexity of theorem proving procedures. In Proceedings of the 3rd Annual ACM Symposium on the Theory of Computation. 151-158.
Gent, I. and Walsh, T. 1992. The Enigma of SAT Hill-climbing Procedures. Research Paper 605, Dept. of AI, University of Edinburgh.
Gent, I. and Walsh, T. 1993. Towards an Understanding of Hill-climbing Procedures for SAT. Research Paper, Dept. of AI, University of Edinburgh.
Gu, J. 1992. Efficient local search for very large-scale satisfiability problems. SIGART Bulletin 3(1):8-12.
Kautz, H.A. and Selman, B. 1992. Planning as Satisfiability. In Proceedings of the 10th ECAI. 359-363.
Larrabee, T. and Tsuji, Y. 1992. Evidence for a Satisfiability Threshold for Random 3CNF Formulas. Technical Report UCSC-CRL-92-42, UC Santa Cruz.
Minton, S.; Johnston, M.; Philips, A.; and Laird, P. 1990. Solving large-scale constraint satisfaction and scheduling problems using a heuristic repair method. In Proc. 8th National Conference on AI. 17-24.
Mitchell, D.; Selman, B.; and Levesque, H. 1992. Hard and easy distributions of SAT problems. In Proc. 10th National Conference on AI. 459-465.
Reiter, R. and Mackworth, A. 1989. A logical framework for depiction and image interpretation.
Artificial Intelligence 41(3):123-155.
Selman, B. and Kautz, H. 1993. Domain-independent extensions to GSAT. Technical report, AI Principles Research, AT&T Bell Laboratories, Murray Hill, NJ.
Selman, B.; Levesque, H.; and Mitchell, D. 1992. A New Method for Solving Hard Satisfiability Problems. In Proc. 10th National Conference on AI. 440-446.
Sosić, R. and Gu, J. 1991. Fast search algorithms for the N-queens problem. IEEE Transactions on Systems, Man, and Cybernetics 21(6):1572-1576.
Integrating Heuristics for Constraint Satisfaction Problems: A Case Study

Steven Minton
Sterling Software
NASA Ames Research Center, M.S. 269-2
Moffett Field, CA 94035-1000
minton@ptolemy.arc.nasa.gov

Abstract

This paper describes a set of experiments with a system that synthesizes constraint satisfaction programs. The system, MULTI-TAC, is a CSP "expert" that can specialize a library of generic algorithms and methods for a particular application. MULTI-TAC not only proposes domain-specific versions of its generic heuristics, but also searches for the best combination of these heuristics and integrates them into a complete problem-specific program. We demonstrate MULTI-TAC's capabilities on a combinatorial problem, "Minimum Maximal Matching", and show that MULTI-TAC can synthesize programs for this problem that are on par with hand-coded programs. In synthesizing a program, MULTI-TAC bases its choice of heuristics on the instance distribution, and we show that this capability has a significant impact on the results.

Introduction

AI research on constraint satisfaction has primarily focused on developing new heuristic methods. Invariably, in pursuing new techniques, a tension arises between efficiency and generality. Although efficiency can be gained by designing very specific sorts of heuristics, there is little to be gained scientifically from devising ever more specialized methods. One attractive option is to devise generic algorithms that can be made efficient by incorporating additional information. A good example of this is AC-5 [Van Hentenryck et al., 1992a], a generic arc-consistency method that can be specialized for functional, anti-functional or monotonic constraints to yield a very efficient algorithm.

The idea of specializing generic algorithms for a particular application can be carried much further.
In this paper we evaluate a system, MULTI-TAC (Multi-Tactic Analytic Compiler), that can specialize a library of generic algorithms and heuristics to synthesize programs for constraint-satisfaction problems (CSPs). MULTI-TAC not only proposes specialized versions of its generic heuristics, but also searches for a good combination of these heuristics, and integrates them into a complete application-specific search program.

The issues we explore are quite different from those traditionally investigated in the CSP paradigm; our focus is on the specialization and integration of well-known heuristics for a given application rather than the development of new heuristics. For instance, few authors have explicitly considered that many CSP applications require solving repetitive instances of a problem (such as scheduling a factory) and are not "1-shot" problems. However, in our work we take this into account when selecting heuristics because, as we will show, the relative utility of different heuristics can depend on the population of instances encountered.

We begin this paper with an overview of the MULTI-TAC system. We then present a case study showing the system's performance on a particular problem and illustrating the utility of automatically tailoring a program to an instance distribution.

The Problem

The input to MULTI-TAC consists of a problem specification and an instance generator for that problem. The instance generator serves as a "black box" that generates instances according to some distribution. The system is designed for a scenario where some combinatorial search problem must be solved routinely, such as a staff scheduling application where each week jobs are assigned to a set of workers.

MULTI-TAC outputs a Lisp program that is tailored to the particular problem and the instance distribution. The objective is to produce as efficient a program as possible for the instance population.
In practice, our goal is to do as well as competent programmers, as opposed to algorithms experts. Achieving this level of performance on a wide variety of problems could be quite useful; there are many relatively simple applications that are not automated because programming time is expensive.

For our case study we chose an NP-complete problem, "Minimum Maximal Matching" (MMM), described in [Garey and Johnson, 1979]. The problem is simple to specify, but interesting enough to illustrate the system's performance. An instance of MMM consists of a graph G = (V, E) and an integer K ≤ |E|. The problem is to determine whether there is a subset E' ⊆ E with |E'| ≤ K such that no two edges in E' share a common endpoint and every edge in E - E' shares a common endpoint with some edge in E'.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

  (declare-parameter 'K 2)
  (declare-type-size 'edge 7)
  (declare-type-size 'vertex 5)
  (declare-relation-data
    '((endpoint edge0 vertex0) (endpoint edge0 vertex1)
      (endpoint edge1 vertex0) (endpoint edge1 vertex3)...))

Figure 1: An instance of MMM with K = 2. A solution E' = {edge2, edge5} is indicated in boldface. The instance specification is on the right. (Graph drawing not recoverable.)

In order to present a problem to MULTI-TAC, it must be formalized as a CSP. Our CSP specification language is relatively expressive. Variables have an integer range. A problem specification defines a set of types (e.g., vertex and edge) and a set of constraints described in a predicate logic. An instance specification (see Figure 1) instantiates the types and relations referred to in the problem description.

To formulate MMM as a CSP, we employ a set of boolean variables, one for each edge in the graph. If a variable is assigned the value 1, this indicates the corresponding edge is a member of the subset E'. A value of 0 indicates the corresponding edge is not a member of E'. Figure 2 shows how the constraints are stated in the MULTI-TAC problem specification.

  (iff (satisfies Edgei Val)
       (and (implies (equal Val 1)
                     (forall Vrtx suchthat (endpoint Edgei Vrtx)
                             (forall Edgej suchthat (endpoint Edgej Vrtx)
                                     (or (equal Edgej Edgei)
                                         (assigned Edgej 0)))))
            (implies (equal Val 0)
                     (exists Vrtx suchthat (endpoint Edgei Vrtx)
                             (exists Edgej suchthat (endpoint Edgej Vrtx)
                                     (and (not (equal Edgej Edgei))
                                          (assigned Edgej 1)))))
            (exists Solset (set-of Edge suchthat (assigned Edge 1))
                    (exists Solsize suchthat (cardinality Solset Solsize)
                            (leq Solsize K)))))

Figure 2: Description of MMM Constraints

The statement specifies the conditions under which a value Val satisfies the constraints on a variable Edgei. We can paraphrase these conditions as follows:

1. If Val equals 1, then for every edgej such that edgej shares a common endpoint with edgei, it must be the case that edgej is assigned the value 0.

2. If Val equals 0, then there must exist an edgej such that edgei and edgej share a common endpoint, and edgej is assigned the value 1.

3. The cardinality of the set of edges assigned the value 1 must be less than or equal to K.

The constraint language includes two types of relations: problem-specific user-defined relations such as endpoint, and built-in system-defined relations, including assigned, equal, leq, and cardinality. The assigned relation has special significance since the system uses it to maintain its state. During the search for a solution, variables are assigned values. Thus, for every variable Var there is at most one value Val such that (assigned Var Val) is true at any time. A solution consists of an assignment for each variable such that the constraints are satisfied.

MULTI-TAC's specification language enables us to represent a wide variety of combinatorial problems, including many scheduling problems and graph problems. There are, of course, limitations imposed by the specification language.
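The three paraphrased MMM conditions translate directly into a checker for complete assignments. The sketch below is illustrative only; the data layout and helper names are our assumptions, not MULTI-TAC's internal representation.

```python
def satisfies_mmm(edges, assign, K):
    """Check the three MMM constraints for a complete 0/1 assignment.
    `edges` maps an edge name to its (u, v) endpoints;
    `assign` maps each edge name to 0 or 1 (1 means the edge is in E')."""
    def adjacent(e, f):
        # two distinct edges are adjacent iff they share an endpoint
        return e != f and bool(set(edges[e]) & set(edges[f]))

    for e, val in assign.items():
        neighbours = [f for f in edges if adjacent(e, f)]
        if val == 1:
            # condition 1: every adjacent edge must be assigned 0
            if any(assign[f] != 0 for f in neighbours):
                return False
        else:
            # condition 2: some adjacent edge must be assigned 1
            if not any(assign[f] == 1 for f in neighbours):
                return False
    # condition 3: at most K edges are assigned 1
    return sum(assign.values()) <= K
```

On the path graph a-b-c-d, for instance, assigning 1 only to the middle edge satisfies all three conditions with K = 1, while the all-zero assignment violates condition 2.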
Currently only decision problems are specifiable, since one cannot state "optimization criteria". (We plan to include this in the future.) Furthermore, the system's expertise is limited in many respects by the pre-defined relations. So, for example, geometric concepts such as "planarity" present difficulties.

The MULTI-TAC System

MULTI-TAC rests on the supposition that intelligent problem solving can result from combining a variety of relatively simple heuristics. At the top level, the synthesis process is organized by an "algorithm schema" into which the heuristics are incorporated. Currently only a backtracking schema is implemented (although we also plan to include an iterative repair [Minton et al., 1992] schema), so the remainder of the paper assumes a backtracking search. As in the standard backtracking CSP search [Kumar, 1992], variables are instantiated sequentially. Backtracking occurs when no value satisfies the constraints on a variable.

Associated with an algorithm schema are a variety of generic heuristics. For backtracking, these can be divided roughly into two types: heuristics for variable/value selection and heuristics for representing and propagating information. Let us consider variable and value selection first. Below we list several of the generic variable/value ordering heuristics used by MULTI-TAC:

- Most-Constrained-Variable-First: This variable ordering heuristic prefers the variable with the fewest possible values left.
- Most-Constraining-Variable-First: A related variable ordering heuristic, this prefers variables that constrain the most other variables.
- Least-Constraining-Value-First: A value-ordering heuristic, this prefers values that constrain the fewest other variables.
- Dependency-directed backtracking: If a value choice is independent of a failure, backtrack over that choice without trying alternatives.
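The ordering heuristics above are essentially scoring functions over variables. A minimal sketch of applying one with a second as tie-breaker (the function name and data structures are assumptions for illustration, not MULTI-TAC code):

```python
def select_variable(unassigned, domains, constrains):
    """Pick the next CSP variable: Most-Constrained-Variable-First
    (fewest remaining values), breaking ties with
    Most-Constraining-Variable-First (constrains the most other variables).
    `domains[v]` is v's list of remaining values; `constrains[v]` lists
    the other variables that v constrains."""
    return min(unassigned,
               key=lambda v: (len(domains[v]),        # fewest values first
                              -len(constrains[v])))   # then most constraining
```

For example, a variable with a single remaining value is always chosen first; among variables tied on domain size, the one constraining the most others wins.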
Specialized versions of these heuristics are generated by refining a meta-level description of the heuristic, as described in [Minton, 1993]. The refinement process incorporates information from the problem description in a manner similar to partial evaluation. Alternative approximations are generated by dropping conditions. For example, one of the specializations of Most-Constraining-Variable-First that MULTI-TAC produces for MMM is: Prefer the edge that has the most adjacent, unassigned edges. An approximation of this is: Prefer the edge that has the most adjacent edges.

MULTI-TAC also employs heuristic mechanisms for maintaining and propagating information during search. These include:

- Constraint propagation: MULTI-TAC selects whether or not to use forward checking. If forward checking is used, then for each unassigned variable the system maintains a list of possible values - the values that currently satisfy the constraints on the variable.
- Data structure selection: MULTI-TAC chooses how to represent user-defined relations. For example, it chooses between list and array representations.
- Constraint simplification: The problem constraints are rewritten (as in [Smith, 1991] and [Minton, 1988]) so they can be tested more efficiently.
- Predicate invention: MULTI-TAC can "invent" new relations in order to rewrite the constraints, using test incorporation [Braudaway and Tong, 1989; Dietterich and Bennett, 1986] and finite-differencing [Smith, 1991]. For example, in MMM the system can decide to maintain a relation over edges which share a common endpoint (i.e., the "adjacent" relation).

We are currently working on a variety of additional heuristic mechanisms, such as identifying semi-independent subproblems.

Compiling a Target Program

The specialization and approximation processes may produce many candidate heuristics that can be incorporated into the algorithm schema.
For MMM, MULTI-TAC generates 52 specialized variable- and value-selection heuristics. In order to find the best combination of candidate heuristics, MULTI-TAC's evaluation module searches through a space of configurations. A configuration consists of an algorithm schema together with a list of heuristics. Multiple heuristics of the same type are prioritized by their order on the list. For example, if two variable-ordering heuristics are included, then the first heuristic has higher priority; the second heuristic is used only as a tie-breaker when the first heuristic does not determine a unique "most-preferred" candidate. A configuration can be directly compiled into a Lisp target program by the MULTI-TAC code generator and then tested on a collection of instances.

As mentioned earlier, MULTI-TAC's objective is to find the most efficient target program for the instance distribution. For the experiments reported here, "most efficient" is defined as the program that solves the most instances given some fixed time bound per instance. If two programs solve the same number of instances, the one that requires the least total time is preferred. Although this is a simplistic criterion, it is sufficient for our purposes.

The evaluation module carries out a parallel hill-climbing search (essentially a beam search) through the space of configurations, using a set of training instances, a per-instance time bound, and an integer B, the "beam width". The evaluator first tests each heuristic individually by compiling a program with only the single heuristic; the system records the number of problems solved within the time bound and the total time for the training instances solved. The top B configurations are kept. For each of these configurations, the system then evaluates all configurations produced by appending a second heuristic, and so on, until either the system finds that none of the best B configurations can be improved, or the process is manually interrupted.
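The evaluation loop just described can be sketched as a beam search over configurations. This is an illustrative reconstruction under the simplifying assumption that `evaluate` returns a single comparable score (higher is better); the paper's actual criterion combines instances solved and total time.

```python
def beam_search(heuristics, evaluate, B):
    """Grow configurations (ordered lists of heuristics) greedily:
    keep the top-B configurations, extend each by appending one more
    heuristic, and stop when no extension beats the current best."""
    # start with every single-heuristic configuration, keep the top B
    beam = sorted(([h] for h in heuristics), key=evaluate, reverse=True)[:B]
    while True:
        candidates = [cfg + [h] for cfg in beam for h in heuristics
                      if h not in cfg]
        extended = sorted(candidates, key=evaluate, reverse=True)[:B]
        if not extended or evaluate(extended[0]) <= evaluate(beam[0]):
            return beam[0]
        beam = extended
```

With a synthetic score table where the pair (a, b) outscores any single heuristic but adding c hurts, the search stops at the two-heuristic configuration, mirroring the "none of the best B configurations can be improved" termination condition.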
In our experiments with MMM, the search took anywhere from one hour to several hours, depending on B, the time limit, and the difficulty of the training instances.

One drawback to this scheme is that occasionally two heuristics may interact synergistically, even though they are individually quite poor, and MULTI-TAC's evaluator will never discover such pairs. This is a general problem for hill-climbing utility-evaluation schemes [Minton, 1988; Gratch and DeJong, 1992], which Gratch calls the composability problem [Gratch and DeJong, 1991]. To overcome this, we have experimented with a preprocessing stage in which MULTI-TAC evaluates pairs of heuristics in order to find any positive interactions, and then groups these pairs together during the hill-climbing search. Unfortunately, this can be time-consuming if there are several hundred candidate heuristics. We believe that meta-knowledge can be used to control the search, so that only the pairs most likely to interact synergistically are evaluated.

Experimental Results

This section describes a set of experiments illustrating the performance of our current implementation, MULTI-TAC 1.0, on the MMM problem and contrasting the synthesized programs to programs written by NASA computer scientists. In our initial experiment, two computer scientists participated, one a MULTI-TAC project member (PM) and the other a Ph.D. working on an unrelated project (Subject1). We also evaluated a simple, unoptimized CSP engine for comparative purposes.

The subjects were asked to write the fastest programs they could. Over a several-day period Subject1 spent about 5 hours and PM spent 8 hours working on their respective programs. The subjects were given access to a "black box" instance generator. The instance generator randomly constructed solvable instances (with approximately 50 edges) by first generating E' and then adding edges.
MULTI-TAC employed the same instance generator, and required approximately 1.5 hours to find its "best" configuration.

Table 1, Experiment 1, shows the results on 100 randomly-generated instances of the MMM problem, with a 10-CPU-second time bound per instance. The first column shows the cumulative running time for all 100 instances and the second column shows the number of unsolved problems. The results indicate that PM (the project member) wrote the best program, followed closely by MULTI-TAC and then by Subject1. The unoptimized CSP program was by far the least efficient. These conclusions regarding the relative efficiencies of the four programs can be justified statistically using the methodology proposed by Etzioni and Etzioni [1993]. Specifically, any pairwise comparison of the four programs' completion times using a simple sign test is statistically significant with p ≤ .05. In fact, we note that for the rest of the experiments summarized in Table 1, a similar comparison between MULTI-TAC's program and any of the other programs is significant with p ≤ .05, so we will forgo further mention of statistical significance in this section.

We also tried the same experiment on another problem selected from [Garey and Johnson, 1979], K-Closure, with very similar results (see [Minton, 1993]). We were encouraged by our experiments with these two problems, especially since MULTI-TAC performed well in comparison to our subjects. It is easy to write a learning system that can improve its efficiency; it is more difficult to write a learning system that performs well when compared to hand-coded programs.¹
(Published MMM algorithms would provide a further basis for comparison, but we are not aware of any such algorithms.) It is also notable that none of the MULTI-TAC project members was familiar with either MMM or K-Closure prior to the experiments. Problems that are "novel" to a system's designers are much more interesting benchmarks than problems for which the system was targeted [Minton, 1988].

¹ However, we should note that MMM and K-Closure were not "randomly" chosen from [Garey and Johnson, 1979], but were selected because they could be easily specified in MULTI-TAC's CSP language and because they appeared amenable to a backtracking approach.

One of the interesting aspects of the experiment is that it demonstrates the critical importance of program-optimization expertise. Consider Subject1's algorithm shown in Figure 3 (details omitted). The recursive procedure takes three arguments: the edges in the subset E', the set of remaining edges E - E', and the parameter K.

  Outline of Subject1's algorithm:
    Procedure Solve(E', EdgesLeft, K)
      if the cardinality of E' is greater than K
        return failure
      else if EdgesLeft = ∅
        return solution
      else for each edge e in EdgesLeft
        if Solve(E' ∪ {e}, EdgesLeft - {e} - {AdjcntEdges e}, K)
          return solution
        finally return failure

  Outline of PM's algorithm:
    Procedure Solve(E', SortedEdgesLeft, SolSize, K)
      if SortedEdgesLeft = ∅
        return solution
      else if SolSize = K
        return failure
      else for each edge e in SortedEdgesLeft
        SortedEdgesLeft ← SortedEdgesLeft - {e}
        if Solve(E' ∪ {e}, SortedEdgesLeft - {AdjcntEdges e}, 1 + SolSize, K)
          return solution
        finally return failure

Figure 3: Subject1's algorithm and PM's algorithm

The algorithm adds edges to E' until either the cardinality of E' exceeds K or a solution is found.
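Subject1's outline in Figure 3 can be transcribed into a short runnable form. The sketch below assumes an `adjacent` map from each edge to the set of edges sharing an endpoint with it; the helper names are ours, not from the paper.

```python
def solve_mmm(edges, adjacent, K):
    """Transcription of Subject1's outline: grow a matching E' edge by
    edge, removing each chosen edge and its neighbours from EdgesLeft,
    until every edge is either in E' or adjacent to it.
    Returns a maximal matching of size <= K (as a set), or None."""
    def solve(Eprime, edges_left):
        if len(Eprime) > K:
            return None            # cardinality of E' exceeds K: fail
        if not edges_left:
            return Eprime          # every edge is in E' or covered by it
        for e in edges_left:
            # adding e keeps E' a matching, since e's neighbours
            # were removed from edges_left whenever a neighbour was chosen
            result = solve(Eprime | {e}, edges_left - {e} - adjacent[e])
            if result is not None:
                return result
        return None
    return solve(frozenset(), frozenset(edges))
```

On the path graph with edges e0-e1-e2 (e1 adjacent to both others), the only maximal matching of size 1 is {e1}, and no maximal matching of size 0 exists.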
PM's algorithm, also shown in Figure 3, improves upon Subject1's algorithm in several respects:

- Pre-sorted edges: The most significant improvement involves sorting the edges in a preprocessing phase, so that edges with the most adjacent edges are considered first. Interestingly, Subject1 reported trying this as well, but apparently he did not experiment sufficiently to realize its utility.
- No redundancy: Once PM's algorithm considers adding an edge to E', it will not reconsider that edge in subsequent recursive calls. (This is a source of redundancy in Subject1's program. For example, in Subject1's program, if the first edge in EdgesLeft fails it may be reconsidered on each recursive call.)
- Size of E' incrementally maintained: PM's program uses the counter SolSize to incrementally track the cardinality of E', rather than recalculating it on each recursive call.
- Efficiently updating SortedEdgesLeft: Although Figure 3 does not show it, PM used a bit vector to represent SortedEdgesLeft. This vector can be efficiently updated via a bitwise-or operation with a stored adjacency matrix.
- Early failure: PM's program backtracks as soon as E' is of size K, which is one level earlier than Subject1's program.

[Table 1: Experimental results for three distributions - data not recoverable]

MULTI-TAC's program behaves similarly to the hand-coded programs in many respects. The program iterates through the edges, assigning a value to each edge and backtracking when necessary. In synthesizing the program, MULTI-TAC selected the variable-ordering heuristic: "prefer edges with the most neighbors". This is a specialization of "Most-Constraining-Variable-First". The system also selected a value-ordering rule: "try 1 before 0", so that in effect, the program tries to add edges to E'. This rule is a specialization of "Least-Constraining-Value-First", since the value 1 is least-constraining with respect to the second constraint (see Section 2).

MULTI-TAC also included many of the most significant features of PM's algorithm:

- Pre-sorted edges: MULTI-TAC's code generator determined that the variable-ordering heuristic is static, i.e., independent of variable assignments. Therefore, the edges are pre-sorted according to the heuristic, so that edges with the most adjacent edges are considered first.
- No redundancy: MULTI-TAC's program is free of redundancy simply as a result of the backtracking CSP formalization.
- Size of E' incrementally maintained: This was accomplished by finite-differencing the third constraint (which specifies that |E'| ≤ K).

Whereas the other two programs are recursive, MULTI-TAC's program is iterative, which is typically more efficient in Lisp. There are additional minor efficiency considerations, but space limitations prevent their discussion.

A followup study was designed with a different volunteer, another NASA computer scientist (Subject2). We intended to make the distribution harder, but unfortunately, we modified the instance generator rather naively - we simply made the instances about twice as large, and only discovered later that the instances were actually not much harder. Table 1, Experiment 2, shows the results, again for 100 instances each with a ten-CPU-second time limit. The program submitted by PM was the same as that in Experiment 1, since PM found that his program also ran quickly on this distribution, and he didn't think any further improvements would be significant. MULTI-TAC also ended up with essentially the same program as in Experiment 1. Our second subject rather quickly (3-4 hours) developed a program which performed quite well. The program was similar to PM's program but simpler; the main optimizations were the same, except that no bit array representation was used. Surprisingly, Subject2's program was the fastest on this experiment, and MULTI-TAC's program finished second.
PM's program was slower than the others, apparently because it copied the state inefficiently, an important factor with this distribution because of the larger instance size.

Finally, we conducted a third experiment, this time being careful to ensure that the distribution was indeed harder. We found (empirically) that the instances were more difficult when the proportion of edges to nodes was decreased, so we modified the instance generator accordingly. The results for 100 instances, this time with a 45-second time bound per instance, are shown in Table 1, Experiment 3. Our third subject spent about 8 hours total on the task. His best program used heuristic iterative repair [Minton et al., 1992], rather than backtracking. The edges in E' are kept in a queue. Let us say that an edge is covered if it is adjacent to any edge in E'. If E' is not a legal solution, then the last edge in the queue is removed. The program selects a new edge that is adjacent to the most uncovered edges (and not adjacent to any edge in E') and puts it at the front of the queue.

PM spent approximately 4 hours modifying his program for this distribution. The modified program uses iteration rather than recursion, and instead of presorting the edges, on each iteration the program selects the edge that has the most adjacent uncovered edges and adds it to E'. Other minor changes were also included.

For this distribution, MULTI-TAC's program is, interestingly, quite different from that of the previous distributions. MULTI-TAC elected to order its values so that 0 is tried before 1. Essentially, this means that when the program considers an edge, it first tries assigning it so it is not in E'. Thus, we can view this program as incrementally selecting edges to include in the set E - E'.

                      Experiment 2        Experiment 3
                      secs    unsolved    secs    unsolved
  MULTI-TAC prgrm2    27.8    1           1527    22
  MULTI-TAC prgrm3    804     69          449     7

Table 2: Results illustrating distribution sensitivity
For variable ordering, the following three rules are used:

1. Prefer edges that have no adjacent edges along one endpoint. Since this rule is static, the edges can be pre-sorted according to this criterion.

2. Break ties by preferring edges with the most endpoints such that all edges connected via those endpoints are assigned. (I.e., an edge is preferred if all the adjacent edges along one endpoint are assigned, or even better, if all adjacent edges along both endpoints are assigned.)

3. If there are still ties, prefer an edge with the fewest adjacent edges.

Each of the above heuristics is an approximation of "Most-Constrained-Variable-First". Intuitively speaking, these rules appear to prefer edges whose value is completely constrained, or edges that are unlikely to be in E' (which makes sense given the value-ordering rule). As shown in Table 1, MULTI-TAC's program solved the same number of problems as Subject3's program, and had by far the best run time in this experiment. Interestingly, MULTI-TAC also synthesized a configuration similar to PM's program, which was rejected during the evaluation stage.

One of the interesting aspects of this experiment is that none of our human subjects came up with an algorithm similar to MULTI-TAC's. Indeed, MULTI-TAC's algorithm initially seemed rather mysterious to the author and the other project members. In retrospect the algorithm seems sensible, and we can explain its success as follows. In a depth-first search, if a choice is wrong, then the system will have to backtrack over the entire subtree below that choice before it finds a solution. Thus the most critical choices are the early choices - the choices made at a shallow depth. We believe that MULTI-TAC's algorithm for the third distribution is successful because at the beginning of the search its variable-ordering heuristics can identify edges that are very unlikely to be included in E'. We note that the graphs in the third distribution are more sparse than the graphs in the other two distributions; so, for instance, the first rule listed above is more likely to be relevant with the third distribution.

An underlying assumption of this work is that tailoring programs to a distribution is useful. This is supported by Table 2, which shows that MULTI-TAC's program2, which was synthesized for the instances in the second experiment, performs poorly on the instances from the third experiment, and vice versa. (The hand-coded programs show the same trends, but not as strongly.) This appears to be due to two factors. First, there is a relationship between heuristic power and evaluation cost. The programs tailored to the easy distribution employ heuristics that are relatively inexpensive to apply, but less useful in directing search. For example, pre-sorting the edges according to the number of adjacent edges is less expensive than picking the edge with the most adjacent uncovered edges on each iteration, but the latter has a greater payoff on harder problems. Second, some heuristics may be qualitatively tailored to a distribution, in that their advice might actually mislead the system on another distribution. There is some evidence of this in our experiments. For example, the third variable-ordering rule used by MULTI-TAC in Experiment 3 degrades the program's search on the second distribution! The program actually searches fewer nodes when the rule is left out.

Discussion

MULTI-TAC's program synthesis techniques were motivated by work in automated software design, most notably Smith's KIDS system [Smith, 1991], and related work on knowledge compilation [Mostow, 1991; Tong, 1991] and analytical learning [Minton et al., 1989; Etzioni, 1990]. There are also a variety of proposed frameworks for efficiently solving CSP problems that are related, although less directly, including work on constraint languages [Guesgen, 1991; Lauriere, 1978; Van Hentenryck et al., 1992b], constraint abstraction [Ellman, 1993], and learning [Day, 1991]. Perhaps the closest work in the CSP area is Yoshikowa and Wada's [1992] approach for automatically generating "multi-dimensional CSP" programs. However, this work does not deal with the scope of heuristic optimizations we deal with. In addition, we know of no previous CSP systems that employ learning techniques to synthesize distribution-specific programs.

In the future, we hope to use the system for real applications, and also to characterize the class of problems for which MULTI-TAC is effective. Currently we have evidence that MULTI-TAC performs on par with human programmers on two problems, MMM and K-Closure. (MULTI-TAC is also capable of synthesizing the very effective Brelaz algorithm [Turner, 1988] for graph-coloring.) However, we do not yet have a good characterization of MULTI-TAC's generality, and indeed it is unclear how to achieve this, short of testing the program on a variety of problems as in [Lauriere, 1978].

This study has demonstrated that automatically specializing heuristics is a viable approach for synthesizing CSP programs. We have also shown that the utility of heuristics can be sensitive to the distribution of instances. Since humans may not have the patience to experiment with different combinations of heuristics, these results suggest that the synthesis of application-specific heuristic programs is a promising direction for AI research.
In our experiments, we also saw that MULTI-TAC can be "creative", in that the system can take combinatorial problems that are unfamiliar to its designers and produce interesting, and in some respects unanticipated, heuristic programs for solving those problems. This is purely a result of the system's ability to specialize and combine a set of simple, generic building blocks. By extending the set of generic mechanisms we hope to produce a very effective and general system. At the same time, we plan to explore the issues of organization and tractability that arise in an integrated architecture. Acknowledgements I am indebted to several colleagues for their contributions to MULTI-TAC. Jim Blythe helped devise the specialization theories and search control mechanism, Gene Davis worked on the original CSP engine and language, Andy Philips co-developed the code generator, Ian Underwood developed and refined the utility evaluator, and Shawn Wolfe helped develop the simplifier and the utility evaluator. Furthermore, Ian and Shawn did much of the work running the experiments reported in this paper. Thanks also to my colleagues at NASA who volunteered for our experiments, to Oren Etzioni for his advice, and to Bernadette Kowalski-Minton for her help revising this paper. References Braudaway, W. and Tong, C. 1989. Automated synthesis of constrained generators. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence. Day, D.S. 1991. Learning variable descriptors for applying heuristics across CSP problems. In Proceedings of the Machine Learning Workshop. Dietterich, T.G. and Bennett, J.S. 1986. The test incorporation theory of problem solving. In Proceedings of the Workshop on Knowledge Compilation. Ellman, T. 1993. Abstraction via approximate symmetry. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence. Etzioni, O. and Etzioni, R. 1993.
Statistical methods for analyzing speedup learning experiments. Machine Learning. Forthcoming. Etzioni, O. 1990. A Structural Theory of Explanation-Based Learning. Ph.D. Dissertation, School of Computer Science, Carnegie Mellon University. Garey, M.R. and Johnson, D.S. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman and Co. Gratch, J. and DeJong, G. 1991. A hybrid approach to guaranteed effective control strategies. In Proceedings of the Eighth International Machine Learning Workshop. Gratch, J. and DeJong, G. 1992. An analysis of learning to plan as a search problem. In Proceedings of the Ninth International Machine Learning Conference. Guesgen, H.W. 1991. A universal programming language. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence. Kumar, V. 1992. Algorithms for constraint satisfaction problems. AI Magazine 13. Lauriere, J.L. 1978. A language and a program for stating and solving combinatorial problems. Artificial Intelligence 10:29-127. Minton, S.; Carbonell, J.G.; Knoblock, C.A.; Kuokka, D.R.; Etzioni, O.; and Gil, Y. 1989. Explanation-based learning: A problem solving perspective. Artificial Intelligence 40:63-118. Minton, S.; Johnston, M.; Philips, A.B.; and Laird, P. 1992. Minimizing conflicts: A heuristic repair method for constraint satisfaction and scheduling problems. Artificial Intelligence 58:161-205. Minton, S. 1988. Learning Search Control Knowledge: An Explanation-Based Approach. Kluwer Academic Publishers. Minton, S. 1993. An analytic learning system for specializing heuristics. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence. Mostow, J. 1991. A transformational approach to knowledge compilation. In Lowry, M.R. and McCartney, R.D., editors, Automating Software Design. AAAI Press. Smith, D.R. 1991. KIDS: A knowledge-based software development system. In Lowry, M.R.
and McCartney, R.D., editors, Automating Software Design. AAAI Press. Tong, C. 1991. A divide and conquer approach to knowledge compilation. In Lowry, M.R. and McCartney, R.D., editors, Automating Software Design. AAAI Press. Turner, J.S. 1988. Almost all k-colorable graphs are easy to color. Journal of Algorithms 9:63-82. Van Hentenryck, P.; Deville, Y.; and Teng, C-M. 1992a. A generic arc-consistency algorithm and its specializations. Artificial Intelligence 57:291-321. Van Hentenryck, P.; Simonis, H.; and Dincbas, M. 1992b. Constraint satisfaction using constraint logic programming. Artificial Intelligence 58:113-159. Yoshikawa, M. and Wada, S. 1992. Constraint satisfaction with multi-dimensional domain. In Proceedings of the First International Conference on Planning Systems. 126 Minton | 1993 | 20 |
1,344 | Coping With Disjunctions in Temporal Constraint Satisfaction Problems Eddie Schwalb, Rina Dechter Department of Information and Computer Science University of California at Irvine, CA 92717 eschwalb@ics.uci.edu, dechter@ics.uci.edu Abstract Path-consistency algorithms, which are polynomial for discrete problems, are exponential when applied to problems involving quantitative temporal information. The source of complexity stems from specifying relationships between pairs of time points as disjunctions of intervals. We propose a polynomial algorithm, called ULT, that approximates path-consistency in Temporal Constraint Satisfaction Problems (TCSPs). We compare ULT empirically to path-consistency and directional path-consistency algorithms. When used as a preprocessing to backtracking, ULT is shown to be 10 times more effective than either DPC or PC-2. 1. Introduction Problems involving temporal constraints arise in various areas of computer science such as scheduling, circuit and program verification, parallel computation and common sense reasoning. Several formalisms for expressing and reasoning with temporal knowledge have been proposed, most notably Allen's interval algebra (Allen 83), Vilain and Kautz's point algebra (Vilain 86, Van Beek 92), and Dean & McDermott's Time Map Management (TMM) (Dean & McDermott 87). Recently, a framework called Temporal Constraint Problem (TCSP) was proposed (Dechter Meiri & Pearl 91), in which network-based methods (Dechter & Pearl 88, Dechter 92) were extended to include continuous variables. In this framework, variables represent time points and quantitative temporal information is represented by a set of unary and binary constraints over the variables. This model was further extended in (Meiri 91) to include qualitative information.
The advantage of this framework is that it facilitates the following tasks: (1) finding all feasible times a given event can occur, (2) finding all possible relationships between two given events, (3) finding one or more scenarios consistent with the information provided, and (4) representing the data in a minimal network form that can provide answers to a variety of additional queries. *This work was partially supported by NSF grant IRI-9157636, by Air Force Office of Scientific Research, AFOSR 900136, and by Xerox grant. It is well known that all these tasks are NP-hard. The source of complexity stems from specifying relationships between pairs of time points as disjunctions of intervals. Even enforcing path-consistency, which is polynomial in discrete problems, becomes worst-case exponential in the number of intervals in each constraint. On the other hand, simple temporal problems having only one interval per constraint are tractable and can be solved by path-consistency. Consequently, we propose to exploit the efficiency of processing simple temporal problems for approximating path consistency. This leads to a polynomial algorithm, called ULT. We compare ULT empirically with path-consistency (PC-2) and directional path-consistency (DPC). Our results show that while ULT is always a very efficient algorithm, it is most accurate (relative to full path-consistency enforced by PC-2) for problems having a small number of intervals and high connectivity. When used as a preprocessing procedure before backtracking, ULT is 10 times more effective than DPC or PC. The paper is organized as follows. Section 2 presents the TCSP model. Section 3 presents algorithm ULT; Section 4 presents the empirical evaluation and the conclusion is presented in Section 5. 2. The TCSP Model A TCSP involves a set of variables, X1, ..., Xn, having continuous domains, each representing a time point. Each constraint T is represented by a set of intervals
{I1, ..., In} = {[a1, b1], ..., [an, bn]}. For a unary constraint Ti over Xi, the set of intervals restricts the domain such that (a1 ≤ Xi ≤ b1) ∪ ... ∪ (an ≤ Xi ≤ bn). For a binary constraint Tij over Xi, Xj, the set of intervals restricts the permissible values for the distance Xj - Xi; namely it represents the disjunction (a1 ≤ Xj - Xi ≤ b1) ∪ ... ∪ (an ≤ Xj - Xi ≤ bn). All intervals are pairwise disjoint. From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved. A binary TCSP can be represented by a directed constraint graph, where nodes represent variables and an edge i → j indicates that a constraint Tij is specified. Every edge is labeled by the interval set (Figure 1). A special time point X0 is introduced to represent the "beginning of the world". Because all times are relative to X0, we may treat each unary constraint Ti as a binary constraint T0i (having the same interval representation). For simplicity, we choose X0 = 0. T01 = {[1,2], [3,4]} T03 = {[2,13], [14,17], [18,20]} T12 = {[2,4], [6,7]} T13 = {[0,4], [6,9], [13,15]} T23 = {[0,5]} T01 = (1 ≤ X1 ≤ 2) ∪ (3 ≤ X1 ≤ 4) T03 = (2 ≤ X3 ≤ 13) ∪ (14 ≤ X3 ≤ 17) ∪ (18 ≤ X3 ≤ 20) T12 = (2 ≤ X2 - X1 ≤ 4) ∪ (6 ≤ X2 - X1 ≤ 7) T13 = (0 ≤ X3 - X1 ≤ 4) ∪ (6 ≤ X3 - X1 ≤ 9) ∪ (13 ≤ X3 - X1 ≤ 15) T23 = (0 ≤ X3 - X2 ≤ 5) Figure 1: A graphical representation of a TCSP where X0 = 0, X1 = 1.5, X2 = 4.5, X3 = 8 is a solution. A tuple X = (x1, ..., xn) is called a solution if the assignment X1 = x1, ..., Xn = xn satisfies all the constraints. The network is consistent iff at least one solution exists. A value v is a feasible value of Xi if there exists a solution in which Xi = v. The minimal domain of a variable is the set of all feasible values of that variable. The minimal constraint is the tightest constraint that describes the same set of solutions. The minimal network is such that its domains and constraints are minimal.
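As a concrete illustration of this representation (hypothetical code, not taken from the paper), the Figure 1 network and its sample solution can be expressed and checked in a few lines of Python:

```python
# Hypothetical illustration: the Figure 1 TCSP as a dictionary mapping
# variable pairs (i, j) to interval sets constraining the distance X_j - X_i.

def satisfies(constraints, assignment):
    """True iff every distance X_j - X_i falls inside some allowed interval."""
    return all(
        any(a <= assignment[j] - assignment[i] <= b for (a, b) in intervals)
        for (i, j), intervals in constraints.items()
    )

# Unary constraints are encoded relative to the special time point X0 = 0.
tcsp = {
    (0, 1): [(1, 2), (3, 4)],
    (0, 3): [(2, 13), (14, 17), (18, 20)],
    (1, 2): [(2, 4), (6, 7)],
    (1, 3): [(0, 4), (6, 9), (13, 15)],
    (2, 3): [(0, 5)],
}

solution = {0: 0.0, 1: 1.5, 2: 4.5, 3: 8.0}
print(satisfies(tcsp, solution))  # True: the paper's sample solution
```

For instance, X3 - X1 = 6.5 lands in the second interval [6, 9] of T13, and similarly for every other edge.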
Definition 1: Let T = {I1, ..., Il} and S = {J1, ..., Jm} be two sets of intervals which can correspond to either unary or binary constraints. 1. The intersection of T and S, denoted by T ⊕ S, admits only values that are allowed by both of them. 2. The composition of T and S, denoted by T ⊗ S, admits only values r for which there exist t ∈ T and s ∈ S such that r = t + s. T = {[-1.25,0.25], [2.75,4.25]} S = {[-0.25,1.25], [3.75,4.25]} T ⊕ S = {[-0.25,0.25], [3.75,4.25]} T ⊗ S = {[-1.50,1.50], [2.50,5.50], [6.50,8.50]} Figure 2: A pictorial example of the ⊕ and ⊗ operations. The ⊗ operation may result in intervals that are not pairwise disjoint. Therefore, additional processing may be required to compute the disjoint interval set. Definition 2: The path-induced constraint on variables Xi, Xj is R^path_ij = ⊕_k (Tik ⊗ Tkj). A constraint Tij is path-consistent iff Tij ⊆ R^path_ij and path-redundant iff Tij ⊇ R^path_ij. A network is path-consistent iff all its constraints are path-consistent. A general TCSP can be converted into an equivalent path-consistent network by applying the relaxation operation Tij ← Tij ⊕ (Tik ⊗ Tkj), using algorithm PC-2 (Figure 3). Some problems may benefit from a weaker version, called DPC, which can be enforced more efficiently. Algorithm PC-2 1. Q ← {(i,k,j) | (i < j) and (k ≠ i,j)} 2. while Q ≠ {} do 3. select and delete a path (i,k,j) from Q 4. if Tij ≠ Tij ⊕ (Tik ⊗ Tkj) then 5. Tij ← Tij ⊕ (Tik ⊗ Tkj) 6. if Tij = {} then exit (inconsistency) 7. Q ← Q ∪ RELATED-PATHS((i,k,j)) 8. end-if 9. end-while Algorithm DPC 1. for k ← n downto 1 by -1 do 2. for all i, j < k such that (i,k), (k,j) ∈ E do 3. if Tij ≠ Tij ⊕ (Tik ⊗ Tkj) then 4. E ← E ∪ (i,j) 5. Tij ← Tij ⊕ (Tik ⊗ Tkj) 6. if Tij = {} then exit (inconsistency) 7. end-if 8. end-for 9. end-for Figure 3: Algorithms PC-2 and DPC (Dechter Meiri & Pearl 91). 3.
Upper-Lower Tightening (ULT) The relaxation operation Tij ← Tij ⊕ (Tik ⊗ Tkj) increases the number of intervals and may result in exponential blow-up. As a result, the complexity of PC-2 and DPC is exponential in the number of intervals, but can be bounded by O(n^3 R^3) and O(n^3 R^2), respectively, where n is the number of variables and R is the range of the constraints. When running PC-2 on random instances, we encountered problems for which path-consistency required 11 minutes on toy-sized problems with 10 variables, range of [0,600], and with 50 input intervals in each constraint. Evidently, PC-2 is computationally expensive (also observed by (Poesio 91)). A special class of TCSPs that allow efficient processing is the Simple Temporal Problem (STP) (Dechter Meiri & Pearl 91). In this class, a constraint has a single interval. An STP can be associated with a directed edge-weighted graph, Gd, called a distance graph, having the same vertices as the constraint graph G; each edge i → j is labeled by a weight wij representing the constraint Xj - Xi ≤ wij (Figure 4). An STP is consistent iff the corresponding d-graph Gd has no negative cycles, and the minimal network of the STP Figure 5: A sample run of ULT. We start with N (Figure 1) and compute N'(1), N''(1), N'''(1). Thereafter, we perform a second iteration in which we compute N'(2), N''(2), N'''(2) and finally, in the third iteration, there is no change. The first iteration removes two intervals, while the second iteration removes one. corresponds to the minimal distances of Gd. For a processing example, see Figure 4. Alternatively, the minimal network of an STP can be computed by PC-2 in O(n^3) steps. Figure 4: Processing an STP by the Floyd-Warshall minimal-distance computation. The minimal network is in Figure 5, network N''(1). Motivated by these results, we propose an efficient algorithm that approximates path-consistency.
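The ⊕ and ⊗ operations of Definition 1 can be implemented directly on interval sets. The following Python sketch (an illustration, not the authors' code) reproduces the Figure 2 example; note how composition must merge overlapping sums back into disjoint intervals:

```python
def intersect(T, S):
    """T (+) S: values allowed by both interval sets (inputs pairwise disjoint)."""
    out = []
    for (a, b) in T:
        for (c, d) in S:
            lo, hi = max(a, c), min(b, d)
            if lo <= hi:
                out.append((lo, hi))
    return sorted(out)

def compose(T, S):
    """T (x) S: all sums t + s with t in T, s in S, merged into disjoint intervals."""
    sums = sorted((a + c, b + d) for (a, b) in T for (c, d) in S)
    merged = []
    for lo, hi in sums:
        if merged and lo <= merged[-1][1]:
            # overlapping with the previous interval: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged

T = [(-1.25, 0.25), (2.75, 4.25)]
S = [(-0.25, 1.25), (3.75, 4.25)]
print(intersect(T, S))  # [(-0.25, 0.25), (3.75, 4.25)]
print(compose(T, S))    # [(-1.5, 1.5), (2.5, 5.5), (6.5, 8.5)]
```

Both outputs match Figure 2; the merge step in compose is exactly the "additional processing" the figure caption mentions.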
The idea is to use the extreme points of all intervals associated with a single constraint as one big interval, yielding an STP, and then to perform path-consistency on that STP. This process will not increase the number of intervals. Definition 3: Let Tij = {I1, ..., Im} be the constraint over variables Xi, Xj and let Lij, Uij be the lowest and highest values of Tij, respectively. We define N', N'', N''' as follows (see Figure 5): o N' is an STP derived from N by relaxing its constraints to T'ij = [Lij, Uij]. o N'' is the minimal network of N'. o N''' is derived from N'' and N by intersecting T'''ij = T''ij ⊕ Tij. Algorithm Upper-Lower Tightening (ULT) 1. input: N 2. N''' ← N 3. repeat 4. N ← N''' 5. compute N', N'', N''' 6. until for all ij (L'''ij = Lij) and (U'''ij = Uij), or there exist ij such that U'''ij < L'''ij 7. if there exist ij such that U'''ij < L'''ij output: "Inconsistent."; otherwise output: N''' Figure 6: The ULT algorithm. Algorithm ULT is presented in Figure 6. We can show that ULT computes a network equivalent to its input network. Lemma 1: Let N be the input to ULT and R be its output. The networks N and R are equivalent. Regarding the effectiveness of ULT, we can show that Lemma 2: Every iteration of ULT (excluding the last) removes at least one interval. This can be used to show that Theorem 1: Algorithm ULT terminates in O(n^3 e k + e^2 k^2) steps, where n is the number of variables, e is the number of edges, and k is the maximal number of intervals in each constraint. In contrast to PC-2, ULT is guaranteed to converge in O(ek) iterations even if the interval boundaries are not rational numbers. For a sample execution see Figure 5. Algorithm ULT can also be used to identify path-redundancies. Definition 4: A constraint Tij is redundant-prone iff, after applying ULT, T'''ij is redundant in N'''. Lemma 3: T'''ij is path-redundant in N''' if T'''ij ⊆ Tij and T'''ij = ⊕_k (T'''ik ⊗ T'''kj). Corollary 1: A single interval constraint Tij is redundant-prone iff T'''ij = ⊕_k (T'''ik ⊗ T'''kj).
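One way to realize the N' → N'' → N''' cycle is sketched below in Python (a hypothetical rendering, not the authors' implementation; for simplicity it iterates until no interval set changes, rather than testing the L/U condition of Figure 6 literally). The minimal network N'' of the relaxed STP is computed by Floyd-Warshall on the distance graph, where L ≤ Xj - Xi ≤ U contributes edge weights w_ij = U and w_ji = -L:

```python
def ult(constraints, n):
    """Upper-Lower Tightening sketch. constraints maps pairs (i, j), i < j,
    to a list of disjoint intervals for X_j - X_i. Returns the tightened
    constraints, or None if an inconsistency is detected."""
    INF = float('inf')
    C = {e: list(ivs) for e, ivs in constraints.items()}
    while True:
        # N': relax each interval set to the single interval [L_ij, U_ij]
        d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
        for (i, j), ivs in C.items():
            d[i][j] = min(d[i][j], max(b for _, b in ivs))   # X_j - X_i <= U_ij
            d[j][i] = min(d[j][i], -min(a for a, _ in ivs))  # X_i - X_j <= -L_ij
        # N'': minimal network of the STP via Floyd-Warshall
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
        if any(d[i][i] < 0 for i in range(n)):
            return None  # negative cycle: the STP, hence the TCSP, is inconsistent
        # N''': intersect the original interval sets with [L''_ij, U''_ij]
        changed = False
        for (i, j), ivs in list(C.items()):
            lo, hi = -d[j][i], d[i][j]
            new = [(max(a, lo), min(b, hi)) for a, b in ivs
                   if max(a, lo) <= min(b, hi)]
            if not new:
                return None  # an interval set became empty
            if new != ivs:
                C[(i, j)] = new
                changed = True
        if not changed:
            return C

figure1 = {
    (0, 1): [(1, 2), (3, 4)],
    (0, 3): [(2, 13), (14, 17), (18, 20)],
    (1, 2): [(2, 4), (6, 7)],
    (1, 3): [(0, 4), (6, 9), (13, 15)],
    (2, 3): [(0, 5)],
}
tight = ult(figure1, 4)
print(tight[(0, 3)])  # [(3, 13)]
print(tight[(1, 3)])  # [(2, 4), (6, 9)]
```

On the Figure 1 network this sketch behaves as the Figure 5 run describes: the first iteration drops [18,20] from T03 and [13,15] from T13, the second drops the remainder of T03 above 13, and the third changes nothing.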
Consequently, after applying ULT to a TCSP, we can test the condition in Corollary 1 and eliminate some redundant constraints. A brute-force algorithm for solving a TCSP decomposes it into separate STPs by selecting a single interval from each constraint (Dechter Meiri & Pearl 91). Each STP is then solved separately and the solutions are combined. Alternatively, a naive backtracking algorithm will successively assign an interval to a constraint, as long as the resulting STP is consistent.(1) Once inconsistency is detected, the algorithm backtracks. Algorithm ULT can be used as a preprocessing stage to reduce the number of intervals in each constraint and to identify some path-redundant constraints. Since every iteration of ULT removes at least one interval, the search space is pruned. More importantly, if ULT causes redundant constraints to be removed, the search space may be pruned exponentially in the number of constraints removed. Note however that the number of constraints in the initial network is e, while following the application of ULT, DPC or PC-2 the constraint graph becomes complete, thus the number of constraints is O(n^2). (1) We call this process "labeling the TCSP". 4. Empirical Evaluation We conducted two sets of experiments. One comparing the efficiency and accuracy of ULT and DPC relative to PC-2, and the other comparing their effectiveness as a preprocessing to backtracking. Our experiments were conducted on randomly generated sets of instances. Our random problem generator uses five parameters: (1) n, the number of variables, (2) k, the number of intervals in each constraint, (3)
R = [Inf,Sup], the range of the constraints, (4) Tl, the tightness of the constraints, namely, the fraction of values allowed relative to the interval [Inf,Sup], and (5) Pc, the probability that a constraint Tij exists. Figure 8: The execution time and quality of the approximation obtained by DPC and ULT relative to PC. Each point represents 20 runs on networks with 10 variables, .95 tightness, connectivity Pc = .14 and range [0,600]. Intuitively, problems with dense graphs and loose constraints with many intervals should be more difficult. To evaluate the quality of the approximation achieved by ULT and DPC relative to PC-2, we counted the number of cases in which ULT and DPC detected inconsistency given that PC-2 detected one. From Figure 8, we conclude that when the number of intervals is small, DPC is faster than ULT but produces weaker approximations. When the number of intervals is large, DPC is much slower but is more accurate. In addition, we observe that when the number of intervals is very large, DPC computes a very good approximation to PC-2, and runs about 10 times faster. In Figure 9 we report the relative quality of ULT and DPC for a small (3) and for a large (20) number of intervals, as a function of connectivity. As the connectivity increases, the approximation quality of both Figure 9: Quality of the approximation vs connectivity on problems with 10 variables, tightness .95 and range [0,600].
We measured on problems with 3 intervals (top), where each point represents 1000 runs, and with 20 intervals (bottom), where each point represents 100 runs. ULT and DPC increases. Note again that ULT is more accurate for a small number of intervals while DPC dominates for a large number of intervals. We measured the number of iterations performed by ULT, PC-2, and PC-1(2) (Figure 10). We observe that for our benchmarks, ULT performed 1 iteration (excluding the termination iteration) in most of the cases, while PC-1 and PC-2 performed more (DPC performs only one iteration). In the second set of experiments we tested the power of ULT, DPC and PC-2 as preprocessing to backtracking. Without preprocessing, the problems could not be solved using the naive backtracking. Our preliminary tests ran on 20 problem instances with 10 variables and 3 intervals and none terminated before 1000000 STP checks. We therefore continued with testing backtracking following preprocessing by ULT, DPC and PC-2. (2) A brute-force path-consistency algorithm (Dechter Meiri & Pearl). Figure 10: The number of iterations ULT, PC-2 and PC-1 performed (excluding the termination iteration) on problems with 10 variables, 20 intervals, Pc = .14, and tightness .95. Each point represents 20 runs. Testing the consistency of a labeling requires solving the corresponding STP. An inconsistent STP represents a dead-end. Therefore, we counted the number of inconsistent STPs tested before a single consistent one was found, and the overall time required (including preprocessing). The results are presented in Figure 11 on a logarithmic scale. We observe that ULT was able to remove intervals effectively and appears to be the most effective as a preprocessing procedure.
For additional experiments with path-consistency for qualitative temporal networks see (Ladkin & Reinefeld 92). Summary and conclusion In this paper we presented a polynomial approximation algorithm to path-consistency for temporal constraint problems, called ULT. Its complexity is O(n^3 e k + e^2 k^2), in contrast to path-consistency, which is exponential in k, where n is the number of variables, e is the number of constraints and k is the maximal number of intervals per constraint. We also argued that ULT can be used to effectively identify some path-redundancies. Figure 11: Backtracking performance following preprocessing by ULT, PC-2 and PC-1 respectively, on problems with 10 variables, 3 intervals, and tightness .95. Each point represents 20 runs. The time includes preprocessing. We evaluated the performance of DPC and ULT empirically by comparing their run-time and quality of output relative to PC-2. The results show that while ULT is always very efficient, it is most accurate (i.e. it generates output closer to PC-2) for problems having a small number of intervals and high connectivity. Specifically, we saw that: 1. The complexity of both PC-2 and DPC grows exponentially in the number of intervals, while the complexity of ULT remains almost constant. 2. When the number of intervals is small, DPC is faster but produces weaker approximations relative to ULT. When the number of intervals is large, DPC is much slower but more accurate. 3. For a large number of intervals, DPC computes a very good approximation to PC-2, and runs about 10 times faster. 4. When used as a preprocessing procedure before backtracking, ULT is shown to be 10 times more effective than either DPC or PC-2. Finally, our experimental evaluation is by no means complete. We intend to conduct additional experiments with a wider range of parameters and larger problems. References Allen, J.F. 1983. Maintaining knowledge about temporal intervals. CACM 26(11):832-843. Dechter, R.; Meiri, I.; Pearl, J. 1991. Temporal constraint networks. Artificial Intelligence 49:61-95. Dechter, R. 1992. Constraint networks. In Encyclopedia of Artificial Intelligence (2nd ed.), Wiley, New York, 276-285. Dechter, R.; Pearl, J. 1988. Network-based heuristics for constraint satisfaction problems. Artificial Intelligence 34:1-38. Dechter, R. 1990. Enhancement schemes for constraint processing: Backjumping, learning and cutset decomposition. Artificial Intelligence 41:273-312. Dechter, R.; Dechter, A. 1987. Removing redundancies in constraint networks. In Proc. of AAAI-87, 105-109. Dean, T.M.; McDermott, D.V. 1987. Temporal data base management. Artificial Intelligence 32:1-55. Freuder, E.C. 1985. A sufficient condition for backtrack-free search. JACM 32(4):510-521. Ladkin, P.B.; Reinefeld, A. 1992. Effective solution of qualitative interval constraint problems. Artificial Intelligence 57:105-124. Meiri, I. 1991. Combining qualitative and quantitative constraints in temporal reasoning. In Proc. of AAAI-91, 260-267. Meiri, I.; Dechter, R.; Pearl, J. 1991. Tree decomposition with application to constraint processing. In Proc. of AAAI-91, 241-246. Kautz, H.; Ladkin, P. 1991. Integrating metric and qualitative temporal reasoning. In Proc. of AAAI-91, 241-246. Koubarakis, M. 1992. Dense time and temporal constraints with ≠. In Proc. KR-92, 24-35. Poesio, M.; Brachman, R.J. 1991. Metric constraints for maintaining appointments: Dates and repeated activities. In Proc. AAAI-91, 253-259. Van Beek, P. 1992. Reasoning about qualitative temporal information. Artificial Intelligence 58:297-326. Vilain, M.; Kautz, H. 1986.
Constraint propagation algorithms for temporal information. In Proc AAAI-86, 377-382. 132 Schwalb | 1993 | 21 |
1,345 | Nondeterministic Lisp as a Substrate for Constraint Logic Programming Jeffrey Mark Siskind* University of Pennsylvania IRCS 3401 Walnut Street Room 407C Philadelphia PA 19104 215/898-0367 internet: Qobi@CIS.UPenn.EDU Abstract We have implemented a comprehensive constraint-based programming language as an extension to COMMON LISP. This constraint package provides a unified framework for solving both numeric and non-numeric systems of constraints using a combination of local propagation techniques including binding propagation, Boolean constraint propagation, generalized forward checking, propagation of bounds, and unification. The backtracking facility of the nondeterministic dialect of COMMON LISP used to implement this constraint package acts as a general fallback constraint solving method mitigating the incompleteness of local propagation. Introduction Recent years have seen significant interest in constraint logic programming languages. Numerous implementations of such languages have been described in the literature, notably CLP(R) (Jaffar and Lassez 1987) and CHIP (Van Hentenryck 1989). The point of departure leading to these systems is the observation that the unification operation at the core of logic programming can be viewed as a method for solving equational constraints between logic variables which range over the universe of Herbrand terms. A natural extension of such a view is to allow variables to range over other domains and augment the programming language to support the formulation and solution of systems of constraints appropriate to these new domains. The *Supported in part by a Presidential Young Investigator Award to Professor Robert C.
Berwick under National Science Foundation Grant DCR-85552543, by a grant from the Siemens Corporation, by the Kapor Family Foundation, by ARO grant DAAL 03-89-C-0031, by DARPA grant N00014-90-J-1863, by NSF grant IRI 90-16592, and by Ben Franklin grant 91S.3078C-1. †Supported in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-91-J-4038. David Allen McAllester† M.I.T. Artificial Intelligence Laboratory 545 Technology Square Room NE43-412 Cambridge MA 02139 617/253-6599 internet: dam@AI.MIT.EDU notion of extending a programming language to support constraint-based programming need not be unique to logic programming. In this paper we present the constraint package included with SCREAMER, a nondeterministic dialect of COMMON LISP described by Siskind and McAllester (1993). This package provides functionality analogous to CLP(R) and CHIP in a COMMON LISP framework instead of a PROLOG one. SCREAMER augments COMMON LISP with the capacity for writing nondeterministic functions and expressions. Nondeterministic functions and expressions can return multiple values upon backtracking initiated by failure. SCREAMER also provides the ability to perform local side effects, ones which are undone upon backtracking. Nondeterminism and local side effects form the substrate on top of which the SCREAMER constraint package is constructed. Variables and Constraints SCREAMER includes the function make-variable which returns a data structure called a variable. SCREAMER variables are a generalization of PROLOG logic variables. Initially, new variables are unbound and unconstrained. Variables may be bound to values by the process of solving constraints asserted between sets of variables. Both the assertion of constraints and the ensuing binding of variables is done with local side effects. Thus constraints are removed and variables become unbound again upon backtracking.
SCREAMER provides a variety of primitives for constraining variables. Each constraint primitive is a "constraint version" of a corresponding COMMON LISP primitive. For example, the constraint primitive +v is a constraint version of +. An expression of the form (+v x y) returns a new variable z, which it constrains to be the sum of x and y by adding the constraint z = (+ x y). By convention, a SCREAMER primitive ending in the letter v is a constraint version of a corresponding COMMON LISP primitive. Table 1 lists the constraint primitives provided by SCREAMER. All of these primitives have the property that they accept variables as arguments, in addition to ground
In SCREAMER, most constraints are of the form z = (f x1 . . . x,) where f is the COMMON LISP primi- tive corresponding to some constraint primitive. Con- straints of this form can imply type restrictions on the variables involved. For example, a constraint of the form z = (< x y) implies that z is “Boolean”, i.e., either t or nil. Furthermore, this constraint im- plies that x and y are numeric. In practice, a variable usually has a well defined type, e.g., it is known to be Boolean, known be real, known to be a cons cell, etc. Knowledge about the type of a variable has sig- nificant ramifications for the efficiency of SCREAMER’S constraint satisfaction algorithms. SCREAMER has spe- cial procedures for inferring the types of variables. Be- cause knowledge of the types of variables is impor- tant for efficiency, in contrast to the COMMON LISP primitives and, or, and not which accept any argu- ments of any type, the SCREAMER constraint primi- tives andv, orv, and notv require their arguments to be Boolean. This allows the use of Boolean constraint satisfaction techniques for any constraints introduced by these primitives. Similarly, constraint “predicates” return Boolean variables. For example, in contrast to the COMMON LISP primitive member which can return the sub-list of the second argument whose head satisfies the equality check, the result of the memberv primitive is constrained to be Boolean. SCREAMER includes the primitive assert! which can be used to add constraints other than those added by the constraint primitives. Evaluating the expres- sion (assert ! x) constrains x to equal t. This can be used in conjunction with other constraint primitives to install a wide variety of constraints. For example, (assert ! (<v z y) ) effectively installs the constraint that x must be less than y.’ Cer- tain constraint primitives in table 1, in conjunc- tion with assert!, can be used to directly constrain the type of a variable. For example, evaluating (assert! 
(numberpv x)) effectively installs the constraint that x must be a number. Likewise, evaluating (assert! (booleanpv x)) installs the constraint that x must be Boolean. This is effectively the same as evaluating (assert! (memberv x '(t nil))).

All constraints in SCREAMER are installed either by assert! or by one of the constraint primitives in table 1. A constraint installed by assert! states that a certain variable must have the value t. A constraint installed by a constraint primitive always has the form z = (f x1 ... xn) where f is either a COMMON LISP primitive or a slight variation on a COMMON LISP primitive. The variations arise for constraint primitives such as orv and memberv where the semantics of the constraint version differs slightly from the semantics of the corresponding COMMON LISP primitive, as discussed above.

An attempt to add a constraint fails if SCREAMER determines that the resulting set of constraints would be unsatisfiable. For example, after evaluating (assert! (<v x 0)), a subsequent evaluation of (assert! (>v x 0)) will fail. A call to a constraint primitive can fail when it would generate a constraint inconsistent with known type information. For example, if x is known to be Boolean then an evaluation of (+v x y) will fail.

Constraint Propagation

In this section we discuss the five kinds of constraint propagation inference processes performed by SCREAMER. First, SCREAMER implements binding propagation, an incomplete inference technique sometimes called value propagation. Second, SCREAMER implements Boolean constraint propagation (BCP). This is an incomplete form of Boolean inference that can be viewed as a form of unit resolution. Third, SCREAMER implements generalized forward checking (GFC). This is a constraint propagation technique for discrete constraints used in the CHIP system. Fourth, SCREAMER implements bounds propagation on numeric variables.
Such bounds propagation, when combined with the divide-and-conquer technique discussed later in this paper, implements a generalization of the interval method of solving systems of nonlinear equations proposed by Hansen (1968). Finally, SCREAMER implements unification. Unification is viewed as a constraint propagation inference technique which can be applied to equational constraints involving variables that range over S-expressions. The constraint propagation techniques are incrementally run to completion whenever a new constraint is installed by assert! or one of the constraint primitives. The five forms of constraint propagation are described in more detail below.

¹To mitigate the apparent inefficiency of this conceptually clean language design, the implementation optimizes most calls to assert!, such as the calls (assert! (notv (realpv x))) and (assert! (<=v x y)), to eliminate the creation of the intermediate Boolean variable(s) and the resulting local propagation.

Each form of constraint propagation can be viewed as an inference process which locally derives information about variables. All forms of propagation are capable of inferring values for variables. For example, after evaluating (assert! (orv x y)) and (assert! (notv x)), BCP will infer that y must have the value t. If some constraint propagation inference process has determined a value for some variable x, then we say that x is bound and the inferred value of x is called the binding of x.

Binding Propagation: As noted above, most constraints in SCREAMER are of the form z = (f x1 ... xn) where f is a COMMON LISP primitive, z is a variable, and each xi is either a variable or a specific value. For any such constraint SCREAMER implements a certain value propagation process.
More specifically, if bindings have been determined for all but one of the variables in the constraint, and a binding for the remaining variable follows from the constraint and the existing bindings, then this additional binding is inferred. This general principle is called binding propagation. Binding propagation will always bind the output variable of a constraint primitive whenever the input variables become bound. For example, given the constraint z = (+ x y), if x is bound to 2 and y is bound to 3, then binding propagation will bind z to 5. Often, however, binding propagation will derive a binding for an input from a binding for the output. For example, given the constraint z = (+ x y), if z is bound to 5 and x is bound to 2, then binding propagation will bind y to 3.

Boolean Constraint Propagation: BCP is simply arc consistency (cf. Mackworth 1992) relative to the Boolean constraint primitives andv, orv, and notv. BCP, like arc consistency, is semantically incomplete. For example, after evaluating (assert! (orv z w)) and (assert! (orv (notv z) w)), any variable interpretation satisfying the installed constraints must assign w the value t. However, BCP will not make this inference. Semantic incompleteness is necessary in order to ensure that the constraint propagation process terminates quickly. Later in the paper we discuss how we interleave backtracking search with constraint propagation to mitigate the incompleteness of local propagation.

Generalized Forward Checking: GFC applies to variables for which a finite set of possible values has been established. Such a set is called an enumerated domain. Variables with enumerated domains are called discrete. For example, after evaluating (assert! (memberv x '(a b c d))) the variable x is discrete because its value is known to be either a, b, c, or d. Boolean variables are a special case of discrete variables where the enumerated domain contains only t and nil.
Similarly, bounded integer variables are considered to be discrete. For each discrete variable SCREAMER maintains a list representing its enumerated domain. The enumerated domain for a given variable can be updated by the GFC inference process. The GFC inference process operates on constraints of the form z = (funcall f x1 ... xn). These constraints are generated by the constraint primitive funcallv. Unlike most constraint primitives, the primitive funcallv will signal an error, rather than fail, if its first argument is bound to anything but a deterministic procedure. Now consider the constraint z = (funcall f x1 ... xn). GFC will only operate on this constraint when f is bound and all but one of the remaining variables in the constraint have been bound. If the unbound variable is the output variable z, then GFC simply derives a binding for z by applying f. If the unbound variable is one of the arguments xi, then GFC tests each element v of the enumerated domain of the discrete variable xi for consistency relative to this constraint. Elements of the enumerated domain of xi that are inconsistent with the constraint are removed. For example, suppose that we have evaluated (assert! (memberv x '(1 5 9))), (assert! (memberv y '(3 7 12))), and (assert! (funcallv #'< x y)). In this case the output variable of the funcallv constraint is bound to t. Now suppose that some constraint propagation inference process infers that y is 3. In this case GFC will run on the funcallv constraint and remove 5 and 9 from the enumerated domain of x. Whenever the enumerated domain of a discrete variable is reduced to a single value, GFC binds the variable to that value. An example of GFC running on the N-Queens problem is given later in the paper.

Bounds Propagation: Bounds propagation applies to numeric variables. For each numeric variable the system maintains an upper and lower bound on the possible values of that variable.
These bounds propagate through constraints generated by numeric constraint primitives such as +v, *v, and <v. For example, after evaluating (assert! (=v z (+v x y))), if z is known to be no larger than 5.7, and x is known to be no smaller than 2.2, then bounds propagation will infer that y is no larger than 3.5. Bounds propagation can also derive values for the Boolean output variables of numeric constraint predicates such as <v and =v. For example, if we have the constraint z = (< x y), and the system has determined that x is at least 2.0 but y is no larger than 1.0, then the system will infer that z is nil. Bounds propagation will not infer a new bound unless the new bound reduces the known interval of the variable involved by at least a certain minimum percentage. This ensures that the bounds propagation process terminates fairly quickly. For example, SCREAMER avoids the very large number of bounds updates that would result from the constraints (assert! (>v x 0)), (assert! (<v x 1000)), and (assert! (<v x (-v x 0.001))).

Unification: Unification operates on constraints of the form z = (equal x y), which are generated by the constraint primitive equalv. At any given time there is a system of equations defined by the set of equalv constraints whose output variable has been bound to t. SCREAMER incrementally maintains a most general unifier σ for this system of equations. For example, evaluating (assert! (equalv (cons x y) (cons y w))) will result in a unifier σ that equates x, y, and w, i.e., a unifier σ such that σ[x], σ[y], and σ[w] are all equal. SCREAMER also implements disunification as in PROLOG-II (Colmerauer 1984). Thus, after evaluating (assert! (notv (equalv x y))), any attempt to bind x or y to be equal will fail.

The different forms of constraint propagation can interact with each other. For example, a given variable can be both discrete and numeric.
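Returning to bounds propagation for a moment, the rule illustrated above (z no larger than 5.7 and x no smaller than 2.2 imply y no larger than 3.5) can be sketched in Python; this is our own minimal interval-arithmetic rendering of the idea for a single sum constraint, not SCREAMER's implementation, and `propagate_sum` is an invented name:

```python
# Hedged sketch of bounds propagation through one constraint z = x + y.
# Variables are mutable [lo, hi] intervals; each pass narrows one
# interval from the other two via z = x + y, x = z - y, y = z - x.
INF = float('inf')

def propagate_sum(z, x, y):
    changed = True
    while changed:
        changed = False
        rules = ((z, x, y),                  # z = x + y
                 (x, z, [-y[1], -y[0]]),     # x = z + (-y)
                 (y, z, [-x[1], -x[0]]))     # y = z + (-x)
        for out, a, b in rules:
            lo = max(out[0], a[0] + b[0])    # tighten lower bound
            hi = min(out[1], a[1] + b[1])    # tighten upper bound
            if (lo, hi) != (out[0], out[1]):
                out[0], out[1], changed = lo, hi, True

# The example from the text: z <= 5.7 and x >= 2.2 imply y <= 3.5.
z, x, y = [-INF, 5.7], [2.2, INF], [-INF, INF]
propagate_sum(z, x, y)
print(round(y[1], 6))   # prints 3.5
```

A real implementation, as the text notes, would also suppress updates that shrink an interval by less than a minimum percentage, so that chains of tiny tightenings terminate quickly.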
The system removes non-numeric elements from the enumerated domains of discrete numeric variables. Furthermore, if a bound is known for a discrete numeric variable, then elements violating that bound are eliminated from its enumerated domain. SCREAMER also derives bounds information from the enumerated domains of discrete numeric variables. Unification also interacts with SCREAMER bindings. For example, if σ is the most general unifier maintained by SCREAMER, and x and y are two variables such that σ[x] equals σ[y], then any binding for x becomes a binding for y and vice versa. If σ[x] equals σ[y], and x and y have incompatible bindings, then a failure is generated.

Solving Systems of Constraints

By design, all of the constraint primitives described so far use only fast local propagation techniques. Such techniques are necessarily incomplete; they cannot always solve systems of constraints or determine that they are unsolvable. SCREAMER provides a number of primitives for augmenting local propagation with backtracking search to provide a general mechanism for solving systems of constraints. One such primitive, linear-force, can be applied to a variable to cause it to nondeterministically take on one of the values in its domain. Linear-force can be applied only to discrete variables or integer variables. Constraining a variable to take on a value using linear-force may cause local propagation. Thus a single call to linear-force may cause a number of variables to be bound, or alternatively may fail if the variable cannot consistently take on any value. A second primitive, divide-and-conquer-force, can be applied to a variable to nondeterministically reduce the set of possible values it may take on. Divide-and-conquer-force can be applied only to discrete variables or real variables with finite upper and lower bounds.
When applied to discrete variables, the enumerated domain is split into two subsets and the variable is nondeterministically constrained to take on values from either the first or the second subset. When applied to bounded real variables, the interval is split in half and the variable is nondeterministically constrained to take on values in either of the two subintervals.

The above two functions operate on single variables. More generally, one must find the values of several variables which satisfy the given constraints. SCREAMER provides two primitives to accomplish this. Both are higher-order functions which take a single-variable force function as an argument (e.g., linear-force or divide-and-conquer-force) and produce a function capable of forcing a list of variables using that force function. Each incorporates a different strategy for choosing which variable to force next. The first, static-ordering, simply forces the variables in the order given. The single-variable force function is repeatedly applied to each variable, until that variable takes on a ground value, before proceeding with the next variable. All variables are bound upon termination. The second, reorder, selects the variable with the smallest domain, applies the single-variable force function to this variable, and repeats this process until all variables are bound. Since the choice of single-variable force function is orthogonal to the choice of variable ordering strategy, SCREAMER thus provides four distinct constraint solving strategies. More can easily be added.

Examples

We will illustrate the power of the SCREAMER constraint language with two small examples. The first, shown in figure 1, solves the N-Queens problem. The function n-queensv creates a variable for each row and constrains each row variable to take on an integer between 1 and n indicating the column occupied by a queen in that row. The function (a-member-ofv s) is simply syntactic sugar for the following.
(let ((v (make-variable)))
  (assert! (memberv v s))
  v)

The SCREAMER primitive (solution x f) gathers all of the variables nested inside the structure x, applies the multiple-variable forcing function f to this list of variables, and returns a copy of x where the variables have been replaced by their bound values.

(defun attacks? (qi qj distance)
  (or (= qi qj)
      (= (abs (- qi qj)) distance)))

(defun n-queensv (n)
  (solution
   (let ((q (make-array n)))
     (dotimes (i n)
       (setf (aref q i) (an-integer-betweenv 1 n)))
     (dotimes (i n)
       (dotimes (j n)
         (if (> j i)
             (assert! (notv (funcallv #'attacks?
                                      (aref q i)
                                      (aref q j)
                                      (- j i)))))))
     (coerce q 'list))
   (reorder #'domain-size
            #'(lambda (x) (declare (ignore x)) nil)
            #'<
            #'linear-force)))

(defun nonlinear ()
  (let ((x (a-real-betweenv -1e40 1e40))
        (y (a-real-betweenv -1e40 1e40))
        (z (a-real-betweenv -1e40 1e40)))
    (assert! (andv (=v (+v (*v 4 x x y) (*v 7 y z z) (*v 6 x x z z))
                       1356.14)
                   (=v (+v (*v 3 x y) (*v 2 y y) (*v 5 x y z))
                       -141.375)
                   (=v (*v (+v x y) (+v y z)) -7.7625)))
    (solution (list x y z)
              (reorder #'range-size
                       #'(lambda (x) (< x 1e-6))
                       #'>
                       #'divide-and-conquer-force))))

Figure 1: Two constraint-based SCREAMER programs, one for solving the N-Queens problem and one for solving a system of nonlinear equations using numeric bounds propagation.

In the above example, SCREAMER applies GFC as the technique for solving the underlying constraint satisfaction problem. SCREAMER chooses this technique since all of the variables involved are discrete. The second example, shown in figure 1, illustrates how bounds propagation can be used to solve systems of nonlinear equations expressed as constraints between numeric variables. The function nonlinear finds a solution to the following system of nonlinear equations.
4x²y + 7yz² + 6x²z² = 1356.14
3xy + 2y² + 5xyz = -141.375
(x + y)(y + z) = -7.7625

The expression (a-real-betweenv -1e40 1e40) creates a variable constrained to be a real number between the given lower and upper bounds. After the constraints have been asserted between the variables, divide-and-conquer search, interleaved with bounds propagation, is used to find a solution to the equations. One such solution is x ≈ -7.311, y ≈ 6.113, z ≈ 0.367. Note that unlike the simplex method used in CLP(ℝ), which is limited to solving linear systems of equations, the combination of divide-and-conquer search interleaved with bounds propagation allows SCREAMER to solve complex nonlinear systems of equations. These techniques also enable SCREAMER to solve numeric constraint systems with both inequalities and equational constraints. Furthermore, since all of the constraint satisfaction techniques are integrated, SCREAMER can solve disjunctive systems of equations as well as systems which mix together numeric, Boolean, and other forms of constraints.

We wish to point out the intentional similarity in the names of the SCREAMER primitives a-member-of and a-member-ofv.² Both describe a choice between a set of possible alternatives. The former enumerates that set nondeterministically by backtracking. The latter instead creates a variable whose value is constrained to be a member of the given set. The former lends itself to a generate-and-test style of programming:

(let ((x1 (a-member-of s1))
      ...
      (xn (a-member-of sn)))
  (unless Φ[x1 ... xn] (fail))
  (list x1 ... xn))

The latter lends itself to constraint-based programming:

(let ((x1 (a-member-ofv s1))
      ...
      (xn (a-member-ofv sn)))
  (assert! (funcallv #'(lambda (x1 ... xn) Φ[x1 ... xn])
                     x1 ... xn))
  (solution (list x1 ... xn)
            (static-ordering #'linear-force)))

Though these two program fragments are structurally very similar, and specify the same problem, they entail drastically different search strategies.
The latter constitutes a lifted variant of the former. In a future paper we will discuss the possibilities of performing such lifting transformations automatically. Such lifting is not limited to the a-member-of primitive. SCREAMER includes the following syntactic sugar for (an-integer-betweenv l h):

(let ((v (make-variable)))
  (assert! (andv (integerpv v) (<=v v h) (>=v v l)))
  v)

The function an-integer-betweenv is a lifted analog to the SCREAMER primitive an-integer-between. All SCREAMER generators have lifted analogs.

²We adopt the (unenforced) convention that the names of all nondeterministic generator functions begin with the prefix a- or an-, and that functions beginning with the prefix a- or an-, and also ending with v, denote lifted generators: functions which deterministically return a variable ranging over the stated domain instead of nondeterministically returning a value in that domain.

Related Work

Most of the individual techniques described in this paper are not new. What is novel is their particular combination. Programming languages which allow stating numeric constraints date back to SKETCHPAD (Sutherland 1963). Local propagation for solving systems of constraints was used by Borning (1979) in THINGLAB. Steele (1980) constructs constraint primitives very similar to ours and implements local propagation by procedural attachment. These techniques were expanded on by the MAGRITTE system (Gosling 1983). The above systems differ from SCREAMER in two ways. First, they handled only numeric constraints, lacking the GFC capacity of SCREAMER embodied in memberv and funcallv, as well as the unification and disunification embodied in equalv. More importantly, the constraint solving techniques incorporated in all of these systems were incomplete, particularly those based on local propagation.
None of these systems could resort to interleaving backtracking search with local propagation, as SCREAMER can, to provide a slow but complete fallback when the faster but incomplete local propagation techniques are insufficient on their own.

More recently, numerous systems such as CLP(ℝ) and CHIP have been constructed in the logic programming framework; these add some form of constraint satisfaction, sometimes based on local propagation, to the backtracking search mechanism already present in logic programming languages. SCREAMER differs from such systems in a number of ways, some minor and some major. First, SCREAMER uses only fast local propagation techniques as part of its constraint mechanism. The numeric constraint mechanism of CLP(ℝ) instead uses more costly techniques based on the simplex method for linear programming. These techniques are incomplete for nonlinear constraints. CLP(ℝ) and CHIP do not provide mechanisms for dealing with this incompleteness. SCREAMER, on the other hand, can solve nonlinear constraints using divide-and-conquer-force combined with local propagation. The second difference lies in using COMMON LISP instead of PROLOG as a substrate for constructing constraint-based programming languages. Given the substrate of nondeterministic COMMON LISP, especially its capacity for local side effects, the SCREAMER constraint package can be written totally in COMMON LISP. This gives SCREAMER three advantages over CLP(ℝ) and CHIP. First, SCREAMER is portable to any COMMON LISP implementation. Second, SCREAMER can be easily modified and extended, to experiment with alternative constraint types and constraint satisfaction methods. Finally, SCREAMER can coexist and inter-operate with other current or future extensions to COMMON LISP such as CLOS and CLIM. The current version of SCREAMER, including the full constraint package, is available by anonymous FTP from the file /com/ftp/pub/screamer.tar.Z on the host ftp.ai.mit.edu.
We encourage you to obtain a copy of SCREAMER and give us feedback on your experiences using it.

References

Alan Hamilton Borning. THINGLAB - A Constraint-Oriented Simulation Laboratory. PhD thesis, Stanford University, July 1979. Also available as Stanford Computer Science Department report STAN-CS-79-746 and as XEROX Palo Alto Research Center report SSL-79-3.

A. Colmerauer. Equations and inequations on finite and infinite trees. In 2nd International Conference on Fifth Generation Computer Systems, pages 85-99, 1984.

James Gosling. Algebraic Constraints. PhD thesis, Carnegie-Mellon University, 1983.

E. R. Hansen. On the solution of linear algebraic equations using interval arithmetic. Mathematical Computation, 22:153-165, 1968.

Joxan Jaffar and Jean-Louis Lassez. Constraint logic programming. In Proceedings of the 14th ACM Symposium on the Principles of Programming Languages, pages 111-119, 1987.

Alan K. Mackworth. Constraint satisfaction. In Stuart C. Shapiro, editor, Encyclopedia of Artificial Intelligence, pages 285-293. John Wiley & Sons, Inc., New York, 1992.

Jeffrey Mark Siskind and David Allen McAllester. SCREAMER: a portable efficient implementation of nondeterministic COMMON LISP. Technical Report IRCS-93-03, University of Pennsylvania Institute for Research in Cognitive Science, 1993.

Ivan E. Sutherland. SKETCHPAD: A Man-Machine Graphical Communication System. PhD thesis, Massachusetts Institute of Technology, January 1963.

Guy Lewis Steele Jr. The Definition and Implementation of a Computer Programming Language Based on Constraints. PhD thesis, Massachusetts Institute of Technology, August 1980. Also available as M.I.T. VLSI Memo 80-32 and as M.I.T. Artificial Intelligence Laboratory Technical Report 595.

Pascal Van Hentenryck. Constraint Satisfaction in Logic Programming. M.I.T. Press, Cambridge, MA, 1989.
Slack-Based Heuristics for Constraint Satisfaction Scheduling

Stephen F. Smith
The Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
sfs@isll.ri.cmu.edu

Abstract

In this paper, we define and empirically evaluate new heuristics for solving the job shop scheduling problem with non-relaxable time windows. The hypothesis underlying our approach is that by approaching the problem as one of establishing sequencing constraints between pairs of operations requiring the same resource (as opposed to a problem of assigning start times to each operation), and by exploiting previously developed analysis techniques for limiting search through the space of possible sequencing decisions, simple, localized look-ahead techniques can yield problem solving performance comparable to currently dominating techniques that rely on more sophisticated analysis of resource contention. We define a series of attention-focusing heuristics based on simple analysis of the temporal flexibility associated with different sequencing decisions, and a similarly motivated heuristic for determining how to sequence a given operation pair. Performance results are reported on a suite of benchmark problems previously investigated by two advanced approaches, and our simplified look-ahead analysis techniques are shown to provide comparable problem solving leverage at reduced computational cost.

Introduction

In this paper, we propose and evaluate the performance of new look-ahead heuristics for solving the job shop scheduling problem with non-relaxable time windows. The problem originates from the manufacturing domain, and, as classically defined, involves synchronization of the production of N jobs in a facility with M machines. The production of a given job requires the execution of a sequence of operations (its process plan in manufacturing parlance).
Cheng-Chung Cheng
The Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
cceneisll.ri.cmu.edu

*The research reported in this paper has been sponsored in part by DARPA under contract F30602-90-C-0119, NASA under contract NCC 2-707, and the Robotics Institute.

Each operation has a specified processing time, and its execution requires the exclusive use of a designated machine for the duration of its processing (i.e., machines have unit processing capacity). Each job has an associated ready time and a deadline, and its production must be accomplished within this interval. The problem can be extended in various ways: to include selection among designated resource alternatives for each operation, to associate multiple resource requirements (e.g., machine, operator) with operations, etc. In any case, the objective is to determine a schedule for production that satisfies all temporal and resource capacity constraints.

The job shop scheduling with non-relaxable time windows problem is known to be NP-complete (Garey & Johnson 1979). Accordingly, the development of effective heuristic procedures for solving this constraint satisfaction problem (CSP) has been the subject of considerable previous research. This work, with few exceptions, has sought to exploit the special structure of the problem, in particular the structure of resource capacity constraints, to enhance consistency enforcement and early search space pruning capabilities, to support more-informed backtracking, and to focus attention in elaborating the search (our principal interest in this paper). Most frequently, the job shop scheduling problem has been formulated as one of finding a consistent assignment of start times for each operation of each job (Johnston 1990), (Keng & Yun 1989), (Minton et al. 1990), (Sadeh 1991), and (Zweben et al. 1990).
Here, the development of focus-of-attention (or variable ordering) heuristics has relied fairly exclusively on the use of contention-based metrics. One recent approach, which has produced strong comparative experimental results, relies on a dynamic variable ordering heuristic that maintains profiles of resource demand over time, repeatedly identifies the resource and time period of greatest expected contention, and focuses attention on scheduling the operation that contributes most to this "bottleneck" (Sadeh 1991).

A smaller number of efforts have alternatively treated the problem as one of posting sufficient additional sequencing constraints between pairs of operations contending for the same resource so as to ensure feasibility with respect to time and capacity constraints. The solutions generated in this way typically represent a set of feasible schedules (i.e., the sets of operation start times that remain consistent with posted sequencing constraints), as opposed to a single assignment of operation start times. In (Erschler et al. 1976, 1980) the structure of resource capacity constraints is exploited to define dominance conditions for pruning the set of feasible sequencing alternatives at each stage of the search. More recently, (Muscettola 1993) has demonstrated the utility of global resource capacity analysis techniques (similar in spirit to the approach in (Sadeh 1991)) as a focusing mechanism within this alternative search space; in this case sequencing constraints are repeatedly posted between sets of conflicting operations until resource capacity analysis indicates no further possibility of resource contention.

Like (Muscettola 1993), we believe that the inherent flexibility gained by providing sets of feasible solutions offers considerable pragmatic value over typically over-constrained fixed-times solutions.
The principal claim of this paper, however, is that this second formulation of the problem also provides a more convenient search space in which to operate. When the problem is cast as a search for orderings between pairs of operations vying for the same resource, we argue that it is possible to obtain the look-ahead benefits of global resource capacity analysis through the use of simpler, local analysis of the sequencing possibilities associated with unordered operation pairs. We define a series of variable ordering heuristics based on measures of temporal slack which, when integrated with the search space pruning techniques developed in (Erschler et al. 1976), are shown to yield problem solving performance comparable to contention-based heuristics at a fraction of the computational cost.

The remainder of the paper is organized as follows. In Section 2, we specify the problem as a CSP search for operation pair orderings, and review dominance conditions that enable search space pruning relative to this model. In Sections 3 through 5, we propose a series of variable ordering heuristics and present comparative results on a previously studied suite of 60 test problems. Finally, in Section 6, we outline current work in applying the approach to schedule optimization.

Problem Representation and Search Framework

In more precise terms, a solution to the basic job shop scheduling CSP requires a consistent assignment of values to start time variables sti for each operation i, under the following constraints:

- sequencing restrictions: for every precedence relation i → j specified between operations i and j in the process plan of a given job J, sti + pi ≤ stj, where pi is the processing time required by operation i of job J.
- resource capacity constraints: for any two operations i and j requiring the same resource, sti + pi ≤ stj ∨ stj + pj ≤ sti.
- ready times and deadlines: for each operation i of job J, rJ ≤ sti and sti + pi ≤ dJ, where rJ and dJ are the ready time and deadline respectively associated with job J.

While this problem representation provides a direct basis for problem solving search (and in fact has been taken as the starting point of most previous research), the problem can be alternatively formulated as one of establishing sequencing constraints between pairs of operations contending for the same resource over time. In this case, we define a decision variable orderingi,j for each pair of operations i and j that require the same resource, which can take on either of two values: i → j (implying the constraint sti + pi ≤ stj) and j → i (implying stj + pj ≤ sti). A solution then is a consistent assignment of values to all ordering variables. There are several potential advantages to this formulation. The advantage emphasized in this paper is that the simpler structure of the search space enables more straightforward accounting of resource capacity constraints and the use of simpler, localized analysis of current solution structure as a basis for variable and value ordering.

Our problem solving framework assumes a backtrack search procedure in which the solution is incrementally extended through the repeated selection and binding of an as yet unconstrained orderingi,j variable (referred to as the posting of a new precedence relation). Whenever a new precedence relation is posted, constraint propagation is performed to ensure continued temporal consistency and maintain current bounds on the earliest start time and latest finish time of each operation.
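This post-and-propagate step can be sketched in Python; the class `Op` and function `post` are our own invented names, and this is a minimal rendering of the idea, not the authors' implementation:

```python
# Minimal sketch (not the authors' code) of posting a precedence
# relation i -> j and propagating earliest-start / latest-finish
# bounds through all posted relations; returns False when some
# operation's time window collapses (est + p > lft).

class Op:
    def __init__(self, p, est, lft):
        self.p, self.est, self.lft = p, est, lft
        self.succs, self.preds = [], []   # posted precedence relations

def post(i, j, ops):
    """Post i -> j, i.e. st_i + p_i <= st_j, over the operations in ops."""
    i.succs.append(j)
    j.preds.append(i)
    queue = [(i, j)]
    while queue:
        a, b = queue.pop()
        if a.est + a.p > b.est:           # forward: est_b = max(est_b, est_a + p_a)
            b.est = a.est + a.p
            queue += [(b, s) for s in b.succs]
        if b.lft - b.p < a.lft:           # backward: lft_a = min(lft_a, lft_b - p_b)
            a.lft = b.lft - b.p
            queue += [(pr, a) for pr in a.preds]
    return all(o.est + o.p <= o.lft for o in ops)

a, b = Op(p=4, est=0, lft=10), Op(p=5, est=0, lft=10)
print(post(a, b, [a, b]), b.est, a.lft)   # prints True 4 5
```

Posting a single precedence thus both tightens the bounds of the two operations involved and ripples those tightenings through earlier postings, which is where inconsistencies surface.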
If the decision i → j is taken, for example, then est_j (the earliest start time of j) and lft_i (the latest finish time of i) are updated by

est_j = max{est_j, est_i + p_i}, and (1)
lft_i = min{lft_i, lft_j − p_j}, (2)

and these new values are then propagated forward or backward respectively through all pre-specified and posted temporal precedence relations. If during this process, est_k + p_k becomes greater than lft_k for any operation k, then an inconsistent set of assignments has been detected.

As indicated at the outset, our approach to directing the search integrates a procedure previously developed by Erschler et al., referred to as Constraint-Based Analysis (CBA), which exploits dominance conditions to prune the space of possible ordering assignments. To summarize their basic idea, assume that est_i and lft_i designate the current earliest start time and latest finish time respectively of a given operation i.¹ Then, for any unordered pair of operations, i and j, contending for a particular resource, we can distinguish four different cases:

1. If lft_i − est_j < p_i + p_j ≤ lft_j − est_i, then i must be scheduled before j in any feasible extension of the current ordering decisions. (case 1)
2. If lft_j − est_i < p_i + p_j ≤ lft_i − est_j, then j must be scheduled before i in any feasible extension of the current ordering decisions. (case 2)
3. If p_i + p_j > lft_j − est_i and p_i + p_j > lft_i − est_j, then there is no feasible schedule. (case 3)
4. If p_i + p_j ≤ lft_j − est_i and p_i + p_j ≤ lft_i − est_j, then either sequencing decision is still possible. (case 4)

¹Since we are assuming in this paper that operation processing times are fixed, we could equivalently reason in terms of earliest and latest start times.

These dominance conditions of course provide only necessary conditions for determining a set of feasible schedules, and thus interleaved application of CBA and temporal constraint propagation yields an underspecified search procedure.
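The propagation step of Eqns. (1)-(2) and the four CBA dominance cases can be sketched directly. Function names and the dictionary-based bounds are illustrative assumptions.

```python
# Posting a precedence relation (Eqns. 1-2) and classifying an unordered
# pair of same-resource operations into the four CBA dominance cases.

def post(est, lft, p, i, j):
    """Post i -> j and update bounds per Eqns. (1) and (2)."""
    est[j] = max(est[j], est[i] + p[i])   # (1)
    lft[i] = min(lft[i], lft[j] - p[j])   # (2)

def cba_case(est, lft, p, i, j):
    """Return 1-4 per the dominance conditions for the pair (i, j)."""
    both = p[i] + p[j]
    if lft[i] - est[j] < both <= lft[j] - est[i]:
        return 1   # i must precede j
    if lft[j] - est[i] < both <= lft[i] - est[j]:
        return 2   # j must precede i
    if both > lft[j] - est[i] and both > lft[i] - est[j]:
        return 3   # no feasible schedule
    return 4       # either ordering still possible
```

In a full implementation, `post` would be followed by propagation of the new bounds through all pre-specified and posted precedence relations, as the text describes.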
What is needed to generate solutions are heuristics for resolving the undecided states specified in case 4. In this regard, previous use of CBA has emphasized fuzzy integration of sets of different scheduling rules. In (Bensana & Dubois 1988), a voting procedure based on fuzzy set theory and approximate reasoning was developed and used in conjunction with a set of fuzzy scheduling rules. In (Kerr & Walker 1989), fuzzy arithmetic together with fuzzy scheduling rules was utilized instead. Our goal, alternatively, is to investigate the effectiveness of CBA in conjunction with simple look-ahead analysis of current ordering flexibility. This leads to the search procedure that is graphically depicted in Figure 1, which we will refer to as precedence constraint posting (PCP). In the following sections, we define and evaluate a specific set of variable and value ordering heuristics.

Figure 1: PCP Search Procedure

Exploiting Estimates of Sequencing Flexibility

Intuitively, in situations where CBA leaves the search in a state with several unresolved ordering assignments (i.e., for each unordered operation pair, both ordering decisions are still feasible), we would like to focus attention on the ordering decision that is currently most constrained. Since the posting of any sequence constraint is likely to further constrain other ordering decisions that remain to be made, delaying the currently most constrained decision increases the chances of arriving at an infeasible problem solving state.

Implementation of such a variable ordering strategy requires a means of estimating the current flexibility associated with a given unresolved ordering decision. One simple indicator of flexibility is the amount of temporal slack that is retained by a given operation pair if a decision to sequence them is taken. To this end, we define two measures, corresponding to the two possible decisions that might be taken.
For a given pair of currently unordered operations (i, j) contending for the same resource, we define the "temporal slack remaining after sequencing i before j" as

slack(i → j) = lft_j − est_i − (p_i + p_j), (3)

and similarly the "temporal slack remaining after sequencing j before i" as

slack(j → i) = lft_i − est_j − (p_i + p_j). (4)

Figure 2 provides a graphic illustration of slack(i → j) and slack(j → i). Note that in either case the remaining slack is shared by both i and j. Thus, the larger the temporal slack, the greater the chance that subsequent ordering decisions involving i and j can be feasibly imposed.

Given these measures of temporal slack, we now have a basis for identifying the most constrained or "most critical" decision and for specifying an initial variable ordering heuristic. We define the ordering decision with the overall minimum slack to be the decision ordering_{i,j} for which

min{slack(i → j), slack(j → i)} = min over all unassigned ordering_{u,v} of min{slack(u → v), slack(v → u)}.

Using this notion of criticality, we define a variable ordering heuristic that selects this decision at each unresolved state of the search. With respect to the decision of which sequencing constraint to post (i.e., value assignment), we intuitively prefer the decision that leaves the search with the most degrees of freedom. Thus we post the sequencing constraint that retains the largest amount of temporal slack.

Constraint-Based Reasoning 141

Figure 2: slack(i → j) and slack(j → i)

Summarizing then, our initial configuration of variable and value ordering heuristics is defined as follows:

I. Min-Slack variable ordering: Select the sequencing decision with the overall minimum temporal slack. Suppose this decision is ordering_{i,j}.
II. Max-Slack value ordering: Choose the sequencing constraint i → j if slack(i → j) > slack(j → i); otherwise choose j → i.
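The slack measures of Eqns. (3)-(4) and the two heuristics above can be sketched as follows (helper names are illustrative assumptions):

```python
# Min-Slack variable ordering: pick the pair whose better decision leaves
# the least slack. Max-Slack value ordering: post the ordering that
# retains the larger slack.

def slack(est, lft, p, i, j):
    """Temporal slack remaining after sequencing i before j (Eqn. 3)."""
    return lft[j] - est[i] - (p[i] + p[j])

def min_slack_pair(est, lft, p, pairs):
    both_ways = lambda i, j: min(slack(est, lft, p, i, j),
                                 slack(est, lft, p, j, i))
    return min(pairs, key=lambda ij: both_ways(*ij))

def max_slack_value(est, lft, p, i, j):
    """Return the decision (as an ordered pair) that keeps more slack."""
    if slack(est, lft, p, i, j) >= slack(est, lft, p, j, i):
        return (i, j)
    return (j, i)
```

A PCP step would call `min_slack_pair` on the case-4 pairs, then `max_slack_value` to decide which precedence relation to post.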
A Computational Study

In this section we evaluate the performance of the above heuristics in conjunction with the PCP search procedure on a suite of job shop scheduling CSPs studied by two recently developed scheduling procedures: ORR/FSS (Sadeh 1991) and CPS (Muscettola 1993). Both ORR/FSS and CPS rely on global estimations of resource contention to dynamically direct their respective search processes. In the former case, profiles of resource demand over time are deterministically constructed according to probabilistic assumptions, and inspected to identify contention "peaks". In the case of CPS, expected resource "conflicts" are identified from demand profiles constructed via stochastic simulation in a relaxed solution space where resource constraints are ignored. ORR/FSS and CPS also differ in the type of decision that is taken at each step of the search. ORR/FSS takes a decision to fix the start time of the operation contributing most to the highest contention peak. CPS identifies the set of operations involved in the most severe resource conflict, and posts a sequencing constraint to reduce the level of contention among these operations. Unlike our approach, which establishes orderings between pairs of operations, CPS posts precedence constraints between sets of operations and attempts to post only as many constraints as are necessary to eliminate the possibility of resource contention (thus retaining additional flexibility in the final solution). Both ORR/FSS and CPS have reported very strong results on the set of scheduling problems used in this study. As an additional point of comparison, we also include results obtained with three priority dispatch rules from the field of Operations Research: EDD, COVERT, and ATC (Vepsalainen & Morton 1987).
These heuristics are frequently used and have been determined to work very well in job shop scheduling circumstances where expected job tardiness is low (as would likely be the case if a feasible solution exists).

The set of problems used in this study comes from the dissertation of Sadeh (Sadeh 1991). The problem set consists of 60 randomly generated scheduling problems. Each problem contains 10 jobs and 5 resources. Each job has 5 operations. In all problems, deadlines were generated randomly within a specified range. A controlling parameter was used to generate problems in three different deadline ranges: wide (w), median (m), and tight (t). A second parameter was used to generate problems with both 1 and 2 "bottleneck" resources. Combining these two parameters, 6 different categories of scheduling problems were defined, and 10 problems were generated for each category. The problem categories were carefully defined to cover a variety of manufacturing scheduling circumstances. While each problem has at least one feasible solution, they range in difficulty from easy to hard.

The results obtained on these problems, along with those previously reported, are given in Table 1 (where problem difficulty increases from top to bottom). The number of problems solved by each approach by problem category are indicated. In the case of ORR/FSS runs, search was terminated on a given problem if a solution was not found after a pre-determined number of search states had been expanded. Two sets of results are reported for this procedure: Sadeh's original dissertation results using simple chronological backtracking, and a subsequent study (labeled ORR/FSS+) where the original procedure was augmented with the "intelligent" backtracking techniques described in (Xiong et al. 1992). In the case of CPS, which operates with a stochastic resource analysis, the search was restarted from scratch upon detection of an infeasible state.
In the case of our approach, no backtracking mechanism was employed and the search was terminated in failure if an infeasible solution state was reached.

Examining the results, we see that our simple slack-based variable and value ordering heuristics, in conjunction with the search space pruning techniques provided by CBA, perform remarkably well in comparison to both contention-based scheduling procedures, and while not solving all 60 problems, provide evidence in support of our hypothesis that comparable performance can be obtained with localized and less sophisticated look-ahead analysis techniques. From the standpoint of computational performance, average solution times of 128 and 78 seconds were obtained with ORR/FSS+ and CPS respectively in these experiments. Our approach averaged 0.2 seconds for each solved problem.²

       PCP  ORR/FSS  ORR/FSS+  CPS  EDD  COVERT  ATC
w/1     10       10        10   10   10       7    7
w/2     10       10        10   10   10       8    8
m/1     10        8        10   10    8       5    5
m/2     10        9        10   10    8       8    8
t/1     10        7        10   10    3       6    6
t/2      6        8        10   10    8       8    8
sum     56       52        60   60   47      42   42

Table 1: Results of the experiments

In the next section we attempt to refine our initial variable and value ordering heuristics to improve problem solving performance. We note in passing that the priority rules perform rather poorly on this set of scheduling problems.

Incorporating Additional Search

While Min-Slack performed quite well over the tested problem set, it does not in fact utilize all of the information provided by the temporal slack data. In particular, it relies exclusively on the smaller slack value in determining the criticality of an ordering decision ordering_{i,j}, and ignores any information that might be provided by the larger one.

The most common problem created by disregarding this additional value appears in a form of tie-breaking. Consider the following example. Suppose that we have two unsequenced operation pairs, one with associated temporal slack values of (20,3), and the other with values of (4,3).
Min-Slack does not distinguish between the criticality of these two ordering decisions, since the minimum value in both cases is 3. In the event that the overall minimum slack over all candidate decisions is also 3, then Min-Slack will choose randomly. But in this case sequencing the second operation pair is certainly more critical, since the flexibility that will be left after the decision is made will be considerably less than the flexibility that will remain if the first unsequenced operation pair is instead chosen and sequenced.

Given this insight, we define a second variable ordering heuristic, which operates exactly as Min-Slack except in situations where more than one pending decision ordering_{i,j} is identified as a decision with overall minimum temporal slack. In these situations, ties are broken by selecting the decision with the minimum larger temporal slack value. Applying the PCP procedure with this extended heuristic to the same suite of 60 problems yielded 57 solved problems. Although this improvement is slight, it suggests the potential advantage of incorporating additional information.

²All computation times were obtained on a DECstation 5000. Both ORR/FSS and CPS are Lisp-based systems; our procedure is implemented in C.

A more subtle problem created by the information ignorance inherent in Min-Slack is the problem of similarity. Let's again consider an example. Suppose that we are again deciding between two unsequenced operation pairs. This time the temporal slack values associated with the first are (3,3), and the values associated with the second are (5,5). Which one is more critical? Without any ambiguity, the first one is more critical than the second one, and this is also the answer provided by Min-Slack. But what if we change the values for the first pair to (20,3)? Is the first pair still more critical than the second one? The answer is not obvious.
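The tie-breaking rule just defined, and the similarity-biased slack developed below, can both be sketched in a few lines. Names are illustrative assumptions, and `bslack` assumes both slack values are positive.

```python
# criticality_key: rank a pair by (smaller slack, larger slack), so ties on
# minimum slack are broken by the smaller of the larger values.
# bslack: divide slack(i -> j) by the n-th root of the similarity ratio S.

def criticality_key(est, lft, p, i, j):
    sl = lambda a, b: lft[b] - est[a] - (p[a] + p[b])
    return tuple(sorted((sl(i, j), sl(j, i))))   # (smaller, larger)

def most_critical(est, lft, p, pairs):
    return min(pairs, key=lambda ij: criticality_key(est, lft, p, *ij))

def bslack(est, lft, p, i, j, n=2):
    sl = lambda a, b: lft[b] - est[a] - (p[a] + p[b])
    s_ij, s_ji = sl(i, j), sl(j, i)
    S = min(s_ij, s_ji) / max(s_ij, s_ji)        # similarity in (0, 1]
    return s_ij / S ** (1.0 / n)
```

For the (20,3) versus (5,5) example, the dissimilar pair's tighter direction gets a biased slack of 3/√0.15 ≈ 7.75 against 5.0 for the similar pair, so minimum-Bslack ranking judges the similar pair more critical.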
The point is that there exists a tradeoff between relying on minimum slack values and relying on information relating to the degree of similarity of both slack values in determining criticality. The strong performance of Min-Slack suggests that minimum slack values should remain the dominant consideration. But we hypothesize that the introduction of bias to increase criticality as the similarity of large and small slack values increases, and decrease criticality as the slack values become more dissimilar, might provide more effective search guidance.

Let us define a measure of similarity in the range [0, 1] such that for slack value pairs with identical values the similarity value is 1, and as the distance between large and small slack values increases, the similarity value approaches 0. More precisely, we estimate the similarity between two slack values by the following ratio expression:

S = min{slack(i → j), slack(j → i)} / max{slack(i → j), slack(j → i)} (5)

Given the definition of S and the direction of bias desired, we now define a new criticality metric, referred to as biased temporal slack, as follows:

Bslack(i → j) = slack(i → j) / f(S), (6)

where f is a monotonically increasing function. With little intuition as to the appropriate level of bias to exert on the criticality calculation, but assuming that the level of bias should not be too great, we use root functions f(S) = S^(1/n), n ≥ 2, to define a set of alternatives, yielding

Bslack(i → j) = slack(i → j) / S^(1/n). (7)

By empirical reasoning, we also define a composite form of the metric with two different parameters, n1 and n2, as

Bslack(i → j) = slack(i → j) / S^(1/n1) + slack(i → j) / S^(1/n2). (8)

Table 2 presents results obtained using overall minimum Bslack as a variable ordering criterion for different values of n in Eqn. (7) and n1 and n2 in Eqn.
(8) on the same suite of 60 problems.

        n=2  n=3  n=4  n1=2,n2=3  n1=3,n2=4
w/1      10   10   10         10         10
w/2      10   10   10         10         10
m/1      10   10   10         10         10
m/2      10   10   10         10         10
t/1      10   10   10         10         10
t/2       8    8    8         10          9
total    58   58   58         60         59

Table 2: Performance using Min-Bslack heuristic

From the results, we can see that use of Bslack(i → j) as a variable ordering criterion does in fact yield improved performance on this suite of 60 problems. As expected, performance is sensitive to the amount of bias specified. In the case where all 60 problems are solved, average solution time was 0.3 seconds.

Conclusions

In this paper, we have proposed and evaluated new heuristics for solving the job shop scheduling problem with non-relaxable time windows. Our hypothesis has been that by approaching the problem as one of establishing sequencing constraints between pairs of operations requiring the same resource, and by exploiting analysis techniques for limiting the search of possible sequencing decisions, simple, localized look-ahead techniques can yield problem solving performance comparable to techniques that rely on more sophisticated analysis of resource contention. We defined a series of attention focusing heuristics based on simple analysis of the temporal flexibility associated with different sequencing decisions, and a similarly motivated heuristic for determining how to sequence a given operation pair. Evaluation of these heuristics on a suite of benchmark problems previously investigated by two contention-based scheduling procedures has shown that our heuristics provide comparable results at very low computational expense.

Our current interest is in adapting the PCP approach to solve more common, optimization-based formulations of scheduling problems.
In this context, certain problem constraints (e.g., due dates) are not interpreted as rigid, but instead specify preferred values over which objective criteria are defined (e.g., minimizing tardiness cost). To adapt the PCP procedure to this class of problems, two basic issues must be addressed. First, since CBA depends on the assumption that time and capacity constraints are non-relaxable, its advantage as a search space pruning mechanism is lost. We are exploring use of an alternative mechanism, inspired by standard branch and bound search procedures, which bases pruning on a dynamically refined upper bound solution. The second issue concerns the inappropriateness of temporal slack as a basis for estimating the criticality of various ordering decisions. This metric, however, can be straightforwardly replaced by the objective function itself (e.g., computing the increase in tardiness cost resulting from alternative ordering decisions for a given pair of operations), giving rise to variants of the variable and value ordering heuristics defined in this paper.

References

Bensana, E., Bel, G., and Dubois, D. 1988. OPAL: A multi-knowledge based system for job-shop scheduling. Int. J. Production Research, 26(5), 795-819.

Erschler, J., Roubellat, F., and Vernhes, J. P. 1976. Finding some essential characteristics of the feasible solutions for a scheduling problem. Operations Research, 24, 772-782.

Erschler, J., Roubellat, F., and Vernhes, J. P. 1980. Characterizing the set of feasible sequences for n jobs to be carried out on a single machine. European Journal of Operational Research, 4, 189-194.

Garey, M. R. and Johnson, D. S. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman.

Johnston, M. D. 1990. SPIKE: AI scheduling for NASA's Hubble Space Telescope. In Proc. 6th IEEE Conference on AI Applications, Santa Barbara, CA.

Keng, N. and Yun, D. Y. Y. 1989. A planning/scheduling methodology for the constrained resource problem. In Proc. IJCAI-89, Detroit, MI.

Kerr, R. M. and Walker, R. N. 1989. A job shop scheduling system based on fuzzy arithmetic. In Proc. 3rd Int. Conf. on Expert Systems and the Leading Edge in Production and Operations Management, M. D. Oliff, Ed., 433-450, Hilton Head Island, SC.

Minton, S., Johnston, M. D., Philips, A. B., and Laird, P. 1990. Solving large-scale constraint satisfaction and scheduling problems using a heuristic repair method. In Proc. AAAI-90, Boston, MA.

Muscettola, N. 1993. Scheduling by iterative partition of bottleneck conflicts. In Proc. 9th IEEE Conference on AI Applications, Orlando, FL.

Sadeh, N. 1991. Look-ahead Techniques for Micro-Opportunistic Job Shop Scheduling. CMU-CS-91-102, School of Computer Science, Carnegie Mellon University.

Vepsalainen, A. P. J. and Morton, T. E. 1987. Priority rules for job shops with weighted tardiness costs. Management Science, 33(8), August, 1035-1047.

Xiong, Y., Sadeh, N., and Sycara, K. 1992. Intelligent backtracking techniques for job shop scheduling. In Proc. 3rd Int. Conf. on Principles of Knowledge Representation, Cambridge, MA.

Zweben, M., Deale, M., and Gargan, R. 1990. Anytime rescheduling. In Proc. DARPA Workshop on Innovative Approaches to Planning, Scheduling and Control, Morgan Kaufmann.
A Constraint Decomposition Method for Spatio-Temporal Configuration Problems

Toshikazu Tanimoto
Digital Equipment Corporation Japan
1432 Sugao, Akigawa, Tokyo, 197 Japan
tanimoto@jrd.dec.com

Abstract

This paper describes a flexible framework and an efficient algorithm for constraint-based spatio-temporal configuration problems. Binary constraints between spatio-temporal objects are first converted to constraint regions, which are then decomposed into hierarchical data structures; based on this constraint decomposition, an improved backtracking algorithm called HBT can compute a solution quite efficiently. In contrast to other approaches, the proposed method is characterized by the efficient handling of arbitrarily-shaped objects, and the flexible integration of quantitative and qualitative constraints; it allows a wide range of objects and constraints to be utilized for specifying a spatio-temporal configuration. The method is intended primarily for configuration problems in user interfaces, but can effectively be applied to similar problems in other areas as well.

Introduction

Spatio-temporal configuration problems based on constraints arise in many important application areas of artificial intelligence, such as planning, robotics and user interfaces. Although most generic problems are known to be NP-complete, a number of practical techniques suitable for special cases have been developed. In addition to frameworks for constraint-based geometric reasoning [13, 14], potentially useful formalisms have been developed for temporal reasoning [1, 4]. As far as configuration problems are concerned, temporal and spatial frameworks share many important features. In fact, a qualitative formalism for temporal reasoning [1] has been extended to spatial qualitative formalisms [9, 19]; a quantitative formalism based on linear binary inequalities [4] has been extended to a multi-dimensional formalism in a similar way [22].
The multi-dimensional extension of temporal formalisms is certainly an attractive approach to spatio-temporal problems. The approach is based on the expectation that a seemingly difficult multi-dimensional problem can be decomposed into a set of 1-dimensional problems, which can be solved much more easily than the original problem. However, direct extensions attempted in [9, 22] have restricted target problems to orthogonal domains; in other words, objects must be rectangular in a certain coordinate system. Obviously, this restriction is not always desirable in practical situations. Thus arises the need for a framework to handle arbitrarily-shaped objects, with little sacrifice in computing resources. Techniques to handle spatial objects have been studied in the area of computational geometry. Spatial objects can be handled efficiently by hierarchical data structures; many powerful techniques have been devised [20]. However, further study will be needed to fit the techniques into the constraint-based framework.

Another attractive feature of the multi-dimensional extension of temporal formalisms is the possibility of integrating qualitative and quantitative constraints. In many practical situations, qualitative expressions are not enough to specify spatio-temporal constraints. This is particularly true when we must solve a configuration problem for user interface objects [8]. On the other hand, it is also true that there are cases where qualitative constraints suffice. It is therefore desirable to handle both qualitative and quantitative constraints in the same framework. There have been attempts to integrate qualitative and quantitative formalisms for temporal reasoning [12, 18]. However, it is not immediately obvious how to extend these frameworks to spatial counterparts.
In this paper, we propose a framework and an algorithm to address these issues; in common with other constraint-based formalisms, the proposed method handles spatio-temporal configurations based on constraints between objects. The method has three stages. Firstly, a binary constraint between two spatio-temporal objects, such as left and after, is converted to a constraint region; this step is called the interpretation of a constraint. Secondly, the constraint region is decomposed into a hierarchical data structure called a constraint region tree; this representation enables the use of metric features of spatio-temporal constraints. Finally, a hierarchical backtracking algorithm called HBT is applied to region trees; HBT is effective either by itself or in combination with other algorithms, including PC-2.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

The proposed method can be characterized by two major advantages: it can efficiently handle arbitrarily-shaped objects, not just rectangular ones, and it integrates quantitative and qualitative constraints through the interpretation of constraints. Although the proposed method is intended primarily for constraint-based spatio-temporal configuration problems in areas such as user interface management, automated animation and multimedia presentation generation, it would be possible to apply the method to a wider range of configuration problems in areas such as scheduling and space planning.

In typical spatio-temporal configuration problems, important constraints are binary constraints such as disjoint, left, after and before. If we do not consider the transformation of an object, such as rotation and scaling, this type of constraint can be represented by a region where a displacement between the two objects is restricted.
Having mentioned that, we begin by defining several concepts (for generality, we will use real numbers in definitions, rather than integers). Let a be a d-dimensional object, and A be a set of objects; a configuration on A is defined as a mapping ω: A → R^d; for each object a, ω(a) ∈ R^d is called its location. Given objects a1 and a2, a constraint c between a1 and a2 is written as c(a1,a2). Let C be a set of constraints, and D be a set of subsets of R^d; an interpretation on C is defined as a mapping φ: C → D; φ(c) ∈ D is called the (constraint) region of constraint c. A constraint satisfaction problem defined by A, C and φ is written as CSP(A,C,φ). A concept similar to the constraint region was first suggested by [15]; another was discussed under the name of admissible region in [23]. In this paper, we use this concept to handle d-dimensional linear binary constraints.

Definition 1. Given a set of objects A, a set of constraints C and an interpretation φ, a configuration ω is said to satisfy constraint c(a1,a2) ∈ C iff ω(a2) − ω(a1) ∈ φ(c) holds. If configuration ω satisfies all the constraints in C, it is said to be a solution of CSP(A,C,φ).

In general, a solution of CSP(A,C,φ) can take two forms. One is a configuration satisfying the constraints as in Definition 1, and the other is a set of configurations to bound the locations of objects. The latter form is more generic, but in many situations, the former form suffices. The generic form could be obtained by merging all single solutions.

Below we define a set of canonical constraints and their interpretations used in our framework, but constraints and/or interpretations are not necessarily limited to those given. We can define a new constraint (either qualitative or quantitative) and/or interpretation with no changes to our framework. One of the advantages of this approach is flexibility in defining constraints specific to the problem being addressed (see Example 2).
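Definition 1 can be sketched over integer grid locations: a region is simply a set of allowed displacement vectors, and a configuration satisfies a constraint when the displacement between the two objects' reference points lies in that set. The function name and the discrete setting are illustrative assumptions.

```python
# A configuration omega satisfies c(a1, a2) iff omega(a2) - omega(a1)
# lies in the constraint's region phi(c), here a set of integer tuples.

def satisfies(omega, constraints):
    """constraints: iterable of (a1, a2, region); region is a set of tuples."""
    for a1, a2, region in constraints:
        disp = tuple(q - p for p, q in zip(omega[a1], omega[a2]))
        if disp not in region:
            return False
    return True
```

For instance, a 2-dimensional "a2 right of a1" constraint within a bounded window is just the set of displacements with positive first coordinate.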
The canonical constraints consist of eight topological and two directional constraints. Topological constraints are defined based on the relationships given in [1]: disjoint, contains, inside, equal, meet, covers, coveredby and overlap. Directional constraints less and greater are also defined for each dimension; they can be called left, right, below, above, before and after, depending on context. Figure 1 shows all the canonical constraints (contains(a1,a2) should be read as "a2 contains a1", etc.), their interpretations and exemplary depictions.

Figure 1. Canonical Constraints

146 Tanimoto

We assume that all objects are embedded in R^d and are homeomorphic to closed solid spheres. In Figure 1, ∂a, a° and a⁻ denote the boundary, interior and exterior of object a. For less_i and greater_i (i = 1, ..., d), l_i indicates a line parallel to the i-th coordinate axis. If ∀l_i, l_i ∩ a1° = ∅ ∨ l_i ∩ a2° = ∅, the constraint region is defined as ∅.

It is easy to see that the logical closure (by ¬, ∧ and ∨) of the canonical constraints for d = 1 over single intervals contains Allen's 13 relationships [1] (e.g., "a2 started-by a1" in Allen's notation can be written as "covers(a1,a2) ∧ less(a2,a1)" in our notation). As a spatial formalism, the canonical constraints are much more powerful than linear constraints between the surfaces of polyhedra [17], but in some cases, cross-product relationships of Allen's 13 relationships can be more suitable [9]. Note that the cross-product relationships are in fact interpretable and can be added to our framework as well.
Example 1. Suppose the following binary constraints between 2-dimensional objects a1, a2, a3 and a4 shown in Figure 2 (in this paper, for simplicity, all examples are taken from d = 2 cases, though they appear spatial rather than spatio-temporal):

c1: inside(a1,a2) ∨ coveredby(a1,a2)
c2: inside(a2,a4) ∨ coveredby(a2,a4)
c3: inside(a3,a4) ∨ coveredby(a3,a4)
c4: disjoint(a2,a3) ∧ right(a2,a3)
c5: disjoint(a2,a3) ∧ below(a2,a3)
c6: disjoint(a1,a3)

A possible solution ω satisfying the above constraints is also shown in Figure 2. In order to find ω, we first have to obtain the constraint regions of the above constraints as follows:

1) determine the reference points of objects (usually their bottom-left corners); ω(a) is represented by the coordinate values of the reference point of a.
2) compute a constraint region for each item in a constraint expression using the interpretations in Figure 1.
3) if the constraint expression contains logical operations, perform corresponding set operations between the regions.

Figure 2. A Sample Configuration Problem

Figure 3. Constraint Regions

The objects are usually pre-processed to employ algorithms such as plane sweeping [20] in Step 2 (in some cases, however, it can be more efficient to compute a constraint region directly from the arithmetic definitions of objects). Figure 3 shows the resulting constraint regions (meshed area) obtained after discretizing the objects.

In our framework, all constraints are eventually represented by their constraint regions, which are simply subsets of R^d. The interpretation of constraints is basically a geometric problem. This framework is much simpler than existing approaches to integrate qualitative and quantitative constraints.
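Steps 1-3 can be sketched on a discrete grid, representing each object as a set of occupied cells relative to its reference point. The window bounds and function name are assumptions, not part of the paper's formulation.

```python
# Step 2 for the disjoint constraint: enumerate candidate displacements
# d = omega(a2) - omega(a1) and keep those where the shifted objects do
# not share a cell. Step 3 then maps logical connectives in a constraint
# expression to set operations on the resulting regions.

def region_disjoint(cells1, cells2, window):
    """Displacements keeping the two cell sets disjoint."""
    region = set()
    for dx, dy in window:
        shifted = {(x + dx, y + dy) for x, y in cells2}
        if not (cells1 & shifted):
            region.add((dx, dy))
    return region
```

A compound expression such as disjoint(a1,a2) ∧ right(a1,a2) then becomes the intersection of the two constituent regions, and ∨ becomes their union.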
The following example shows quantitative constraints which can be used in combination with the canonical constraints:

Example 2. Distance constraints could be defined as:

near(a1,a2) ≡ disjoint(a1,a2) ∧ |ω(a2) − ω(a1)| < δ1
far(a1,a2) ≡ disjoint(a1,a2) ∧ |ω(a2) − ω(a1)| > δ2

where δ1 and δ2 are constants.

Note that a unary constraint to specify an absolute location of an object can be represented as a binary quantitative constraint between the object and a special object used as a reference frame.

Once all constraints are converted to constraint regions, the next step is to construct data structures suitable for constraint satisfaction algorithms. Due to metric features of spatio-temporal problems, we expect that points located close to each other within a constraint region will behave similarly as far as constraint satisfaction is concerned (in other words, as far as the existence of a solution including the point is concerned). We define a partial order relation between problems to make use of this similarity. Given CSP(A,C,φ) and CSP(A,C,φ′), if the inconsistency of [...]

[...] graph G has no negative cycles.
A general problem can be solved by applying a backtracking algorithm to the meta constraint satisfaction problem, in which the domain of each variable is the set of leaf nodes (d-rectangles) in the region tree. By using the classical backtracking algorithm (referred to as BT) given in [4], we obtain:

Theorem 4: A solution of CSP(A,C,φ) is obtained in O(dn^3 r^(m(d-1))) time if all the constraint regions are simple polygons, and in O(dn^3 r^(md)) time in general, where n=|A| and m=|C|.

Proof: According to Theorem 1, φ(c) has O(r^(d-1)) rectangles if it is a simple polygon. In the worst case, the backtracking algorithm checks all combinations, which number O(r^(m(d-1))). Each step takes O(dn^3) time according to Theorem 3. Thus the total time complexity is O(dn^3 r^(m(d-1))); by the same argument, the total time complexity is O(dn^3 r^(md)) in general. ∎

Backtracking algorithms incorporating techniques such as variable ordering, value ordering and network-based heuristics have been used to improve average performance in constraint satisfaction [3, 10]. However, most of them are designed for discrete-domain problems and cannot exploit the metric features of a problem. As described in the previous section, after constructing region trees from the constraint regions, each node in a constraint region tree corresponds to a rectangle; any sub-problem is in fact simple and can be solved in polynomial time (according to Theorem 3). By using this characteristic, average running time can be significantly improved.

Algorithm HBT
  Forward:
    do
      if S is not empty then select and delete s from S
      else Backward
    until SOLVE-SIMPLE(T ∪ {s})
    T ← {all non-leaf nodes in T ∪ {s}}
    if T is empty then return
    else
      select and delete t from T
      S ← {all children of t}
      pushdown(S, T) and Forward

  Backward:
    if stack is empty then exit
    else popup(S, T)
    if S is not empty then Forward else Backward

HBT is independent of other performance-improvement schemes and can be used in combination with them; in the above algorithm description, a value ordering scheme can be applied to select s from S, and a variable ordering scheme can be applied to select t from T.
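The meta-CSP search analyzed in Theorem 4 can be sketched as follows; the 1-D unit-cell leaves and difference-set constraints are a deliberately tiny stand-in for leaf d-rectangles and constraint regions, and the consistency test stands in for SOLVE-SIMPLE:

```python
# Classical backtracking (BT) on the meta-CSP: each variable ranges over the
# leaf nodes of its region tree. Leaves here are 1-D unit cells (integers)
# and each binary constraint is a set of allowed differences x_k - x_j.

def bt(domains, constraints, partial=()):
    """Return one consistent assignment as a tuple, or None."""
    if len(partial) == len(domains):
        return partial
    for v in domains[len(partial)]:
        cand = partial + (v,)
        # check every constraint whose endpoints are both assigned
        ok = all(cand[k] - cand[j] in allowed
                 for (j, k), allowed in constraints.items()
                 if j < len(cand) and k < len(cand))
        if ok:
            sol = bt(domains, constraints, cand)
            if sol is not None:
                return sol
    return None

domains = [range(8), range(8), range(8)]
constraints = {(0, 1): {2, 3},   # x1 - x0 in {2, 3}
               (1, 2): {1},      # x2 - x1 = 1
               (0, 2): {3, 4}}   # x2 - x0 in {3, 4}
solution = bt(domains, constraints)
```

The exponential factor in Theorem 4 shows up here as the product of domain sizes that BT may enumerate in the worst case.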
Based on the preparation we have made so far, we define a hierarchical backtracking algorithm (abbreviated as HBT). SOLVE-SIMPLE computes a solution for each simple sub-problem and returns true if the sub-problem is consistent and false if it is inconsistent. Algorithm HBT consists of two functions: Forward and Backward. It uses a stack to store a search path as a set of pairs (S,T), where S and T are sets of nodes. Given all the region trees, it starts by calling Forward with the initial stack (S0,T0), where S0 = {all children of one of the root nodes} and T0 = {all the root nodes except for the parent of S0}. Forward repeatedly selects and deletes a node s from S (calling Backward when S is empty) until SOLVE-SIMPLE succeeds on T ∪ {s}. Once a sub-problem is known to be inconsistent, HBT does not search the sub-problems dominated by the inconsistent sub-problem. The performance of HBT depends on the distribution of possible solutions over the constraint regions; the worst case occurs when all non-terminal sub-problems are consistent.

HBT can also be used in combination with other algorithms such as path consistency algorithms, which are discussed in the next section.

Computing a path consistent network of the original CSP(A,C,φ) is a powerful pre-processing technique for constraint satisfaction problems [15, 16]. Path consistency in our framework is defined as follows:

Definition 3: A path i(1),...,i(k) is consistent iff for any pair of ωi(1), ωi(k) such that ωi(k) − ωi(1) ∈ φi(1)i(k), there exists a configuration ω such that ω(ai(1)) = ωi(1), ω(ai(k)) = ωi(k), and ωi(j+1) − ωi(j) ∈ φi(j)i(j+1) for j = 1,...,k−1, where φij is the constraint region of the constraint cij = c(ai,aj).

CSP(A,C,φ) is path-consistent iff every path is consistent. Path consistency algorithms do not guarantee finding a solution; however, they are quite useful for pre-processing. These algorithms require two operations: intersection φ1 ⊗ φ2 and composition φ1 ⊕ φ2.

Constraint-Based Reasoning 149

Figure 5. Path Consistent Regions
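The pruning idea at the heart of HBT (never refine a sub-problem whose coarser version is already inconsistent) can be sketched recursively; this simplification is not the paper's stack-based Forward/Backward formulation, and the binary interval trees and interval-arithmetic check below stand in for region trees and SOLVE-SIMPLE:

```python
# A recursive sketch of HBT's pruning: a sub-problem is refined only if its
# coarse version passes the consistency check, so everything dominated by an
# inconsistent sub-problem is skipped. Boxes are half-open intervals [lo, hi)
# of integers; each constraint requires x_k - x_j to lie in [lo, hi].

def children(lo, hi):
    if hi - lo == 1:
        return []                  # a leaf: one unit cell
    mid = (lo + hi) // 2
    return [(lo, mid), (mid, hi)]

def solve_simple(boxes, constraints):
    """Interval-arithmetic consistency check for a simple sub-problem."""
    return all(boxes[k][1] - 1 - boxes[j][0] >= lo and
               boxes[k][0] - (boxes[j][1] - 1) <= hi
               for (j, k), (lo, hi) in constraints.items())

def hbt(boxes, constraints):
    if not solve_simple(boxes, constraints):
        return None                # prune: never refine this sub-problem
    for i, box in enumerate(boxes):
        kids = children(*box)
        if kids:
            for child in kids:
                sol = hbt(boxes[:i] + [child] + boxes[i + 1:], constraints)
                if sol is not None:
                    return sol
            return None            # neither half of this box works
    return [b[0] for b in boxes]   # every box is a unit cell: a solution

constraints = {(0, 1): (2, 3), (1, 2): (1, 1), (0, 2): (3, 4)}
solution = hbt([(0, 8)] * 3, constraints)
```

Inconsistent coarse boxes cut off an entire subtree of leaf combinations at once, which is exactly the source of HBT's average-case advantage over BT.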
The two operations in our framework are defined as:

φ1 ⊗ φ2 = {X ∈ R^d | X ∈ φ1, X ∈ φ2}
φ1 ⊕ φ2 = {X ∈ R^d | X = X1 + X2, X1 ∈ φ1, X2 ∈ φ2}

The distributivity of ⊕ over ⊗ does not hold except in cases where both φ1 and φ2 are rectangles. Moreover, when performed between region trees, ⊕ does not preserve the simplicity of regions. With regard to the efficiency of the two operations, we obtain:

Lemma 2: When performed between region trees, φ1 ⊗ φ2 and φ1 ⊕ φ2 can be obtained in O(dr^(2d)) time.

Proof: In general, φ1 and φ2 have O(r^d) nodes (or corresponding rectangles). φ1 ⊗ φ2 and φ1 ⊕ φ2 require O(r^(2d)) operations between two rectangles and create O(r^(2d)) rectangles; the resulting region tree can be constructed from the rectangles in O(dr^(2d)) time. ∎

Note that if quadtrees (or their d-dimensional equivalents) are used as region trees, φ1 ⊗ φ2 can be obtained by comparing corresponding nodes in the two trees, and its time complexity can be reduced to O(dr^d).

An efficient path-consistency algorithm called PC-2 is described in [15]. The relaxation operation REVISE((i,k,j)) is defined as φij ← φij ⊗ (φik ⊕ φkj), using the intersection and composition defined above. REVISE returns true if φij is modified; otherwise it returns false. RELATED-PATHS returns the set of 2-length paths relevant to the changed path (see [15] for details). PC-2 is formulated as:

Algorithm PC-2 [15]
  Q ← {(i,k,j) | (i < j) ∧ (k ≠ i, j)}
  while Q is not empty do
    select and delete a path (i,k,j) from Q
    if REVISE((i,k,j)) then Q ← Q ∪ RELATED-PATHS((i,k,j))

With regard to the efficiency of PC-2, we obtain:

Theorem 5: Algorithm PC-2 computes the path consistent network of CSP(A,C,φ) in O(dn^3 r^(3d)) time.

Proof: Algorithm PC-2 needs O(n^3 r^d) steps to terminate because, in the worst case, each constraint region, which can have O(r^d) pixels, may be decreased by only a single pixel in each REVISE operation. According to Lemma 2, each REVISE operation takes O(dr^(2d)) time. The total time is therefore O(dn^3 r^(3d)). ∎
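The two operations and the REVISE relaxation can be exercised on small 1-D pixel regions. For brevity the sketch below iterates REVISE to a fixed point in the simpler PC-1 style rather than maintaining PC-2's queue of related paths:

```python
# Intersection, composition and REVISE on 1-D pixel regions, represented as
# frozensets of integer offsets. phi[i, j] holds the region for omega_j -
# omega_i; phi[j, i] is kept as its negation.

def neg(p):
    return frozenset(-x for x in p)

def intersect(p1, p2):               # phi1 (x) phi2
    return p1 & p2

def compose(p1, p2):                 # phi1 (+) phi2 = {x1 + x2}
    return frozenset(x1 + x2 for x1 in p1 for x2 in p2)

def revise(phi, i, k, j):
    """phi_ij <- phi_ij (x) (phi_ik (+) phi_kj); True iff phi_ij changed."""
    new = intersect(phi[i, j], compose(phi[i, k], phi[k, j]))
    changed = new != phi[i, j]
    phi[i, j], phi[j, i] = new, neg(new)
    return changed

def path_consistency(phi, n):
    changed = True
    while changed:                   # sweep until no region shrinks
        changed = any(revise(phi, i, k, j)
                      for i in range(n) for j in range(i + 1, n)
                      for k in range(n) if k not in (i, j))

# toy network: omega1 - omega0 in {2,3}, omega2 - omega1 = 1, omega2 - omega0 free
U = frozenset(range(-10, 11))
phi = {(0, 1): frozenset({2, 3}), (1, 2): frozenset({1}), (0, 2): U}
phi.update({(j, i): neg(p) for (i, j), p in list(phi.items())})
path_consistency(phi, 3)
```

After the fixed point, the unconstrained region for ω2 − ω0 has shrunk to the compositions {2,3} ⊕ {1} = {3,4}, which is exactly what one REVISE step computes.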
Theorem 5 is the multi-dimensional counterpart of Theorem 5.7 in [4].

Example 4: PC-2 can be applied to the sample problem as follows:

1) Construct region trees, as in Figure 4, from the constraint regions in Figure 3; these are used as the initial values for PC-2.
2) Each time REVISE is called, ⊗ and ⊕ operations are performed between the sets of rectangles in the region trees.

At the termination of PC-2, we obtain the resulting path consistent constraint regions (the constraint regions of a path consistent network), illustrated in Figure 5 (darkly meshed areas).

Table 1 summarizes the above results on worst-case time complexity (the left cell of (H)BT is the complexity when all constraint regions are simple polygons; the right is the complexity in general). However, by storing solutions of non-terminal sub-problems (for HBT) or partial instantiations (for BT), we can reduce the worst-case time complexity of (H)BT to O(dn^2 r^(m(d-1))) and O(dn^2 r^(md)), respectively.

Table 1. Worst-Case Time Complexity
(H)BT: O(dn^3 r^(m(d-1))) | O(dn^3 r^(md))
REVISE: O(dr^(2d))
PC-2: O(dn^3 r^(3d))

Empirical results from 192 small-scale problems (d=2, range=16, number of objects=4, quadtrees used as region trees) are shown in Table 2. Performance is measured by the number of times SOLVE-SIMPLE is called to find a solution (in units of 1,000 calls) and is averaged over the 192 cases (the average number of rectangles corresponding to leaf nodes in a region tree was 33, and PC-2 could reduce it to 19).

Table 2. Performance Comparisons
BT: 156 | PC-2+BT: 5 | HBT: 2 | PC-2+HBT: 1

150 Tanimoto

Although a full-scale experimental analysis should be done in the future, Table 2 suggests that HBT is more efficient than the combination of PC-2 and BT. In combination with PC-2, HBT works even better. Taking into consideration that PC-2 is not inexpensive, HBT without PC-2 would suffice for many practical problems.
Conclusion

In this paper, we have proposed a new method to solve constraint-based spatio-temporal configuration problems; the method centers on the interpretation of constraints and the decomposition of constraint regions into region trees. In contrast to other approaches, our method is more flexible and efficient: the flexibility lies in defining the interpretation procedures most suitable for the given problems, and the efficiency results from employing region trees as the basis for the hierarchical backtracking algorithm HBT. Our results suggest that HBT is more efficient than the combination of classical backtracking and path consistency algorithms. However, further research in a couple of different directions will be necessary to refine the proposed framework and algorithm: the capability of handling a wider range of geometric operations, such as rotation and scaling, and the empirical evaluation of performance when HBT is used in combination with other techniques, including network-based heuristics.

References

[1] Allen, J.F. Maintaining Knowledge about Temporal Intervals. Commun. ACM, 26 (1983), 832-843.
[2] Chiang, Y.-J. and Tamassia, R. Dynamic Algorithms in Computational Geometry. Proc. IEEE, 80 (1992), 1412-1434.
[3] Dechter, R. and Pearl, J. Network-Based Heuristics for Constraint-Satisfaction Problems. Artificial Intelligence, 34 (1988), 1-38.
[4] Dechter, R., Meiri, I. and Pearl, J. Temporal Constraint Networks. Artificial Intelligence, 49 (1991), 61-95.
[5] Edelsbrunner, H. A New Approach to Rectangle Intersections, Part I. Intern. J. Computer Math., 13 (1983), 209-219.
[6] Edelsbrunner, H. A New Approach to Rectangle Intersections, Part II. Intern. J. Computer Math., 13 (1983), 221-229.
[7] Egenhofer, M.J. and Al-Taha, K.K. Reasoning about Gradual Changes of Topological Relationships. In Frank, A.U., Campari, I. and Formentini, U. (eds.)
Theories and Methods of Spatio-Temporal Reasoning in Geographic Space, Springer-Verlag, Berlin, Germany, 1992.
[8] Freeman-Benson, B.N., Maloney, J. and Borning, A. An Incremental Constraint Solver. Commun. ACM, 33 (1990), 54-63.
[9] Guesgen, H.W. and Hertzberg, J. A Perspective of Constraint-Based Reasoning. Springer-Verlag, Berlin, Germany, 1992.
[10] Haralick, R.M. and Elliott, G.L. Increasing Tree Search Efficiency for Constraint Satisfaction Problems. Artificial Intelligence, 14 (1980), 263-313.
[11] Hunter, G.M. and Steiglitz, K. Operations on Images Using Quad Trees. IEEE Trans. Pattern Anal. and Machine Intell., 1 (1979), 145-153.
[12] Kautz, H.A. and Ladkin, P.B. Integrating Metric and Qualitative Temporal Reasoning. In Proc. AAAI '91 (Anaheim, CA, 1991), 241-246.
[13] Kin, N., Takai, Y. and Kunii, T.L. PictureEditor II: A Conversational Graphical Editing System Considering the Degree of Constraint. In Kunii, T.L. (ed.) Visual Computing. Springer-Verlag, Tokyo, Japan, 1992.
[14] Kramer, G.A. A Geometric Constraint Engine. Artificial Intelligence, 58 (1992), 327-360.
[15] Mackworth, A.K. Consistency in Networks of Relations. Artificial Intelligence, 8 (1977), 99-118.
[16] Mackworth, A.K. and Freuder, E.C. The Complexity of Some Polynomial Network Consistency Algorithms for Constraint Satisfaction Problems. Artificial Intelligence, 25 (1985), 65-74.
[17] Malik, J. and Binford, T.O. Reasoning in Time and Space. In Proc. IJCAI '83 (Karlsruhe, Germany, 1983), 343-345.
[18] Meiri, I. Combining Qualitative and Quantitative Constraints in Temporal Reasoning. In Proc. AAAI '91 (Anaheim, CA, 1991), 260-267.
[19] Mukerjee, A. and Joe, G. A Qualitative Model for Space. In Proc. AAAI '90 (Boston, MA, 1990), 721-727.
[20] Samet, H. The Design and Analysis of Spatial Data Structures. Addison-Wesley, Reading, MA, 1990.
[21] Shahookar, K. and Mazumder, P. VLSI Cell Placement Techniques. ACM Computing Surveys, 23 (1991), 143-220.
[22] Tanimoto, T.
Configuring Multimedia Presentations Using Default Constraints. In Proc. PRICAI '92 (Seoul, Korea, 1992), 1086-1092.
[23] Tokuyama, T., Asano, T. and Tsukiyama, S. A Dynamic Algorithm for Placing Rectangles without Overlapping. J. of Information Processing, 14 (1991), 30-35.
Randall Davis
davis@ai.mit.edu
MIT Artificial Intelligence Lab

Abstract

Two observations motivate our work: (a) model-based diagnosis programs are powerful but do not learn from experience, and (b) one of the long-term trends in learning research has been the increasing use of knowledge to guide and inform the process of induction. We have developed a knowledge-guided learning method, based in EBL, that allows a model-based diagnosis program to selectively accumulate and generalize its experience.

Our work is novel in part because it produces several different kinds of generalizations from a single example. Where previous work in learning has for the most part intensively explored one or another specific kind of generalization, our work has focused on accumulating and using multiple different grounds for generalization, i.e., multiple domain theories. As a result our system not only learns from a single example (as in all EBL), it can learn multiple things from a single example.

Simply saying there ought to be multiple grounds for generalization only opens up the possibility of exploring more than one domain theory. We provide some guidance in determining which grounds to explore by demonstrating that, in the domain of physical devices, causal models are a rich source of useful domain theories. We also caution that adding more knowledge can sometimes degrade performance. Hence we need to select the grounds for generalization carefully and analyze the resulting rules to ensure that they improve performance. We illustrate one such quantitative analysis in the context of a model-based troubleshooting program, measuring and analyzing the gain resulting from the generalizations produced.
Paul Resnick
presnick@eagle.mit.edu
MIT Center for Coordination Science

1 Introduction

Two observations motivate our work: (a) model-based diagnosis programs are powerful but do not learn from experience, and (b) one of the long-term trends in learning research has been the increasing use of knowledge to guide and inform the process of induction. We have developed a knowledge-guided learning method that allows a model-based reasoner to selectively accumulate and generalize its experience.

160 Davis

In doing so we have continued in the line of work demonstrated by programs that use knowledge to guide the induction process and thereby increase the amount of information extracted from each example. Previous work in this line includes the comprehensibility criterion used in [10] (a constraint on the syntactic form of the concept), the notion of a near-miss [15], and the use of knowledge from the domain to aid in distinguishing plausibly meaningful events from mathematical coincidences in the data [2]. Work on explanation-based learning (EBL) [12, 3, 9] has similarly emphasized the use of domain-specific knowledge as the basis for generalizations, allowing the system to develop a valid generalization from a single example.

While these systems have used increasing amounts of knowledge to guide the induction process, they have also for the most part intensively explored methods for doing one or another specific type of generalization. In addition, EBL generally takes the domain theory as given. Yet as is well known, the type of generalizations EBL can draw is determined by the domain theory, in particular by what the theory parameterizes and what it builds in as primitive. Our work can be seen in these terms as showing the domain theory author how to find and use multiple different theories, thereby extending the range of generalizations that can be drawn.
We report on experiments with an implemented set of programs that produce several distinctly different generalizations from a single example; as a result the system not only learns from a single example, it can learn a lot from that example.

For instance, given a single example of an adder misbehaving to produce 2 + 4 = 7, the system can produce a number of generalizations, including: the pattern of inputs and outputs consistent with a stuck-at-1 on the low order input bit, the N patterns of inputs/outputs consistent with a stuck-at-1 on any of the N input bits, as well as the patterns of inputs/outputs consistent with a stuck-at-0 on any of the input bits. We show why this last example is both a sensible and useful generalization even though no stuck-at-0 fault can explain the original example.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Simply saying there ought to be multiple grounds for generalization only opens up the possibility of exploring more than one space. We provide some guidance in determining which grounds to explore by demonstrating that, in the domain of physical devices, causal models are a rich source of useful grounds for generalization and can help spur the creation of systems that generalize in ways we might not have thought of otherwise.

Finally, not everything we can learn will improve performance; sometimes adding more knowledge will only slow us down. Hence we need to select the grounds for generalization carefully and analyze the resulting rules to ensure that the new knowledge actually improves performance. We illustrate one such quantitative analysis in the context of a model-based troubleshooting program that improves its performance over time as it learns from experience.
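The adder example can be made concrete with a toy fault enumerator; forcing a bit with a mask is an illustrative stand-in for the system's fault models, not its actual machinery:

```python
# Enumerate the single stuck-at faults on the input bits of an N-bit adder
# that are consistent with observing 2 + 4 = 7.

N = 4  # a 4-bit adder (illustrative width)

def with_stuck_bit(value, bit, stuck_at):
    """The input value as seen past a stuck-at fault on one bit."""
    return value | (1 << bit) if stuck_at == 1 else value & ~(1 << bit)

def consistent_faults(a, b, observed):
    """All (input, bit, stuck_at) hypotheses reproducing the observation."""
    hypotheses = []
    for bit in range(N):
        for stuck_at in (0, 1):
            if with_stuck_bit(a, bit, stuck_at) + b == observed:
                hypotheses.append(('A', bit, stuck_at))
            if a + with_stuck_bit(b, bit, stuck_at) == observed:
                hypotheses.append(('B', bit, stuck_at))
    return hypotheses

faults = consistent_faults(2, 4, 7)
```

For 2 + 4 = 7 only a stuck-at-1 on the low order bit of either input explains the symptom; the paper's point is that the system nonetheless also learns rules for the other bits and for stuck-at-0, against future faults.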
2 A View of Learning From Experience

Learning from experience means seeing a new problem as in some way "the same as" one previously encountered, then using what was learned in solving the old problem to reduce the work needed to solve the new one. What it means to be "the same" is a fundamental issue in much of learning. The simplest definition is of course exact match, i.e., the simplest form of learning from experience is literal memorization. Any more interesting form of learning requires a more interesting definition of "same"; we explore several such definitions in Section 3.

Clearly the more (different) definitions of similarity we have, the more use we can make of a previously solved problem: each new definition allows the transfer of experience from the old problem to another, different set of new problems.

This issue of the definition of "same" lies at the heart of all inductive learning. It is most obvious in systems like case-based reasoners (e.g., [8]), where a distance metric selects from the library the case that is most nearly the same as the new one. But the identical issue underlies all forms of generalization: every generalization embodies, and is a commitment to, one or another definition of similarity. For example, generalizing a concept by dropping a single conjunct from its definition is a commitment to defining two instances as "the same" if they share all the remaining conjuncts. Different and more elaborate forms of generalization yield correspondingly different and more elaborate definitions of "same," but the issue is unavoidable: every generalization embodies a definition of "same."

As we explain in more detail below, we have found it useful to view learning from experience in model-based diagnosis as an exercise in finding and using several different definitions of same.
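The dropped-conjunct commitment can be made concrete in a few lines; the feature vocabulary is invented for illustration:

```python
# Dropping one conjunct from a concept definition commits to calling two
# instances "the same" whenever they agree on all remaining conjuncts.

concept = {'shape': 'square', 'color': 'red', 'size': 'large'}

def matches(definition, instance):
    return all(instance.get(f) == v for f, v in definition.items())

generalized = {f: v for f, v in concept.items() if f != 'color'}  # drop one

a = {'shape': 'square', 'color': 'red', 'size': 'large'}
b = {'shape': 'square', 'color': 'blue', 'size': 'large'}
```

Instances a and b differ, yet under the generalized definition they count as "the same": the generalization and the similarity judgment are one and the same commitment.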
3 Multiple Dimensions of Generalization: Examples

Our approach has been to study the device models used in model-based reasoning, looking for useful ways in which two examples might be viewed as the same, then to create generalization machinery that can make that similarity easily detected from the available observations. Consider for instance the familiar circuit, misbehavior, and diagnosis shown in Example 1. What general lessons can we learn from this example? We show that at least five distinct generalizations are possible. We explore each of these in turn, describing them in terms of the dimension of generalization they use (i.e., what definition of "same" they employ), what gets generalized and how, and the rules that result, allowing the program to carry over experience from this single example to multiple different situations it may encounter in the future.

[Circuit figure: multipliers M1, M2, M3 feeding adders A1 and A2; inputs include A=3 and E=3.]
Example 1: Inputs and observed outputs shown; either A1 or M1 is broken.

The contribution here is not machinery, since small variations on traditional EBL suffice for the task at hand. Our focus is instead on finding multiple ways to view the device and thereby provide guidance to the person writing the domain theory (or theories) that EBL will use to produce generalizations.

3.1 The Same Conflict Set

Imagine that we troubleshoot Example 1 using a diagnostic engine in the general style of [5]: we use the behavior rules for each component to propagate values and keep track of which components each propagated value depends on. The prediction that F should be 12 in Example 1, for instance, depends on values propagated through M1, M2 and A1.
When two propagations (or an observation and a propagation) offer two different values for the same spot in the circuit, we construct a conflict set, the set of all components that contributed to both predictions.¹ Conflict sets are useful raw material from which single and multiple point of failure hypotheses can be constructed [5]. Our concern in this paper is with single points of failure; the consequences for multiple points of failure are discussed in [14]. In Example 1 two conflict sets are constructed: (M1 M2 A1) and (M1 M3 A1 A2).

¹The intuition is that at least one of the components in a conflict set must be malfunctioning. If they were all working properly (i.e., according to their behavior rules), there would have to be two different values at the same point in the circuit, which is of course impossible. Hence at least one is broken.

Diagnostic Reasoning 161

Now consider Example 2.

[Circuit figure: the same circuit with inputs including B=4, C=2, D=1, E=5.]
Example 2.

Even though the symptoms are different, the pattern of reasoning (i.e., the set of propagations) is the same, and hence so are the resulting conflict sets. There are also numerous other sets of symptoms which produce the same reasoning pattern and conflict sets. Thus if we can determine from Example 1 the general conditions on the values at the input/output ports that will produce those conflict sets, we would in future examples be able to check those conditions at the outset, then jump immediately to the result without having to replay that pattern of reasoning (and hence save the work of propagating through the components).

3.1.1 Mechanism and Results

We find those conditions by replacing the specific values at the inputs and outputs with variables, then re-running the propagations, thereby generalizing the pattern of reasoning that led to that answer. Our algorithm is a small variation on standard explanation-based learning (see, e.g., [4]), described in detail in [14] and omitted here for reasons of space.
For the first conflict set the result is the generalized rule:²

R1: (IF (NOT (= ?F (+ (* ?A ?C) (* ?B ?D))))
    (THEN (CONFLICT-SET '(M1 M2 A1))))

For the second conflict set we get:

R2: (IF (NOT (= (+ (- ?F (* ?A ?C)) (* ?C ?E)) ?G))
    (THEN (CONFLICT-SET '(M1 M3 A1 A2))))

As a result of this process, from Example 1 the system has produced in R1 the condition on the I/O ports for which the pattern of reasoning will lead to the conflict set (M1 M2 A1) (viz., AC + BD ≠ F), and in R2 the condition leading to the conflict set (M1 M3 A1 A2) (viz., F − AC + CE ≠ G). Both of these rules derived from Example 1 are applicable to Example 2, hence the system would now be able to derive the conflict sets for Example 2 in one step each. While these particular rules simply encapsulate in one expression the steps of the derivation, they still provide useful speedup (28%, a figure we document and analyze in Section 4.2).

²Symbols preceded by question marks are variables.

The same technique was also applied to the more realistic carry-lookahead adder circuit in Example 3, where it produced a 20% speedup.

Example 3: A carry-lookahead adder circuit.

3.2 The Same Fault in the Same Component

There is another, different sense in which Examples 1 and 2 are the same: they can both be explained by the same fault (a stuck-at-1) in the same component (the low order output of M1). Hence another, different generalization that we can draw from Example 1 is the set of examples consistent with the hypothesis that the low order bit of M1 is stuck-at-1.

As before, there are numerous sets of symptoms consistent with this hypothesis, so if we can determine the general conditions and check them at the outset, we may be able to save work. In this case the machinery we use to derive the general rules is symbolic fault envisionment: the system simulates the behavior of the circuit with the fault model in place, but uses variables in place of actual values.
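R1 and R2 can be rendered as one-step executable checks. Note that the concrete values for Example 1 below are largely our assumption (only a few are legible in the figure); they are chosen to agree with the text, which says F should be 12 and that both conflict sets arise:

```python
# R1 and R2 as one-step conflict-set checks. Assumed Example 1 values:
# A=3, B=3, C=2, D=2, E=3, observed F=13, G=12 (only partly legible in
# the original figure).

def r1(a, b, c, d, e, f, g):
    """If AC + BD != F, report conflict set (M1 M2 A1)."""
    return ('M1', 'M2', 'A1') if a * c + b * d != f else None

def r2(a, b, c, d, e, f, g):
    """If F - AC + CE != G, report conflict set (M1 M3 A1 A2)."""
    return ('M1', 'M3', 'A1', 'A2') if (f - a * c) + c * e != g else None

example1 = dict(a=3, b=3, c=2, d=2, e=3, f=13, g=12)
conflict_sets = [cs for cs in (r1(**example1), r2(**example1)) if cs]
```

Each rule replaces a chain of propagations with a single arithmetic test, which is the source of the reported speedup.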
The result in this case is:

R3: (IF (AND (= ?F (+ (stuck-at-1 0 (* ?A ?C)) (* ?B ?D)))
             (= ?G (+ (* ?B ?D) (* ?C ?E))))
    (THEN (fault-hypothesis '(stuck-at-1 0 (OUTPUT M1)))))

The resulting rule³ in this case is not particularly deep (it simply encapsulates the fault simulation) but it can still provide useful speedup. In Section 6 we suggest what would be required to enable deriving a rule that used terms like "the value at F is high-by-1," which would be both more interesting and more powerful.

The main point here is that Example 1 has been generalized in a new and different way, based on a different domain theory. Where the previous generalization arose from a domain description phrased in terms of correct behavior (and conflict sets), this generalization comes from a domain theory described in terms of fault modes (and their consistency with observations). The system can now use that generalization to apply the experience of Example 1 to a different set of problems: those with the same fault in the same component (M1).

³stuck-at-1 takes two arguments: a number indicating which bit is affected, and the value affected; hence (stuck-at-1 0 (OUTPUT M1)) means the 0th (low order) bit of the output of M1 is stuck at 1.

3.3 The Same Fault in a Component Playing the Same Role

Example 1 can be generalized in yet another way: given the additional information that M1 and M3 play similar roles, we can derive the general conditions consistent with the hypothesis that the low order output bit of M3 (rather than M1) is stuck-at-1.
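R3 can likewise be rendered as an executable consistency predicate for the fault hypothesis, again using the assumed Example 1 values (chosen so that, as the text states, a stuck-at-1 on M1's low output bit explains the symptoms):

```python
# R3 as a predicate: the observations fit a stuck-at-1 on the low-order
# output bit of M1 iff simulating the circuit with that fault in place
# reproduces F and G. Assumed values: A=3, B=3, C=2, D=2, E=3, F=13, G=12.

def stuck_at_1(bit, value):
    return value | (1 << bit)

def sa1_on_m1_output(a, b, c, d, e, f, g):
    """R3: F = stuck-at-1(0, A*C) + B*D  and  G = B*D + C*E."""
    return f == stuck_at_1(0, a * c) + b * d and g == b * d + c * e

hypothesis_fires = sa1_on_m1_output(3, 3, 2, 2, 3, 13, 12)
```

On a healthy circuit (F = 12) the same predicate correctly fails, so the rule only proposes the fault hypothesis when the observations warrant it.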
That is, the fault is the same but it is occurring in a different component, a component that happens to be playing the same role.⁴ We use symbolic fault envisionment once again to produce the rule:

R4: (IF (AND (= ?F (+ (* ?A ?C) (* ?B ?D)))
             (= ?G (+ (* ?B ?D) (stuck-at-1 0 (* ?C ?E)))))
    (THEN (fault-hypothesis '(stuck-at-1 0 (OUTPUT M3)))))

A more interesting and realistic example is the application of this to the inputs of an N-bit carry-chain adder (Example 4): given the symptoms 2 + 4 = 7, the diagnosis of a stuck-at-1 on the low order bit of one of the inputs, and the knowledge that all the input bits play the same role, we can produce rules for a stuck-at-1 on any of the 2N input bits.

[Circuit figure: a four-bit carry-chain adder with inputs A0-A3 and B0-B3, sum outputs S0-S3 and carry C.]
Example 4: A malfunctioning four-bit carry chain adder: 2 + 4 = 7.

⁴The role of a component refers to what it does in the device. The symmetry of the circuit in Example 1, for instance, means that M1 and M3 play the same role, while Example 4 can be viewed as a collection of 4 bit slices, each of which plays the same role, viz., adding its inputs to produce a sum and carry bit.

Two comments about this dimension of generalization help make clear the nature of the undertaking and the role of domain knowledge. First, the generalizations created are guided and informed by knowledge about this device. In Example 4, the jump was not from a stuck-at-1 on the low order input bit to a stuck-at-1 on every wire in the device. The example was instead generalized to a small subset of the wires that "made sense" in the current context, namely those that were playing analogous roles.

This restriction makes sense because we are relying on the heuristic that role-equivalent components are likely to fail in similar ways. Hence if we create generalizations only for the analogous components, we improve the chances that the generalizations will in fact prove useful in the future (i.e., the component will actually break in that fashion).
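Role-based generalization on Example 4 can be sketched as rule generation over the role-equivalent input bits; the names and bit-mask representation are illustrative:

```python
# From one diagnosed fault (stuck-at-1 on the low-order bit of input A),
# generate the analogous hypothesis rule for every role-equivalent input
# bit of a 4-bit adder, then evaluate each rule against 2 + 4 = 7.

N = 4
ROLE_EQUIVALENT = [(inp, bit) for inp in ('A', 'B') for bit in range(N)]

def make_rule(inp, bit):
    """Rule: observation is consistent with stuck-at-1 on this input bit."""
    def rule(a, b, observed):
        if inp == 'A':
            a |= 1 << bit
        else:
            b |= 1 << bit
        return a + b == observed
    return rule

rules = {loc: make_rule(*loc) for loc in ROLE_EQUIVALENT}
matching = [loc for loc, rule in rules.items() if rule(2, 4, 7)]
```

All 2N rules are generated and stored, but only the low-order-bit hypotheses actually match the original symptom; the rest are kept against future, analogous failures.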
Second, note that while our current system must be told explicitly which components are playing equivalent roles, the information about structure and function in the device model is just the sort of knowledge needed to derive that. More important, it is by examining such models and asking how to see different examples as the same that we are led to notions like role equivalence as potentially useful dimensions of generalization.

3.4 The Same Family of Fault

Example 1 can be generalized in yet another way: given the additional information that stuck-at-1 is a member of a family of faults that also includes stuck-at-0, we can generalize across the fault family.⁵ From the single example with a stuck-at-1, we can generalize to the other member of the family, deriving the general conditions under which it is consistent to believe that the output of M1 is stuck-at-0. We use symbolic fault envisionment to produce the rule:

R5: (IF (AND (= ?F (+ (stuck-at-0 0 (* ?A ?C)) (* ?B ?D)))
             (= ?G (+ (* ?B ?D) (* ?C ?E))))
    (THEN (fault-hypothesis '(stuck-at-0 0 (OUTPUT M1)))))

This generalization is motivated by the heuristic that if we encounter a device affected by one of the faults in a family, it is likely in the future to be affected by others in the family as well, hence it is worthwhile to create the corresponding generalizations.

⁵The family here is the simple but real hierarchy with stuck-at at the root and stuck-at-1 and stuck-at-0 as child nodes.

3.5 The Same Family of Fault in a Component Playing the Same Role

Example 1 can be generalized in yet one final way, by simply composing the ideas in Sections 3.3 and 3.4. From the original fault in the observed location, we move to a hypothesized fault in the same family in an analogous location. That is, while the experience in
Example 1 was a stuck-at-1 in M1, the system next derives the conditions that indicate a stuck-at-0 in M3 (using the same fault envisionment machinery):

R6: (IF (AND (= ?F (+ (* ?A ?C) (* ?B ?D)))
             (= ?G (+ (* ?B ?D) (stuck-at-0 0 (* ?C ?E)))))
    (THEN (fault-hypothesis '(stuck-at-0 0 (OUTPUT M3)))))

If this approach is applied to the carry-chain adder in Example 4, we have the first step toward the generalization from the single example 2 + 4 = 7 to the general rule: an adder that is off by a power of two indicates a stuck-at on one of the bits. We comment in Section 6.1 on the prospects for this next level of generalization.

4 Comments on the Examples

4.1 Multiple Dimensions of Generalization

Several things stand out about the sequence of examples reviewed. First is the number and diversity of the lessons that have been learned. A number of different kinds of generalizations were derived from the single instance in Example 1, by relying on the notion that problems encountered in the future might be "the same" as Example 1 with respect to: (i) the conflict set generated, (ii) the kind and location of fault, (iii) the role of the faulty component, and (iv) the family the fault belonged to, as well as combinations of those.

Each lesson learned means that the single experience in Example 1 can be seen as applicable to a new and different set of examples in the future. We can view the results in terms of the Venn diagram in Fig. 5: the universe is the set of all possible diagnostic problems (I/O values) for the circuit in Example 1; the sets are the generalizations captured by rules R1 through R6 (and are labeled with the rule name). The specific problem presented by Example 1 is noted by a point marked +.

R1: any example in which M1, M2, or A1 is a candidate.
R2: any example in which M1, M3, A1, or A2 is a candidate.
R3: any example in which M1 SA-1 is a candidate.
R4: any example in which M3 SA-1 is a candidate.
R5: any example in which M1 SA-0 is a candidate.
R6: any example in which M3 SA-0 is a candidate.

Fig. 5. [Venn diagram: generalizations R1-R6 as sets over the space of diagnostic problems for the circuit in Example 1; the point marked + is Example 1 itself.]

Sets R1 and R2 are produced by traditional use of EBL on Example 1, while sets R3-R6 are produced because the system has available a variety of different descriptions of (i.e., a variety of domain theories for) the device. R3-R6 are also more specific than R1 and R2. In the case of sets R1-R3 the process of generalizing from Example 1 can be seen as different ways to expand Example 1 into distinct circles. Note that R4-R6 are appropriately produced by the system even though they don't explain the original example.

We discover multiple ways in which to generalize by examining the models of structure and behavior to determine what kinds of similarities can be exploited, and by developing methods for recognizing those similarities. Hence we are not simply suggesting using multiple generalizations; we are suggesting in addition where they might be found: causal models have turned out to be a particularly useful source of inspiration, supplying a rich set of different dimensions along which examples can be generalized.

Despite the number of different generalizations, the machinery we have used to create them is both simple and relatively uniform. It is simple in the sense that straightforward application of explanation-based generalization (for R1 and R2) and symbolic simulation (for the rest) sufficed to produce the generalizations; it is relatively uniform in the sense that two mechanisms suffice across a range of different grounds for generalization.

Note also that each of these dimensions of generalization provides significant guidance. The system is aggressive in the number and variety of generalizations it draws, but is still far from doing exhaustive search.
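Rules like R5 are in effect executable consistency checks on the observed I/O values. A minimal sketch, assuming integer signals and the F = A*C + B*D, G = B*D + C*E structure visible in the rules (the helper names are ours, not the system's):

```python
def stuck_at(value, bit, level):
    """Force a single bit of a component's output to a fixed level."""
    return (value | (1 << bit)) if level else (value & ~(1 << bit))

def r5_applies(a, b, c, d, e, f, g):
    """R5 (sketch): are the observed outputs (f, g) consistent with
    bit 0 of M1's output (a*c) being stuck at 0?"""
    return f == stuck_at(a * c, 0, 0) + b * d and g == b * d + c * e
```

For example, with A = 3, C = 3 the fault drops M1's output from 9 to 8, so an observed F of 12 rather than 13, together with a correct G, makes the rule fire.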
For instance, when given a stuck bit in an input wire of a 32-bit adder and generalizing to components playing the same role, it will explore faults in the 63 other input bits, but that is far less than the total number of wires in the device. The subset of wires selected is guided by knowledge from the domain about component role and function.

Finally, our emphasis on domain knowledge as a source of guidance and inspiration appears to be in keeping with the increasing trend toward more informed varieties of inductive learning. The earliest efforts attempted to find statistical regularities in a large collection of examples and hence might be said to be regularity based; later efforts became explanation based. The approach described here, with its reliance on domain-specific information, is in the spirit of knowledge-guided generalization.

4.2 Useful Dimensions of Generalization

Thus far through the discussion we have implicitly assumed that if we are guided by the domain knowledge, every lesson learned will improve performance. But this need not be true: sometimes it is cheaper to rederive an answer than even to check whether we have relevant experience (i.e., examine the generalizations to see if we have one that matches), much less actually apply that experience.

We suggest that to be useful a dimension of generalization must capture situations that are recurrent, manifest, and exploitable.6 By recurrent we mean that the situations the rule applies to must recur often enough in the future to make the rule worth checking; if the rule describes a situation the problem solver will never encounter, the rule can't produce any benefit and will only slow us down. By manifest we mean the situations must be relatively inexpensive to recognize; if recognizing costs more than rederiving the answer, the rule can only slow us down.
By exploitable we mean that the rule must provide some power in solving the problem, i.e., some discrimination in the diagnosis.

This style of analysis is illustrated in general terms by examining the idea in Section 3.3 of generalizing to the same fault in a component playing an equivalent role. The rules are likely to be useful because the situations they describe should recur: role-equivalent components are likely to fail in similar ways. The situations they describe are exploitable: if the rule applies we get a specific location for the fault. But the situations are not all that easy to check, primarily because for the N-bit carry-chain adder we have 2N separate rules rather than one that captures the appropriate level of generalization (viz., a result high by a power of two). We return to this question of level of generalization and level of language in Section 6 below.

A more quantitative analysis is provided by the data in Table I (below). We report tests on 200 cases of the circuit in Example 1 (100 training cases, 100 test cases) and 300 cases of the carry-lookahead adder (150 training, 150 testing).7 The results demonstrate that a troubleshooter with these generalizations provides a speedup of 28% and 20% on the circuits of Examples 1 and 3, respectively.

The speedup shown in Table I arises in small part from the "encapsulation" effect of rules R1 and R2: i.e., arriving at the conflict sets from the single arithmetic calculation in the rule premise, without the overhead of TMS-style recording of intermediate calculations required when running the standard diagnostic system.

A larger part of the speedup arises from the focus provided by the generalized rules. In Example 1, for instance, two of the generalized rules narrow the possible candidates down to (A1 M1) at the outset. The system still has to check each of these, since it cannot assume that its set of generalized rules is complete.
As this is in fact the minimal set of single-fault candidates, no additional conflict sets are derived when checking these candidates, and a large part of the speedup arises from avoiding the deriving (and re-deriving) of conflict sets that occurs in the standard diagnostic system.8

Finally, the data also provide a basis for quantitative calibration of the cost/benefit involved in learning. Additional measurement indicated that of the 5.66 seconds taken on an average case in Example 3, an average of .70 seconds was devoted to checking the generalized rules. Hence those rules cost .70 sec, but saved 7.07 - (5.66 - .70) = 2.11 sec. Had the rules been three times as expensive to check, there would have been negligible benefit from having them; any more expense would have meant that learning was disadvantageous: it would on average slow down the system.

Alternatively, we can determine that having the generalized rules saved on average (7.07 - (5.66 - .70)) / 3.15 = .67 sec per generalization rule used, but each rule cost .70 / 221 = .0032 sec on average. This produces a cost/benefit ratio of .67/.0032, or 211. Hence generalized rules that are on average applicable in fewer than 1 out of 211 examples will slow down the system.

The basic message here is that it is important to be both creative and analytical: we need to be creative in discovering as many dimensions of generalization as possible, but we also then need to be analytical (as above) in checking each dimension to determine whether it is in fact going to provide a net gain in performance.

Table I. In Example 1 there are only 3 conflict sets (and hence 3 generalized rules) possible; the third conflict set is (M3 M2 A2). Due to the symmetry of the circuit, two of the rules are always applicable to any single-fault diagnosis. In the more realistic circuit of Example 3, there are 221 different rules possible; on average 3.15 of them are applicable to any single-fault problem.
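The cost/benefit arithmetic above is easy to reproduce. The numbers are taken from the text; the variable names are ours:

```python
avg_with_rules = 5.66   # sec per average case in Example 3, with generalized rules
avg_without    = 7.07   # sec per average case without them
check_cost     = 0.70   # sec per case spent checking the generalized rules
total_rules    = 221    # generalized rules possible for Example 3
used_per_case  = 3.15   # rules applicable to an average single-fault problem

# Net time saved per rule actually used: the rules saved
# 7.07 - (5.66 - 0.70) = 2.11 sec per case, spread over 3.15 rule uses.
saving_per_rule = (avg_without - (avg_with_rules - check_cost)) / used_per_case

# Checking cost amortized over all 221 candidate rules.
cost_per_rule = check_cost / total_rules

ratio = saving_per_rule / cost_per_rule   # roughly 211
```

Hence the break-even point quoted in the text: a generalized rule applicable in fewer than about 1 in 211 cases costs more to check than it saves.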
6These three factors have also been independently identified in [11].
7Each case was a randomly chosen set of inputs and outputs for the circuit, constrained only by the criterion that the I/O values had to be consistent with a single stuck-at failure somewhere in the device.
8Our original implementation used a JTMS; a second implementation using an ATMS demonstrated the same basic result, due to savings in bookkeeping of environments.

5 Related Work

This work fits in the tradition of the long trend noted earlier: the increasing use of domain knowledge to guide learning. It also shares with other work the notion of using a simulation model to support learning. Work in [13], for instance, uses a simulation model to guide rule revision in the face of diagnostic failure. Work in [1] uses a simulation model as a convenient generator of test cases. There are a number of minor differences: unlike [13], we learn from success rather than failure, and unlike [1] we are in a relatively simple noise-free domain and hence can learn from a single example.

A more important difference, however, is our emphasis on learning multiple things from the model. The point is not simply that a device model can be used to support learning, but that there is a considerable body of knowledge in such models that can be used in multiple ways.

Work in [9] is in some ways similar to ours, exploring learning from experience in model-based diagnosis. That work relied on EBL as the mechanism to produce its generalization but had a single theory of the world and hence produced only a single generalization. By comparison our work, seen in EBL terms, urges using multiple theories, suggests that causal models can be a rich source of those theories, and provides a framework for evaluating the likely utility of each theory.

Work in [11] demonstrated some of the first efforts to quantify the costs and benefits of learning.
Our discussion in Section 4.2 offers additional data on that subject.

Finally, some work in EBL has explored the "multiple explanation problem" (e.g., [3]), suggesting ways to find a valid explanation when the theory supports multiple, possibly invalid explanations (because the theories may be incomplete or approximate). This work explores instead the "multiple explanation opportunity": we want to learn as much as possible from the example and use a variety of correct theories to derive multiple useful and valid generalizations.

6 Limitations and Future Work

The current implementation is an early step in the directions outlined here and has some important limitations. The results cited, for instance, come from a set of programs and experiments rather than from a single well-integrated body of code. In addition, as noted earlier, the domain knowledge used to guide the system must be supplied directly: to generalize across components that play the same role, the system must be told explicitly which components match; when generalizing across fault families the system must be told explicitly which faults are in the family.

Most fundamentally, we have supplied the human with a set of guidelines for thinking about the world, rather than a program that automates such thinking. When our system draws multiple generalizations, it is because we have told it what to do. Our work is thus a first step in making such knowledge explicit, because we can specify the general framework that led to the results, but have not yet automated its application.

As with all EBL systems, our program could in principle produce all of its generalizations before encountering any actual example [6, 7]. The heuristic of allowing experience to trigger generalization depends on the belief that the past is a good predictor of the future, hence past experience is a useful "seed" from which to generalize.
This also highlights the utility of multiple domain theories: it can be difficult to say in what way the past will be a good predictor of the future. Will the same conflict sets occur? Will the same components fail in the same way? The use of multiple domain theories allows us to hedge the bet that lies at the heart of this heuristic, by simply making several such bets about how the past will predict the future.

Another interesting and pervasive limitation becomes evident in examining the language in which the rules are stated. The good news is that, like EBL, our system does not need an externally supplied inductive bias; the language used to construct the generalizations comes from the domain theory. But all we can do is use that language as it stands; some of the rules could be made both more intuitive and easier to check if we could develop the appropriate elaborations of the language.

It would be useful, for example, to be able to rewrite R3 in simpler terms. In this case the crucial non-trivial knowledge is the recognition that stuck-at-1 at the low order input to an adder (in this case M1) will result in a symptom that might be called high-by-1. Once given this, it is relatively simple to rewrite the rule into a form far closer to the normal intuition about this case, viz., "if F is high by 1 and G is correct, then possibly M1 is stuck at 1 in the low order bit." The difficult task is deriving the initial insight about adder behavior, i.e., the connection between behavior described at the level of bits (stuck-at) and behavior described at the level of numbers (high by 1).

A second example arises in generalizing to the same fault in role-equivalent components. As Example 4 illustrated, when creating those generalizations the system can do only one at a time, rather than capturing the entire set of analogous components in a single rule that refers to a result "high by a power of two".
This is difficult in general; for Example 4 the difficulty is recognizing the relevant generalization: each bit represents a different power of two. We have speculated elsewhere [14] that a design verification might already have the information needed.

7 Conclusion

All of these are potential directions for useful further development. The primary utility in the notion of multiple dimensions of generalization, however, is not that we can make the process entirely autonomous when it is given only a description of structure and behavior. The primary utility is rather that the notion of multiple kinds of generalizations and the use of such models provides to the researcher a source of inspiration, urging the creation of domain theories that can be generalized in ways we might not have thought of otherwise. The machinery used to produce those generalizations can be improved in many ways; the issue here is one of having suggested a set of directions in which to work.

We displayed the result of those new directions by showing how five distinctly different general lessons can be learned from the single example in Figure 1, provided a framework in which those generalizations can be evaluated for effectiveness, and documented the quantitative speedup provided by one of them.

Acknowledgments

This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-91-J-4038. Additional support for this research was provided by Digital Equipment Corporation, an NSF Graduate Fellowship, McDonnell Douglas Space Systems, and General Dynamics Corp.

References

[1] Buchanan B, et al., Simulation-assisted inductive learning, Proc AAAI-86, pp. 552-557.
[2] Buchanan B, Mitchell T, Model-directed learning of production rules, in Pattern-Directed Inference Systems, Waterman, Hayes-Roth (eds.), Academic Press, 1978.
[3] Cohen W, Abductive explanation based learning: a solution to the multiple explanation problem, TR ML-TR-26, Rutgers Univ CSD, 1989.
[4] DeJong G, Mooney R, Explanation-based learning, Machine Learning, 1(2), 1986.
[5] de Kleer J, Williams B, Diagnosing multiple faults, AI Jnl, April 1987, pp. 97-130.
[6] Etzioni O, Why PRODIGY/EBL works, Proc AAAI-90, pp. 916-922.
[7] Etzioni O, STATIC - a problem space compiler for PRODIGY, Proc AAAI-91, pp. 533-540.
[8] Kolodner J, et al., A process model of case-based reasoning, Proc IJCAI-85, pp. 284-290.
[9] Koseki Y, Experience learning in model-based diagnostic systems, Proc IJCAI-89, pp. 1356-1362.
[10] Michalski R, A theory and methodology of inductive learning, in Machine Learning, Michalski, Carbonell, Mitchell (eds.), Tioga Press, 1983.
[11] Minton S, Quantitative results concerning the utility of explanation-based learning, Proc AAAI-88, pp. 564-569.
[12] Mitchell T, et al., Explanation-based generalization, Machine Learning, 1(1), 1986, pp. 47-80.
[13] Pazzani M, Refining the knowledge base of a diagnostic expert system, Proc AAAI-86, pp. 1029-1035.
[14] Resnick P, Generalizing on multiple grounds, MIT AI Lab TR-1052 (MS Thesis), May 1988.
[15] Winston P H, Learning structure descriptions from examples, in Winston (ed.), The Psychology of Computer Vision, McGraw-Hill, 1975.
Hybrid Case-Based Reasoning for the Diagnosis of Complex Devices*

M. P. Féret and J. I. Glasgow
Department of Computing & Information Science, Queen's University, Kingston, Ontario, Canada, K7L 3N6
{feret,janice}@qucis.queensu.ca

Abstract

A novel approach to integrating case-based reasoning with model-based diagnosis is presented. The main idea is to use the model of the device and the results of diagnostic tests to index and match cases representing past diagnostic situations with the current one. The initial diagnostic methodology is presented as well as the problems encountered while applying this methodology to two real-world devices. The incorporation of a case-based reasoning system is then motivated and described in detail. Experimental results show the effectiveness of both the indexing schema and the matching algorithm. The paper also discusses how and why these results can be generalized to a multiple fault situation, to other types of device models and to other applications in the field of artificial intelligence.

Introduction

This paper presents an approach to integrating case-based reasoning with a traditional diagnostic method for complex devices. This generic approach to diagnosis is based on a hierarchical decomposition of mechanical devices and uses sensor data, collected in real-time and stored in a database, to guide the search towards hypothetical diagnoses [6, 7, 8, 16]. Some of the difficulties encountered while applying a model-based reasoning (MBR) diagnostic method to two real-world devices are identified. These difficulties arose from imperfections of the device model, due to human errors or misconceptions. These imperfections lead to incorrect models which produced inadequate diagnostic performance.
In this paper we further develop ideas initially introduced in [7], providing additional motivations for our approach, and experimental results that support the claims of the paper.

CBR has traditionally been used as a stand-alone problem-solving method [13, 14], sometimes applied to diagnostic problems (e.g., [20]). Only recently has CBR been used in association with other problem-solving paradigms [15, 10, 19]. Our approach is novel in that it uses CBR only after the MBR process has taken place, and in that it uses the model and the results of the MBR process to index the cases. The hybrid CBR/MBR methodology described in this paper incorporates a critique of the results of the model-based approach in the light of past experience and provides the human operator with a means for exploring alternative hypotheses. The integration of CBR with the structural isolation process allows for a simple and effective indexing schema as well as a computationally inexpensive similarity measure for cases.

This paper initially presents the structural isolation process. It then lists some of the problems arising when trying to apply any MBR diagnostic method to complex devices. The hybrid approach combining CBR and MBR is presented along with some experimental results. The final discussion summarizes the contributions of the research.

*This research was supported through a contract from the Canadian Space Agency (STEAR program), a scholarship and an operating grant from the Natural Sciences and Engineering Research Council (NSERC) of Canada. We also would like to thank Spectrum Engineering Corporation Ltd., Peterborough, Ontario, Canada.

168 Féret

Generic Diagnosis

Our approach to generic diagnosis has been fully implemented in the Automated Data Management System (ADMS), which has previously been described and compared to other techniques for diagnosis in [6].
This section describes the structural isolation process and outlines the problems that arose in applying it to two real-world devices, a robotic system called the Fairing Servicing Subsystem, and a Reactor Building Ventilation System [6, 7]. The first device is a robot placed at the rear of a ship to automatically replace damaged fairings on a cable which drags an underwater detection system. The fairings prevent the detection system from drifting away from the axis of the ship. The Reactor Building Ventilation System is a modified model of a ventilation system for an existing nuclear power plant.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Structural Isolation Process

Govindaraj and Su suggested that empirical constraints should direct the formation of knowledge representation in a format that reflects how experts solve problems [12]. They also observed that human diagnosis proceeds in a hierarchical manner, starting at higher levels of abstraction to generate hypotheses that guide the diagnosis at lower levels. Hierarchical, structural and conceptual data structures have already been found useful for diagnostic applications (e.g., [11, 12, 25]).

The ADMS methodology for diagnosis of mechanical devices is based on considering the device as a hierarchy of components or groups of components. This hierarchy is expressed using a frame language and constitutes the backbone of the knowledge base built for individual applications of the ADMS to a mechanical device. All recognizable components that can be diagnosed as sources of failure are situated at the bottom of the hierarchy. Each component has associated failure mechanism patterns. These patterns are conjunctions of sensor functions, testing specific conditions on the sensor data stored in a database [16].
Between these top and bottom levels, the device is decomposed into meaningful substructures, associated with test conditions indicating whether these substructures are potentially faulty.

While structural knowledge is usually available from engineering design documents and easily encoded, the knowledge about failure modes and patterns tends to be complex and of various types. In the case of sensor-based diagnosis, the queries to the database and their use in the sensor functions lead to numerous difficulties. These difficulties constitute the major differences between complex devices and electronic circuits, where failures and their consequences are straightforward to characterize.

The models used by the ADMS methodology therefore consist of a structural decomposition of the devices, along with necessary conditions for substructures and basic components being potential diagnoses. They are fault models in which all testing conditions use the real-time sensor data stored in the database through a set of predefined queries.

In the context described above, performing diagnosis involves traversing the hierarchy according to results of necessary conditions applied to the substructures represented in the hierarchy. At a given node in the hierarchy, if there is no evidence that the substructure can be faulty, then the whole substructure can be pruned from the search space of potentially faulty components. If there is such evidence, the substructure is examined further and more local conditions are applied to subnodes of the current node. This process is known as the structural isolation process [17]. It also handles multiple faults by simultaneously investigating multiple paths in the model.

The output of the diagnostic algorithm is a ranked list of potential diagnoses from which the operator can (or not) select the final (supposedly correct) diagnosis.
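The traversal just described can be sketched in a few lines. This is an illustrative sketch, not the ADMS data structures: `Node` and its necessary-condition test are names we introduce here.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Node:
    """One substructure in the device hierarchy, with the necessary
    condition that must hold for it to contain a fault."""
    name: str
    condition: Callable[[Dict[str, bool]], bool]
    children: List["Node"] = field(default_factory=list)

def isolate(node: Node, sensors: Dict[str, bool]) -> List[str]:
    """Structural isolation (sketch): prune any substructure whose
    necessary condition fails, descend into the rest.  Following every
    surviving branch is what allows multiple simultaneous faults."""
    if not node.condition(sensors):
        return []                    # no evidence of fault: prune whole subtree
    if not node.children:
        return [node.name]           # basic component: a potential diagnosis
    hits: List[str] = []
    for child in node.children:
        hits += isolate(child, sensors)
    return hits
```

A ranked list would then be produced by scoring the returned components with confidence levels and component-history criteria, as described below.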
The ranking involves relative levels of confidence in each potential diagnosis as well as criteria related to the history of each component (such as mean time between failures, time to life expectancy, etc.).

Diagnosis Weaknesses

Diagnosis from first principles is believed to be NP-hard in the general case [19, 21, 22]. Many researchers have tried to focus the search for minimal diagnoses in the diagnosis-from-first-principles approach [2, 3, 4, 5] or to reduce the complexity of similar methods (e.g., [?, ?]).

Analyzing and compiling human diagnostic problem-solving capabilities is difficult. Misunderstandings, incorrect specifications, typos, etc. typically lead to partially incorrect models which are difficult to debug, especially when there is no simulation program available for the device. Moreover, device models are not always the most natural or efficient representation for diagnosing faulty components [23]. These knowledge acquisition and validation problems clearly weaken the reliability of model-based systems such as the ADMS and need to be addressed before the system can be used for critical, real-world applications. Fault models consisting of necessary conditions (for parts to be faulty) are simpler to express and to implement than correct behavior models. However, fault models have a well-known drawback: they only model foreseen, predictable faults. Therefore, diagnostic systems based on fault models are ineffective in the presence of unforeseen faults.

The process of human diagnostic problem-solving is often suboptimal [12, 26]. Resulting shortcomings are likely to be found in any model designed and implemented by humans. In our experience, we have found such mistakes in the experts' explanations and reasoning processes. This leads to models that are either incomplete or inconsistent because they incorporate human limitations.
Recalling past relevant decisions (diagnostic cases) is an effective way of reducing the impact of both the model's and the human's inadequacies. The next section describes the addition of a CBR system to the approach described above. This hybrid approach assists in overcoming the bottlenecks of knowledge acquisition and human reasoning imperfections that limit the capabilities of current model-based approaches to diagnosis.

CBR and Structural Isolation

The philosophy behind CBR is that "raw", unabstracted experiences can be used as a source of knowledge for problem-solving [14, 24]. A CBR system stores past experiences in the form of cases. When a new problem arises, the system retrieves the cases most similar to the current problem, then combines and adapts them to derive and criticize a solution. If the solution is not satisfactory, new cases are retrieved to further adapt it in the light of new constraints, expressed from the non-satisfactory parts of the proposed solution. The process is iterated until the proposed solution is judged acceptable. After a problem is solved, a new case can be created and stored in the casebase. The main issues to address when building CBR systems are to define an effective indexing schema, efficient retrieval and storage mechanisms, a reliable similarity measure for cases and an adaptation mechanism.

CBR is limited by the difficulty of indexing, retrieving and evaluating previous experiences. This is especially true in the case of diagnostic applications, where similar symptoms can have very different or multiple causes. Techniques that work for small problems do not necessarily handle scaled-up versions of the same problem. Model-based approaches are limited by the fact that complete and consistent models of complex devices are difficult to produce.

Researchers have previously combined CBR with other problem-solving paradigms.
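The retrieve/adapt/retain cycle just described can be sketched as a toy loop. All names here are illustrative, and exact symptom matching stands in for the similarity-based retrieval discussed later:

```python
def retrieve(casebase, problem):
    """Retrieve stored cases similar to the problem; in this toy
    version, similarity is simply sharing the same symptom."""
    return [case for case in casebase if case["symptom"] == problem["symptom"]]

def solve(problem, casebase):
    """One pass of the CBR cycle (sketch): retrieve, adapt (here:
    reuse the old diagnosis as-is), then retain the solved problem
    as a new case in the casebase."""
    matches = retrieve(casebase, problem)
    if not matches:
        return None                     # nothing relevant to reuse
    solution = matches[0]["diagnosis"]  # adapt step, trivial here
    casebase.append({"symptom": problem["symptom"], "diagnosis": solution})
    return solution
```

A full system would iterate this loop, refining the problem description from the unsatisfactory parts of each rejected solution.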
Rajamoney and Lee use CBR to decompose a novel, large problem into smaller known ones that are then solved with a model-based reasoning (MBR) system [19]. They use CBR for a task separate from the one the MBR system is used for. Koton's CASEY system uses CBR to speed up a model-based diagnosis system by storing previous experiences and recalling them when appropriate [15]. In this system, CBR is tried first as an attempt to reason by analogy. The cases are directly derived from the MBR system and are used, once created, in isolation from the MBR system. In these two systems, the paradigms are used independently from one another. Golding and Rosenbloom use CBR to improve the accuracy of a rule-based system [10]. Their cases denote exceptions to rules and are indexed by the rules they confirm and by the rules they contradict. Cases do not store the same information as the rules, but provide a different source of information for decision making. These systems all show that CBR does improve the performance of the whole system, either by speeding up the same process without bringing new information or by storing a different type of experiential knowledge that is used to improve the accuracy of the overall system. However, all these systems have to face critical problems in retrieving and matching their cases, problems that are typical of CBR systems.

The ADMS hybrid approach to diagnosis is unique in that it uses the results of the structural isolation process to index cases. There is little overhead in retrieving relevant cases, and matching is simplified since it is only applied to similar cases. We are considering CBR as a tool for assisting the operator in the final stage of a diagnostic session. Once the structural isolation has produced a list of potential diagnoses, the operator can call on the CBR component of the system to validate diagnoses or investigate other paths in the search tree.
The remainder of this section describes the content of cases, the retrieval process and the matching algorithm currently being used to evaluate case similarity.

What is a case?

In general, a case stores a fragment of a past experience. In the context of the ADMS, a case stores a past diagnostic scenario, consisting of a description of the fault that occurred (fault type, fault time, detecting sensor), the series of pruning steps used to produce a list of potential diagnoses (i.e., the tests performed during diagnosis and their values), the list of potential diagnoses produced by the structural isolation process and the correct diagnosis selected by the operator. A successful case is a case where the correct diagnosis was produced by the structural isolation process and confirmed by the human operator. A failure case is a case where the diagnosis failed to find the correct diagnosis, and for which the operator chose a component that was not in the list of proposed diagnoses.

Indexing and Storage of Cases

The structural isolation process can be seen as a rough estimate of the location of a component whose failure explains the observed symptoms. The list of potential diagnoses is used as a means of indexing the casebase, leaving the values of the associated sensor functions for the matching step, which is a finer judgement of similarity. For either a successful or a failure case, we use each potential diagnosis produced by the structural isolation process as an index for the case. Figure 1 illustrates this indexing schema. Each case is stored at the bottom of the structural decomposition under the basic components it contains as potential diagnoses. The dashed arrows originate from failure cases and point to the components representing the correct diagnoses for those cases.

Figure 1: The casebase and the structural decomposition.
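As a sketch, the schema amounts to multi-indexing each case under the components it names. A dict of lists stands in here for storage at the leaves of the structural decomposition, and the field names are ours, not the ADMS's:

```python
def index_case(casebase, case):
    """Store the case under every basic component that the structural
    isolation step proposed as a potential diagnosis; a failure case is
    additionally reachable from its correct diagnosis (the 'dashed
    arrow' bridge between groupings)."""
    for component in case["potential_diagnoses"]:
        casebase.setdefault(component, []).append(case)
    correct = case["correct_diagnosis"]
    if correct not in case["potential_diagnoses"]:
        casebase.setdefault(correct, []).append(case)  # failure-case bridge
```

Because neighboring components tend to appear in each other's diagnosis lists, this grouping naturally clusters similar cases under related components.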
This indexing schema is satisfying because of the interdependencies that exist among "neighboring" components. Such components often share the same characteristics and are likely to appear in each other's lists of potential diagnoses. They will likely share the same cases, or cases that are very similar to each other, except for failure cases. This ensures a useful grouping of similar cases, with "bridges" from one grouping to the next provided by failure cases. This indexing schema, based on the information generated by the structural isolation process, is therefore both simple and effective.

Retrieval and Matching of Cases
Cases are retrieved from the casebase to evaluate and criticize the current list of potential diagnoses. The operator can ask the ADMS to explore its casebase and to criticize or confirm a potential diagnosis, or to suggest new diagnoses that were not generated by the structural isolation process. If the current potential diagnosis is supported by a previous successful case, the level of confidence in this potential diagnosis can be raised. If the correct diagnosis for the most similar case disagrees with all the suggested diagnoses, and points towards a failure case that matches sufficiently well with the current situation, the validity of the current diagnosis is lowered. The diagnosis stored in the failure case is extracted from the casebase and presented to the user as a new potential diagnosis that can, in turn, be evaluated. Retrieved cases are matched with the current situation, using finer criteria to evaluate similarity. Such criteria include state information from both the past case and the current situation.
The measure of similarity yielded by the matching algorithm is a normalized, weighted sum of the number of sensor functions sharing the same value, the number of substructures shared between the path followed during the session represented by the past case and the current session, the common characteristics of the symptoms, etc. All steps in the matching algorithm involve comparisons of booleans or of reals and are computationally inexpensive. Because of the indexing method described above, components at the bottom of the hierarchy serve as pointers to cases that represent diagnostic sessions caused by similar or related failures. The matching is effective because the knowledge contained in those cases, on which the matching is based, is relevant in both the current and the past cases. The matching algorithm is focused on the part of the system that is the most relevant to the current situation.

Experiments
The thesis of this paper and the hypothesis for our experiments is that the analysis of past experiences can aid in the diagnostic process. In particular, CBR can be used to effectively validate or critique a diagnostic decision resulting from an imperfect system model. Testing the effectiveness of the hybrid approach to diagnosis requires an initial casebase of diagnostic cases. Such a casebase could be created during the normal running of the system. For testing purposes, however, a simulator was constructed for the Fairing Servicing Subsystem to automatically generate sensor data and the corresponding sensor function boolean values, representing single faults. This simulator was used to generate initial casebases and to simulate test cases for evaluating the CBR component of the system. Some simplifying assumptions were made in carrying out the two experiments. First, we assumed that all failure modes for components are equally probable. Although this does not reflect reality, it does present, to some degree, a worst-case scenario.
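The normalized, weighted-sum similarity measure described above could be sketched as follows. The particular terms and the default weights are our own illustrative choices, not the paper's actual coefficients.

```python
def similarity(case_tests, current_tests, case_diags, current_diags,
               w_tests=1.0, w_diags=1.0):
    """Normalized weighted sum of simple comparisons between a retrieved
    case and the current session; returns a score in [0, 1]."""
    # Fraction of sensor functions (shared by both sessions) with equal values.
    shared = [s for s in case_tests if s in current_tests]
    tests_term = (sum(case_tests[s] == current_tests[s] for s in shared) / len(shared)
                  if shared else 0.0)
    # Overlap of the two potential-diagnosis lists (Jaccard index).
    union = set(case_diags) | set(current_diags)
    diags_term = (len(set(case_diags) & set(current_diags)) / len(union)
                  if union else 0.0)
    # Normalize so the score stays in [0, 1] whatever the weights.
    return (w_tests * tests_term + w_diags * diags_term) / (w_tests + w_diags)
```

Every step is a boolean or real comparison, so the cost per retrieved case is linear in the number of shared sensor functions, consistent with the "computationally inexpensive" claim above.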
Secondly, we assume that the sensor information is accurate, i.e., the sensors are not included among the components that may fail. The hybrid approach described above is implemented as an interactive process where the human operator explores the casebase indexed by the structural decomposition of the device.
Diagnostic Reasoning 171
To test our approach, we have implemented a non-interactive version of the same program that retrieves all the cases stored under the potential diagnoses produced by the structural isolation. We progressively degrade the model, starting from a model that produces perfect diagnostic performance and moving towards models that contain errors. This degradation process takes place in the sensor functions, which are randomly selected and failed. A failed sensor function returns an incorrect result, falsely describing the state of the device. The model for the Fairing Servicing Subsystem contains 19 sensor functions, describing 15 failure patterns (conjunctions of sensor functions) for 100 components of 8 different component types, e.g., motors (4 failure patterns) and cables (2 failure patterns). The following experiments involved running 77 simulations of faults occurring in two modules of the Fairing Servicing Subsystem containing 36 components and 7 sensors.

Experiment 1
The first experiment's goal is to measure the diagnostic performance degradation in relation to model degradation and to show the effectiveness of the indexing mechanism for CBR. The hypothesis is that the diagnostic performance becomes worse as the model degrades. Another goal of this experiment is to show how many potential diagnoses the CBR system contributes to the final result of the integrated approach.

Method. The simulations are run twice. The first pass does not use a casebase but generates one. The second pass uses the casebase containing 77 cases.
A casebase containing the same cases as the ones that are currently being run is not a good test for measuring the performance of the CBR system. However, the goal of this experiment is not to measure diagnostic performance itself but rather its degradation.

Results. Figure 2.a shows that the degradation is approximately linear in the number of failed sensor functions. Figure 2.b shows the number of potential diagnoses produced in the same experiment. In this experiment, all retrieved cases are considered similar enough to the current situation. Figure 2.b shows the influence of the CBR system in generating new potential diagnoses. The numbers of potential diagnoses produced by both the hybrid method and the single model-based method are linear in the number of failed sensor functions. The linearity of the degradation in Figure 2.a is the result of a balanced use of sensor functions in the failure patterns. Failure patterns consist of 3 to 6 sensor functions, possibly negated. Figure 2.b shows the number of retrieved cases when the casebase contains all the cases that were generated by the simulations.
[Figure 2.a: Performance Degradation from Model Degradation, with and without CBR; x-axis: number of failed sensor functions (0-16).]
[Figure 2.b: Number of Potential Diagnoses; x-axis: number of failed sensor functions (0-16).]
This gives an indication of the computational cost of the CBR system. This cost increases linearly as the quality of the model decreases. Although significant, this cost remains within reasonable bounds. This experiment also shows the effectiveness of the retrieval process. Regardless of the quality of the model, the appropriate cases were always retrieved. Figures 2.a and 2.b correspond to the intuition that the worse the device model is, the less accurate it is and the more experience is required to compensate for erroneous knowledge.
Experiment 2
The hypothesis of the second experiment is that the hybrid approach, including a CBR component, performs better than the model-based approach alone, for a small additional computational cost and without overwhelming the operators with potential diagnoses. It also aims at showing the influence of experience (cases) on the diagnostic performance. Experiment 1 already showed that the number of retrieved matches remains within reasonable bounds.
[Figure 3.a: Failure Rates; x-axis: casebase size (10-80); one curve per number of failed sensor functions.]
[Figure 3.b: Average Number of Potential Diagnoses; x-axis: casebase size (0-80); one curve per number of failed sensor functions.]

Method. In this experiment, casebases of different sizes are used for different numbers of failed sensor functions. For each combination of failed sensor functions, and for each casebase size, the 77 simulations are run on 10 different casebases. The matching algorithm described above is used to measure the similarity and the plausibility of the retrieved matches compared to the current diagnostic situation.

Results. The horizontal axes in Figures 3.a and 3.b represent the size of the casebase. Each curve represents a number of failed sensor functions. Figure 3.a shows that the improvement is linear in the size of the casebase. Figure 3.b shows that the average number of potential diagnoses increases only marginally with the size of the casebase. This experiment shows that the performance of the hybrid diagnostic system increases linearly with the size of the casebase. The matching algorithm is itself generic and does not use any specific knowledge about the Fairing Servicing Subsystem. We therefore believe that it could be applied, with similar results, to the Reactor Building Ventilation System, or to any other complex device. Figure 3.b illustrates its effectiveness in pruning the cases that are irrelevant to the current situation.
The number of potential diagnoses is almost constant as the casebase grows larger. Figure 3.a shows the failure rates decreasing linearly with the increasing size of the casebase. Better results could be achieved if the retrieval and matching algorithms allowed for generalizations over component types. For example, most motors in the Fairing Servicing Subsystem share the same installation configuration and are monitored by similar sensors. Therefore cases related to one motor could be adapted to other motors in the device.

Closing Remarks on the Experiments
There are other possible ways to test this approach. The size of the models could be decreased by removing sensor functions. This simulates a decreasing number of sensors, as opposed to a decreasing number of working sensors. This would show how important the model, even a partially incorrect model, is in the indexing. This approach could also be tested for incorrect failure patterns, or for erroneous structural knowledge. Our experience shows that these types of errors are typically easier to detect than errors in sensor functions.

This paper presents a hybrid approach to diagnosis. It is based on a structural isolation search for potential diagnoses, enhanced by contributions from an integrated CBR component that assists the human operators in their final decision by using both successful and failure experiences. Such assistance is useful because the structural decomposition and the pruning rules associated with it are not guaranteed to be either consistent or complete, and because human operators might not consider all the cues that are available to them. Compared to an all-model-based approach to diagnosis, a hybrid approach addresses a number of problems. CBR allows the system to improve on the model built by humans. It overrules some mistakes that can be made in the design and implementation of this model.
This is accomplished in a computationally inexpensive way, by using the decomposition tree as the basis for indexing previous cases. This original indexing method allows for an accurate and effective retrieval of relevant previous cases and avoids the problems of computational complexity encountered by other hybrid CBR systems in their retrieval and matching tasks (e.g., [15, 19]). Experimental results show that the performance gain brought by the CBR system is significant. The CBR system improves the failure rate of partially incorrect models without overwhelming the operator with potential diagnoses. These results show that considering CBR as a way to improve an existing search method is a valid approach. This solidifies the results presented in [10]. The power of Golding and Rosenbloom's method, as well as ours, comes mostly from the indexing schema provided by the other problem-solving algorithm. Cases are indexed by relevant pieces of knowledge, organized in a hierarchical manner. The other problem-solving algorithm (be it rule triggering or a structural isolation process) can be seen as an indexing schema that extensively uses background knowledge. Both systems can therefore be seen as constrained instances of explanation-based indexing [1].

This paper describes how a hybrid model-based/case-based methodology permits the relaxation of the completeness and consistency constraints imposed by the model-based diagnosis approach, and helps overcome shortcomings in human capabilities. We show how CBR can guide the human operator in the last phase of the diagnostic process, using previous experiences indexed by the state of the device. The paper also contributes to the area of CBR, by showing that CBR is well suited to applications where it is combined with an already existing, but imperfect, method or paradigm for problem solving.

The CBR component does not depend on the device nor on which type of model is used. The structural decomposition of the device is device dependent; its completion or correction by the CBR component is not. In fact, this hybrid approach can also be incorporated in other model-based approaches to diagnosis, including first-principle approaches based on correct-behavior device models. This is a definite advantage, both in the framework of the genericity of the ADMS as a diagnostic system and for the applicability of CBR as a complement to other problem-solving techniques. The Reactor Building Ventilation System's model is an acyclic digraph (instead of a tree for the Fairing Servicing Subsystem). We have found that both the indexing and the matching applied to the Fairing Servicing Subsystem are directly applicable to the Reactor Building Ventilation System's model, which also includes the possibility of failing sensors.

It is clear that this approach is applicable to a whole range of searching and problem-solving methods. An interesting extension to our work would be to apply this hybrid approach to other domains such as natural language parsing and understanding, planning, or game playing, where hierarchies and context information expressed by context functions, equivalent to the sensor functions, are readily available.

Cases could also effectively be used in diagnosing multiple faults. We have not tested this aspect of our approach yet. If a fault is confirmed by a case X, another case Y could also be retrieved that has case X's final diagnosis as one of its potential, or final, diagnoses. The final diagnosis of Y could be examined as a potential diagnosis for the current situation, potentially uncovering a double-fault situation. Multiple faults could therefore be diagnosed using the same hybrid approach with no added complexity.

References
[1] R. Barletta and W. Mark. Explanation-based indexing of cases. In Proceedings of the 6th National Conference on Artificial Intelligence, pages 541-546, 1988.
[2] L. Console, L. Portinale, and D. Theseider Dupre. Focusing abductive diagnosis. AI Communications, 4(2/3):88-97, 1991.
[3] J. de Kleer. Diagnosis with behavioral modes. In Proceedings of the 11th International Joint Conference on Artificial Intelligence, pages 1324-1330, Detroit, 1989.
[4] J. de Kleer. Using crude probability estimates to guide diagnosis. Artificial Intelligence, 45(3):381-391, 1990.
[5] J. de Kleer. Optimizing focusing model-based diagnosis. In Proceedings of the 3rd International Workshop on Principles of Diagnosis, pages 26-29, Rosario, Washington, 1992.
[6] M. P. Féret and J. I. Glasgow. Generic diagnosis for mechanical devices. In Proceedings of the 6th International Conference on Applications of Artificial Intelligence in Engineering, pages 753-768, Oxford, UK, July 1991. Computational Mechanics Publications, Elsevier Applied Science.
[7] M. P. Féret and J. I. Glasgow. Case-based reasoning in model-based diagnosis. In Proceedings of the 7th International Conference on Applications of Artificial Intelligence in Engineering, pages 679-692, Waterloo, Canada, July 1992. Computational Mechanics Publications, Elsevier Applied Science.
[8] M. P. Féret, J. I. Glasgow, D. Lawson, and M. A. Jenkins. An architecture for real-time diagnosis systems. In Proceedings of the Third International Conference on Industrial and Engineering Applications and Expert Systems, pages 9-15, Charleston, SC, July 1990.
[9] G. Friedrich. Theory diagnoses: A concise characterization of faulty systems. In Proceedings of the 3rd International Workshop on Principles of Diagnosis, pages 117-131, Rosario, Washington, 1992.
[10] A. R. Golding and P. S. Rosenbloom. Improving rule-based systems through case-based reasoning. In Proceedings of the 9th National Conference on Artificial Intelligence, pages 22-27. AAAI Press/MIT Press, July 1991.
[11] F. Gomez and B. Chandrasekaran. Knowledge organization and distribution for medical diagnosis. IEEE Transactions on Systems, Man and Cybernetics, 11:34-42, 1981.
[12] T. Govindaraj and Y. L. Su. A model of fault diagnosis performance of expert marine engineers. International Journal on Man Machine Studies, 29:1-20, 1988.
[13] K. J. Hammond. Chef. In C. Riesbeck and R. Schank, editors, Inside Case-Based Reasoning. Lawrence Erlbaum Associates, 1989.
[14] J. L. Kolodner. Improving human decision making through case-based decision aiding. AI Magazine, 12(2):52-68, 1991.
[15] P. Koton. Reasoning about evidence in causal explanations. In Proceedings of AAAI-88, pages 256-261, 1988.
[16] T. P. Martin, J. I. Glasgow, M. P. Féret, and T. G. Kelley. A knowledge-based system for fault diagnosis in real-time engineering applications. In Proceedings of DEXA '91 - International Conference on Database and Expert System Applications, pages 287-292, Berlin, Germany, August 1991.
[17] R. Milne. Strategies for diagnosis. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17(3):333-339, 1987.
[18] I. Mozetic. Reduction of diagnostic complexity through model abstractions. In Proceedings of the 1st International Workshop on Principles of Diagnosis, pages 102-111, Stanford, CA, July 1990.
[19] S. A. Rajamoney and H. Y. Lee. Prototype-based reasoning: An integrated approach to solving large novel problems. In Proceedings of the 9th National Conference on Artificial Intelligence, pages 34-39, Anaheim, CA, July 1991. AAAI Press/MIT Press.
[20] M. Redmond. Distributed cases for case-based reasoning: facilitating use of multiple cases. In Proceedings of the National Conference on Artificial Intelligence (AAAI-90), Boston, MA, 1990. Morgan Kaufmann.
[21] J. A. Reggia, D. S. Nau, and P. Y. Wang. A formal model of diagnosis inference. Information Sciences, 37:227-256, 1985.
[22] R. Rymon. A final determination of the complexity of current formulations of model-based diagnosis (or maybe not final). Technical Report MS-CIS-91-13, LINC LAB 194, Department of Computer and Information Science, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA 19104-6389, 1991.
[23] V. Sembugamoorthy and B. Chandrasekaran. Functional representation of devices and compilation of diagnostic problem-solving systems. Technical Report, Ohio State University, Columbus, Ohio, 1985.
[24] S. Slade. Case-based reasoning: A research paradigm. AI Magazine, 12(1):42-55, 1991.
[25] P. Slovic, B. Fischhoff, and S. Lichtenstein. Behavioral decision theory. Annual Review of Psychology, 28:1-39, 1977.
[26] W. C. Yoon and J. M. Hammer. Deep-reasoning fault diagnosis: An aid and a model. IEEE Transactions on Systems, Man, and Cybernetics, 18(4):659-675, 1988.
Ira J. Haimowitz
MIT Laboratory for Computer Science
545 Technology Square, Room 414
Cambridge, MA 02139
ira@medg.lcs.mit.edu

Abstract
We have written a computer program, called TrenDx, for automated trend detection during process monitoring. The program uses a representation called trend templates that define disorders as typical patterns of relevant variables. These patterns consist of a partially ordered set of temporal intervals with uncertain endpoints. Attached to each temporal interval are value constraints on real-valued functions of measurable parameters. As TrenDx receives measured data of the monitored process, the program creates hypotheses of how the process has varied over time. We introduce the importance of a distinct trend representation in knowledge-based systems. Then we demonstrate how trend templates may represent trends that occur at fixed times or at unknown times, and their utility for domains that are quantitatively both poorly and well understood. Finally we present experimental results of TrenDx diagnosing pediatric growth disorders from heights, weights, bone ages, and pubertal data of twenty patients seen at Boston Children's Hospital.

Introduction
Our work is part of the growing body of artificial intelligence (AI) research on diagnostic process monitoring. Specifically, we have written a program that automatically detects trends: sequences of time-ordered data that together are clinically significant. These trends may be multivariate, and may consist of several distinct phases. Our trend detection program, called TrenDx, can classify the trend and give a chronology of when the data was in each phase. In another paper [Haimowitz and Kohane 1993] we defined our trend template representation of clinically significant trends, and illustrated our trend diagnosis program TrenDx on a single pediatric growth patient.
In this paper we demonstrate how trend templates may represent trends that occur at fixed times or at unknown times. We argue for the utility of trend templates for domains that are quantitatively poorly or well understood. Then we present experimental results of a clinical trial where TrenDx diagnosed pediatric growth patterns of twenty patients at Boston Children's Hospital.

1. This work has been supported (in part) by NIH grant R01 LM 04493, NICHHD 5T32 HD07277-9, and by a U.S. Office of Naval Research Graduate Fellowship.

Isaac S. Kohane
Children's Hospital, Harvard Medical School
300 Longwood Avenue
Boston, MA 02115
gasp@medg.lcs.mit.edu

176 Haimowitz

Need for Trend Representation
Diagnostic knowledge-based systems are programs that can reason abductively from symptoms in a patient to the disorders that cause them. However, the vast majority of these programs treat symptoms as fixed in time. These symptoms may be boolean, as in "chest pain = true," or one of a series of qualitative categories for a measurable parameter, as in "serum sodium = low." Such a stationary representation of findings is insufficient for monitoring a process (such as a patient) over any period of time. A human expert monitoring a process has notions of trends: how the measured parameters should vary over time under the current hypothesis. When the measurements vary from the expected, that expert may consider an alternative diagnosis. For a computer program to behave similarly, it must represent the expected trend. Merely checking laboratory values against a reference interval can lead to ignoring a trend where the parameter is markedly decreasing, increasing, or periodically fluctuating within that range. A prime example of this comes from the domain of pediatric growth, where heights and weights are measured at least once a year and plotted on growth charts of standard deviations (SDs) for each measurement by age of United States children [Hamil et al. 1979].
A height that decreases from the mean for some age to -1 SD two years later is still within a range of "normal," yet may strongly indicate either an endocrinological or a nutritional disorder. In domains like pediatric growth where one cannot construct a predictive causal model, experts still demonstrate knowledge of how measured parameters vary under different diagnoses. This is the motivation behind our representation for trends, called trend templates, and our trend diagnosis program TrenDx that uses them.
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Trend Templates
A trend template is an archetypal pattern of data variation in a process disorder. Each trend template has a temporal component and a value component. The temporal component includes landmark points and intervals. Landmark points represent significant events in the lifetime of the monitored process. They may be uncertain in time, and so are represented with time ranges (min max) expressing the minimal and maximal times between them. Intervals represent periods of the process that are significant for diagnosis or therapy. Intervals consist of begin and end points whose times are declared either as:
- offsets of the form (min max) from a landmark point, or
- offsets of the form (min max) from another interval's begin or end point.
TrenDx represents time using the Temporal Utility Package (TUP) of [Kohane 1987]. TUP is a temporal reasoning program with both time points and time intervals; interval structures include a begin point and an end point. The value component of a trend template is a set of value constraints bound to each interval. Each value constraint states that some function of a set of measurable parameters must fall within a certain range. Thus each value constraint is an expression of the form

    m ≤ f(D) ≤ M    (EQ 1)

where f is some real-valued function defined on patient data, m is a minimum (possibly -∞), and M is a maximum (possibly +∞).
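A value constraint of the form in (EQ 1), together with the direction of failure that drives triggering (described next), might be rendered as follows. This is a hypothetical sketch, not TrenDx's code; the example function and the 0.1 bound are our own illustrative choices.

```python
import math

class ValueConstraint:
    """m <= f(D) <= M over the data assigned to an interval (EQ 1)."""
    def __init__(self, f, m=-math.inf, M=math.inf):
        self.f, self.m, self.M = f, m, M

    def check(self, data):
        """Return 'ok', or the direction of failure ('low' / 'high')."""
        value = self.f(data)
        if value < self.m:
            return 'low'
        if value > self.M:
            return 'high'
        return 'ok'

def avg_height_sd_velocity(data):
    """Average velocity of height SDs over time-ordered (age, height_sd) pairs."""
    (t0, h0), (t1, h1) = data[0], data[-1]
    return (h1 - h0) / (t1 - t0)

# 0.1 SD/year is an illustrative stand-in for the template's small constant.
near_zero_velocity = ValueConstraint(avg_height_sd_velocity, m=-0.1, M=0.1)
```

Returning the direction of failure rather than a plain boolean matters because, as described below, the TRIGGER function maps a failed constraint and its direction to alternate trend templates.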
In the diagnostic program TrenDx, the function f is evaluated on the set D of multi-parameter data currently assigned to that interval and the result is compared to the bounds m and M. Another aspect of trend templates models failure-driven triggering of alternate diagnoses. Trend templates include a function TRIGGER that computes a set of alternative trend templates for each value constraint and the direction of failure:

    TRIGGER: VC x {low, high} -> {TT1, TT2, ..., TTn}    (EQ 2)

This function prunes the diagnosis space by localizing small sets of alternate diagnoses to specific temporal intervals and failures.

Trend Template for Normal Growth
An example trend template (Figure 1) expresses the constraints of male average prepubertal growth. Time constraints, expressed in the years of age of a child, are drawn horizontally, and value constraints on real variables are drawn vertically.
[Figure 1: Trend template for male average pre-pubertal growth. Ht is height, Wt is weight, GH is growth hormone, T4 is thyroid hormone, and SD is standard deviation.]
There are three landmark points: Birth occurs at age zero, Puberty Onset occurs between ages ten and fifteen years, and Growth Stops occurs between ages seventeen and nineteen years. The temporal uncertainty in these points is depicted with horizontal arrows that span the possible time range. This trend template contains five intervals. Interval Int1 denotes the time when height and weight standard deviations are established. Int1 begins at Birth and ends between ages two and three years. We encode that height and weight standard deviations (SDs) vary in the same way by constraining the difference between the average velocity of height SDs and the average velocity of weight SDs to be within a small number a of zero.
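The temporal component just described, landmark points with (min, max) age ranges and interval endpoints offset from them, might be sketched as follows. This is a minimal stand-in for TUP, not TUP itself; the ranges follow the Figure 1 description above.

```python
# Landmark points carry (min, max) ages in years (Figure 1).
LANDMARKS = {
    'Birth':        (0.0, 0.0),
    'PubertyOnset': (10.0, 15.0),
    'GrowthStops':  (17.0, 19.0),
}

def endpoint_range(anchor_range, offset):
    """Absolute (min, max) time of an endpoint declared as a (min, max)
    offset from an anchor whose own time range is (min, max)."""
    a_lo, a_hi = anchor_range
    o_lo, o_hi = offset
    return (a_lo + o_lo, a_hi + o_hi)

# Int1 begins at Birth and ends between ages two and three years:
int1_begin = endpoint_range(LANDMARKS['Birth'], (0.0, 0.0))
int1_end = endpoint_range(LANDMARKS['Birth'], (2.0, 3.0))

# An endpoint may also be offset from another interval's endpoint,
# e.g. an interval that begins exactly where Int1 ends:
next_begin = endpoint_range(int1_end, (0.0, 0.0))
```

Propagating offsets through anchors in this way yields, for every endpoint, the absolute range of times at which it may occur, which is what the matching step below needs to decide whether a datum must or merely may fall inside an interval.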
Interval Int2 represents the period of the boy staying in his established height and weight channels. Int2 begins at the end point of Int1, and Int2 ends at Puberty Onset. There are two value constraints: both the average velocity of height SDs and that of weight SDs are close to zero. Because Int1 and Int2 represent consecutive processes of growth, these intervals meet, and we represent the end point of Int1 as equal in time to the begin point of Int2. Intervals Int3, Int4 and Int5 constrain other patient parameters: serum growth hormone (GH), serum thyroid hormone (T4), bone age, testicular stage, and other screening tests. Trigger sets are bound to value constraints of several intervals. For example, if the height SD constraint of Int2 fails low, then the trend template for delayed puberty (termed "constitutional delay") is suggested. If the constraint fails high, then the trend template for advanced puberty is suggested.

Diagnosis with Trend Templates: TrenDx
The program TrenDx diagnoses trends by matching process data to trend templates. A TrenDx hypothesis includes a trend template, an assignment of patient data to the intervals of the template, and a set of temporal assertions that further constrain the endpoints of the template's intervals. TrenDx initializes a hypothesis for a patient by anchoring a landmark point of the trend template as equal to some time in the life of the process. For example, TrenDx assigns a patient the average normal growth hypothesis by anchoring the Birth landmark point of the trend template to the birth date of the patient. The algorithms for matching a datum d to a hypothesis hyp are detailed in [Haimowitz and Kohane 1993]. In brief, if d meets all value constraints on all intervals that must temporally include that datum, then hyp is retained. TrenDx assigns d to all intervals that may contain it.
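In simplified form, this matching step might look like the sketch below: the hypothesis is pruned if the datum violates a constraint on any interval that must contain it, and otherwise the datum is assigned to every interval that may contain it. This is our own simplified rendering, not the actual TrenDx algorithm.

```python
def match_datum(hypothesis, t, datum):
    """hypothesis: list of intervals, each with absolute (min, max) ranges for
    its begin and end points and a boolean value-constraint predicate.
    Returns None if the hypothesis must be removed, else the list of interval
    names the datum may be assigned to."""
    assignments = []
    for interval in hypothesis:
        (b_lo, b_hi), (e_lo, e_hi) = interval['begin'], interval['end']
        must_contain = b_hi <= t <= e_lo   # t falls inside every possible reading
        may_contain = b_lo <= t <= e_hi    # t falls inside some possible reading
        if must_contain and not interval['ok'](datum):
            return None                    # constraint fails: prune, then TRIGGER
        if may_contain and interval['ok'](datum):
            assignments.append(interval['name'])
    return assignments
```

When the returned list contains more than one interval name, the caller branches into distinct data assignments, which, as described next, correspond to alternate hypotheses.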
If there are multiple intervals that may include the datum, the program branches to consider distinct data assignments for that same trend template. Because each interval represents a significant process stage, the distinct assignments in fact correspond to alternate hypotheses. If d fails some value constraint on an interval to which it must temporally belong, then hyp is removed. The failed value constraint produces a set S of potential new disorders to trigger. The disorders in S are triggered if and only if there exist no other active hypotheses with trend template equal to that of hyp. Thus, as long as there is some data assignment that is valid for a disorder, TrenDx does not trigger another disorder. TrenDx activates new disorders with a generate-and-test paradigm. TrenDx generates the set S of trend templates with the function TRIGGER, and tests those templates for matching the patient data. A triggered template becomes an active hypothesis if and only if it matches the patient data. As TrenDx monitors a patient, the number of hypotheses increases exponentially in the number N of patient data that may be assigned to multiple intervals of the same trend template. N will be large only if the sampling period of the data is small relative to the uncertainty in the time of interval endpoints. In our experience testing pediatric growth patients with TrenDx, we have always had N ≤ 2. Furthermore, hypotheses have been repeatedly pruned when later data fail some value constraint of a hypothesis with a spurious data assignment.

Expressiveness of Trend Templates
Trend templates are capable of representing trends whether or not the onset time is known beforehand. They may also represent trends in domains with either poor or detailed quantitative models.
Statistical curve-fitting models of pediatric growth [Thissen and Bock 19901 are limited in describing only patterns beginning from birth. However, some trends in process monitoring can appear at any unanticipated time, perhaps due to an unexpected event or as the result of a stimulus not modeled. We call these intervening trends. In medicine intervening trends often signify a new disorder. Pediatric growth disorders marked by intervening trends include nutritional disorders such as malnutrition or obesity. Also included are endocrine disorders such as acquired growth hormone defiiency. In this disorder serum growth hormone levels unexpectedly decrease, with a con- sequent decrease in rate of bone elongation and height. On the growth chart, the child loses height standard deviations, even on standards for patients with delayed puberty. The child also appears heavier over time, which can be detected with an by increase in the body mass index (BMI), calcu- lated as weight/(height)2, expressed in kg/m2. We represent an intervening trend such as in acquired growth hormone deficiency with a trend template that includes a landmark point for the uncertain onset time. Anchoring this landmark point to the patient history requires additional reasoning. TrenD, must shift the land- mark point back in time until all subsequent patient data meets the constraints of the intervening trend’s template. Note that because intervening trends represent unantici- pated faults, the corresponding template is triggered due to the failure of a value constraint of another hypothesis. TrenDx must anchor the landmark point (onset time) of the intervening trend’s template failing the value constraint. Below is the hormone deficiency: trend template for acquired growth earlier in time than the data Serum GH BMI velocity Ht. SD velocity (delayed std.) G% e I Onset of Acquired GH Deficiency 0 Intl e ti!E Figure 2 Trend template for acquired growth hormone deficiency. BMI is body mass index. 
The landmark point for this trend template denotes the onset time of the growth hormone deficiency.

178 Haimowitz

Int1 is an interval beginning at that time and contains four value constraints: serum GH is less than the lower threshold mentioned in Figure 1; bone age is at least one year behind chronological age; height SDs are falling significantly, even compared to the standards for children with delayed puberty; and the velocity of body mass index is greater than zero.

We are just beginning to incorporate trend templates for intervening trends into TrenDx. We plan to have the template for acquired GH deficiency triggered for three failed value constraints of the constitutional delay template: if the height SDs are falling too fast, if the weight SDs are falling too fast, or if the bone age is delayed more than 4 years behind chronological age. In this way TrenDx diagnoses a growth patient in the sequence: average growth, constitutional delay, growth hormone deficiency. This diagnosis sequence is familiar to expert pediatric endocrinologists.

Trends with Value Ranges Over Time

As noted in (EQ 1), value constraints specify that real-valued functions evaluated on the data within a certain time interval must stay between minimum and maximum bounds. This form was chosen to correspond to those constraints on laboratory values that make up much of the working knowledge of monitoring medical patient data. For example, in the section on "Diagnostic Procedures in Liver Disease" in Harrison's Principles of Internal Medicine, Podolsky and Isselbacher [1991] describe laboratory results of patients with liver disorders. Of primary importance are the hepatic enzymes that aid the decomposition and rebuilding of amino acids. One of these is aspartate transferase (AST).
The authors note: "in the patient with massive hepatic necrosis, there may be marked elevations [perhaps over 500 IU] in the early phase (i.e., 24 to 48 hours), but by the time the patient is tested 3 to 5 days later the levels may be in the range of 200 to 350 IU" [page 1309]. The levels of AST are ill-specified in both time and value primarily because the text aims to summarize a pattern for all hepatic necrosis patients. We represent the above description as a trend template as shown below. Note that this is an intervening trend.

[Figure 3 here: serum AST axis; landmark "Onset of Necrosis"; intervals Int1, Int2 measured in days.]

The lone landmark point for this template is the onset time of hepatic necrosis. Interval Int1 represents the period where AST levels are above 500 IU. Int1 begins 1 day or later after the onset of necrosis and ends (at time t) 2 days or sooner after the onset of necrosis. Interval Int2 represents the period where AST levels are between 200 and 350 IU. Int2 begins 3 days or later after t, and Int2 ends 5 days or sooner after t.

Trends with Known Quantitative Models

Although value constraints were originally intended to represent constraints in highly uncertain areas like hepatic necrosis above, they may also be used in representing processes whose quantitative behavior is better known. A value constraint can specify, for example, acceptable bounds for process data to match a time series model. Consider expressing the constraint that height standard deviations vary minimally from one point to the next as a first order autoregressive (AR(1)) model:

H_t = H_{t-1} + ε_t    (EQ 3)

where H_t is a random variable for the height SD at time t, and ε_t is a white noise random variable; ε_t ~ N(0, σ). We can equivalently recast the AR(1) model of (EQ 3) as stating that the first difference of height SDs is a white noise variable:

H_t − H_{t-1} = ε_t ~ N(0, σ)    (EQ 4)

When using this AR(1) model for trend detection of patient data, one must specify a confidence interval for believing that the actual patient data does conform to this model.
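A hedged sketch of the white-noise test implied by (EQ 4) follows, using the conventional 95% band of ±1.96σ. The function name `matches_ar1` is hypothetical, not TrenDx code.

```python
# Illustrative sketch: a series of height SDs is consistent with the AR(1)
# model of (EQ 3)/(EQ 4) if every first difference H_t - H_{t-1} stays
# within a z*sigma band (z = 1.96 for a 95% confidence interval).

def matches_ar1(height_sds, sigma, z=1.96):
    """True iff every first difference lies within [-z*sigma, z*sigma]."""
    diffs = [b - a for a, b in zip(height_sds, height_sds[1:])]
    return all(-z * sigma <= d <= z * sigma for d in diffs)
```

Note that TrenDx's actual constraint additionally divides each difference by the elapsed time between the two data, since patient data are not equally spaced; this sketch omits that normalization.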
For example, if we require a 95% confidence interval to establish a match, the value constraint is:

−1.96 × σ ≤ f(D) = (H_t − H_{t-1}) ≤ 1.96 × σ    (EQ 5)

This is quite similar to the height SD value constraint on Int2 of Figure 1, with the exception that the value constraint in the figure divides f(D) by the time between the two data, since in general we cannot assume that the height data are equally spaced, as autoregressive models do.

One can similarly represent the constraint that a certain parameter P_t must be close to a curve model function g(t) over some time interval by using the value constraint:

−δ ≤ f(D) = (P_t − g(t)) ≤ δ    (EQ 6)

The positive number δ is a noise threshold that may be chosen as in the previous equation. Examples of potentially useful g(t) are polynomials, exponentials, and trigonometric functions.

Figure 3: Trend template for serum AST pattern in hepatic necrosis. The time of the endpoint of Int2 is labeled t.

Diagnostic Reasoning 179

Clinical Trial with TrenDx

Methods

We conducted a pilot clinical trial of TrenDx to evaluate its performance and to determine the weaknesses of the current representation and knowledge engineering. Data sets on 30 patients seen at the Division of Endocrinology at Children's Hospital were retrieved from the Clinician's Workstation (CWS), an on-line charting system [McCallie et al. 1990]. The data sets included height, weight, sexual staging and bone age measurements. The patients were selected by filtering the problem list associated with each patient record in the CWS. Since this was an exploratory experiment rather than a rigorous test of efficacy, we specifically selected those problems and patient types for which we had engineered trend templates, as well as growth hormone deficiency, to explore how best to implement templates for intervening trends. The first ten patient data sets were used as training sets.
As errors in the performance of TrenDx on the training set were identified, we modified the trend templates by changing time ranges of the interval end points, by changing bounds of the value constraints, and by adding new intervals with new value constraints.

The remaining twenty data sets were used as test cases. These included patients with growth hormone deficiency, constitutional delay, average tempo of development, and early puberty. The data were read by TrenDx in chronological order. TrenDx recorded all the diagnoses and the age of the patient when they were considered or rejected. At the time of this trial, we had not developed trend templates for intervening trends like that for acquired growth hormone deficiency. Any constitutional delay trend template that TrenDx eventually rejected for one of these reasons: (1) the velocity of the height SD was too low, (2) the velocity of the weight SD was too low, or (3) the bone age was too far behind chronological age, was scored as diagnosing growth hormone deficiency when the constitutional delay trend template was ruled out.

Concurrently, a panel of three expert endocrinologists was given the same data sets and the task of diagnosing each patient. The endocrinologists were given the benefit of seeing the full data set at once rather than a point at a time. They too were required to judge the earliest age at which they could make their diagnosis. Note that this level of growth chart scrutiny is unusual in a busy general pediatric office. Several of the important contextual clues to the diagnosis that are usually gleaned from the patient history or laboratory results (e.g., results of a serum growth hormone test) were not available to either the clinicians or TrenDx. Given the limitations of the data set, even in those cases where the panel consensus was different from the diagnosis stored in the CWS, we took the panel consensus as the expert standard for a "correct" diagnosis.
Results

10 of the 20 patients were diagnosed by the panel as having one of these six disorders: normal growth, short stature, constitutional delay, early puberty, precocious puberty, and obesity. Of these, TrenDx diagnosed 9 of 10 correctly. In 8 of the 9 correct diagnoses the clinicians reached the diagnoses at the same time as TrenDx; in one case the clinicians were earlier. In the one case misdiagnosed by TrenDx, the panel diagnosed constitutional delay and the program diagnosed average prepubertal growth. This occurred because the patient's velocity of height standard deviation never crossed the lower bound of the value constraint on the average prepubertal growth trend template (−δ in Figure 1). We can correct this error by increasing the value of this lower bound.

The other 10 patients were diagnosed by the panel as having growth hormone deficiency. Of these, TrenDx diagnosed 5 of 10 correctly. In 3 of the 5 misdiagnosed cases, a constraint on proportionality (such as body mass index) could have been used to correctly trigger growth hormone deficiency. For instance, TrenDx misdiagnosed one case of growth hormone deficiency as constitutional delay. In this case, the clinicians noted that the weight and height did stay within a broad channel of their standard deviation. However, they also noted that the weight standard deviation of the patient was creeping upwards at the same time that her height standard deviation was creeping downwards. As we had not encoded in the constitutional delay template any constraints on the proportionality of height and weight after infancy, TrenDx did not notice these subtle but significant opposing trends in height and weight. From this we have learned to add a constraint on proportionality to the constitutional delay template and to the acquired growth hormone deficiency template.

The results of this trial were encouraging in that they demonstrated that TrenDx could diagnose a few trends.
However, the number of test cases and the nature of the cases do not permit us to make any conclusions regarding the performance of TrenDx in general pediatric practice. We are planning larger trials in one of the primary care clinics at Children's Hospital to rigorously quantify the sensitivity and specificity of TrenDx as compared to clinicians.

Related Literature

We find that our pattern matching approach to trend detection fills a new niche in the work on monitoring from time-ordered data. Several other projects may have results complementary to ours.

Traditional temporal logics [Allen 1984] have been used to encode the truth of logic propositions over time intervals. TrenDx extends this research by representing constraints on primary numerical data, as well as representing an entire process as phases.

Much of the work in diagnostic process monitoring has been in combined qualitative and quantitative simulations [Uckun and Dawant 1993]. This approach requires a domain where one can construct a causal model of the monitored process, while TrenDx does not. Also, trend templates may supplement these qualitative simulation programs by indicating which sets of future qualitative states correspond to the same trend hypothesis. This may help to reduce branching of qualitative behaviors and thus improve monitoring efficiency.

Temporal abstraction programs [Shahar 1992] accept time-stamped laboratory data and create temporal intervals over which a parameter has attained a significant qualitative state (low, normal, markedly increasing, etc.). Unguided abstraction suffers in that it has very little context of what parameters are useful to abstract under a given hypothesis. For TrenDx this context is provided by the trend template of that hypothesis.

Conclusions and Future Work

A trend template represents a multi-variate trend in data from a monitored process (e.g.
a medical patient) and incorporates both temporal and value uncertainty. A template may be anchored to specific dates on the calendar or to a specific patient age, or it may be offset from the onset time of an unexpected fault. Each interval of a trend template corresponds to a significant stage of the monitored process. Thus the constraints of a trend template may be more understandable to experts and knowledge engineers than differential equations or statistical curve-fitting models [Thissen and Bock 1990]. Among representations for disorders of monitored processes, the trend template is rare in requiring no pathophysiological model.

Our trend detection program TrenDx reached plausible diagnoses in most pediatric growth patients from a sample at Boston Children's Hospital. While these are promising results, we plan several epistemological improvements to make the program diagnose even more like an expert. Probabilistic bounds on value constraints would be useful for assigning numerical scores to the match of a datum to a template. Adding standard errors (due to measurement) on data values would make matching more flexible, and would allow more realistic projection of values over time [Dean and Kanazawa 1988].

We also plan improvements to the TrenDx diagnostic algorithms. The program should be able to ignore markedly aberrant data that do not fit a general trend. It should also be able to distinguish between competing hypotheses by ranking them. For two closely ranked disorders, TrenDx should request a laboratory test with high information content to distinguish between them, or suggest a waiting period after which the patient's data should differ under the two hypothesized disorders.

References

Allen, J. F. (1984). "Towards a General Theory of Action and Time." Artificial Intelligence, 23(2): 123-154.

Dean, T. and K. Kanazawa (1988). "Probabilistic Temporal Reasoning." National Conference on Artificial Intelligence, 524-528.

Haimowitz, I. J., and I. S. Kohane (1993).
"Automated Trend Detection with Multiple Temporal Hypotheses." International Joint Conference on Artificial Intelligence, to appear.

Hamill, P. V. V., T. A. Drizd, C. L. Johnson, R. B. Reed, A. F. Roche and W. M. Moore (1979). "Physical Growth: National Center for Health Statistics Percentiles." The American Journal of Clinical Nutrition, 32: 607-629.

Kohane, I. S. (1987). Temporal Reasoning in Medical Expert Systems. MIT Laboratory for Computer Science Technical Report TR-389.

McCallie, D. P., Jr., D. M. Margulies, I. S. Kohane, R. Stalhut, and B. Bergeron (1990). "The Children's Hospital Workstation." Symposium on Computer Applications in Medical Care, 755-759.

Podolsky, D. K. and K. J. Isselbacher (1991). "Diagnostic Procedures in Liver Disease." In Harrison's Principles of Internal Medicine, Twelfth Edition. McGraw Hill.

Shahar, Y., S. Tu and M. Musen (1992). "Knowledge Acquisition for Temporal Abstraction Mechanisms." Knowledge Acquisition, 4(2): 217-236.

Thissen, D. and R. D. Bock (1990). "Linear and Nonlinear Curve Fitting." Statistical Methods in Longitudinal Research, Volume II: Time Series and Categorical Longitudinal Data. Academic Press, Inc.

Uckun, S. and B. M. Dawant (1993). "Model-Based Reasoning in Intensive Care Monitoring: the YAQ Approach." Artificial Intelligence in Medicine, 5(1): 31-48.

Acknowledgments

Peter Szolovits, Mario Stefanelli, and Howard Shrobe have supplied valuable comments on this work. Doctors John Crigler, Samir Najjar, and Joseph Majzoub of Boston Children's Hospital Endocrinology Division kindly diagnosed the test cases. Khuram Faizan, Nadya Adjanee, and Phillip Le helped prepare the patient data for testing.
Ying Sun & Daniel S. Weld
Department of Computer Science and Engineering, FR-35
University of Washington
Seattle, WA 98195
{ysun, weld}@cs.washington.edu

Abstract

We describe IRS, a program that combines partial-order planning with GDE-style, model-based diagnosis to achieve an integrated approach to repair. Our system makes three contributions to the field of diagnosis. First, we provide a unified treatment of both information-gathering and state-altering actions via the UWL representation language. Second, we describe a way to use part-replacement operations (in addition to probes) to gather diagnostic information. Finally, we define a cost function for decision making that accounts for both the eventual need to repair broken parts and the dependence of costs on the device state.

Introduction

Although researchers have investigated model-based diagnosis for many years, only recently has attention turned to what should, perhaps, have been the central question all along: repair. When field-replaceable parts contain multiple components, focusing on determining the exact component responsible for faulty behavior can be counterproductive, since the final probes may not distinguish between repair actions. Furthermore, most diagnosis research has assumed that all probes have the same cost, leading to diagnostic strategies guided solely by estimated information gains. In this paper we argue that both of these problems are best addressed by integrating theories of perception and action. In other words, we claim that repair is best thought of as a marriage of diagnosis and planning. The planner needs to call diagnosis as a subroutine

*This research was funded in part by National Science Foundation Grant IRI-8957302, Office of Naval Research Grant 90-J-1904, and a grant from the Xerox corporation. Our implementation is built on pieces of code that were written in part by Johan de Kleer, Denise Draper, Ken Forbus, Steve Hanks, and Scott Penberthy.
In addition to those mentioned above, we benefited from conversations with and comments by Oren Etzioni, Walter Hamscher, Nick Kushmerick, Neal Lesh, Mark Shirley, and Mike Williamson.

182 Sun

to determine which observations will best improve its incomplete information, and the diagnoser needs to call the planner to estimate the cost of observations that are not directly executable (e.g., probing a location inside a closed cabinet). A rational approach to repair requires accounting for both the cost/benefit tradeoff of actions as well as the synergistic changes in device state that allow one action to facilitate others.

In this paper, we describe an integrated repair system called IRS,¹ and discuss three important aspects of its operation.

1. Fundamentally, there is no difference between actions that gather information (i.e., probes) and actions that change the device state; they should be treated uniformly. This allows representation of actions that have both state-changing and information-gathering aspects. When estimating the cost of an action that is not directly executable, an agent should add the costs of the primitive actions in a plan that achieves the desired effect.

2. The ability to replace parts and repeat observations provides new diagnostic opportunities, similar to those provided by test generation systems. Our integrated theory of action and observation allows a diagnostic agent to combine replacement and observation actions synergistically. A comprehensive utility model selects between strategies.

3. When estimating the cost of an operation, it is crucial to consider the eventual cost of repairing broken parts, not just the cost of diagnosis.

The next section of the paper defines an action representation language that distinguishes between observations of and changes to the device state.
Then we show how to extend GDE to handle diagnosis of devices with changing state using both traditional observations as well as replacement operations. We decompose our cost function into two parts: the cost of executing the substeps of the current operation, and the expected cost of future diagnosis and repair operations. We show how the UCPOP planner [Penberthy and Weld, 1992] can be used to calculate this cost function, and we illustrate our algorithm on two simple refrigerator [Althouse et al., 1992] examples. After discussing the implementation, we close with a discussion of related and future work.

¹IRS stands for Integrated Repair System, but its concern with cost evokes images of another basis for the acronym.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Modeling Action and Change

The first step in creating a unified theory of repair is to select a generalized model of action that distinguishes between causal and information-gathering effects. For example, it is crucial to differentiate between an observation that the voltage of a node is zero and an action that grounds the node. Even though the agent knows the voltage is zero in both cases, the effects are very different. Traditional diagnosis systems do only the former, while most implemented planners handle only the latter; an integrated repair system needs both.

Even though a whole AI research subfield is devoted to representations of action [McCarthy and Hayes, 1969], most existing theories do not meet our needs:

1. Ability to represent incomplete information.
2. Distinguish between observations (which increase information, but don't change the world state) and actions with causal effects.
3. Computationally tractable.

For example, although the STRIPS representation satisfies the last criterion, it assumes complete information and thus renders the notion of observation meaningless.
[Moore, 1985] develops a first-order modal logic that codifies actions that supply an agent with information, and [Morgenstern, 1987] presents a more expressive language that allows actions to have knowledge preconditions, but neither researcher considers algorithms for generating plans using their models of action.

Our UWL representation [Etzioni et al., 1992] is perfectly suited to the needs of repair. An extension of STRIPS that handles incomplete information, UWL was originally designed to represent UNIX commands for a Softbot [Etzioni and Segal, 1992]. The novel aspects of the language include annotations to differentiate causal from observational effects and informational from causational goals, T, F, and U truth values, and run-time variables.

For example, one might write the goal or precondition of setting the voltage of V to 220 with (satisfy (value-of V 220)), while the goal of determining the current voltage at that probe point can be written as (findout (value-of V ?x)). Similarly, the effect of an action that grounds a node might be (cause (value-of V 0)), while a step that just observes the value without changing it would be written as (observe (value-of V !y)). In these examples, ?x denotes a plan-time variable whose value may be constrained during subsequent planning decisions [Stefik, 1981], but !y denotes a run-time variable that is treated as an (unknown) constant by the planner and whose value is only established when the plan is executed. Abstractly, a UWL step schema contains a step name, a set of preconditions, and a set of postconditions (with associated cost). Preconditions and goals are annotated with satisfy or findout; postconditions are annotated with cause and observe. Initial conditions are represented with a dummy step that has no preconditions and whose postconditions cause all propositions to take on some truth value, U in the case of incomplete information.
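As a rough illustration, a UWL step schema could be rendered as an annotated record. This Python sketch is ours, hypothetical, and not part of UWL or IRS.

```python
# Hypothetical rendering (not the UWL implementation) of a step schema:
# preconditions annotated satisfy/findout, postconditions annotated
# cause/observe, plus an execution cost. Run-time variables like "!v" are
# opaque constants from the planner's point of view.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    preconditions: list = field(default_factory=list)   # (annotation, proposition)
    postconditions: list = field(default_factory=list)  # (annotation, proposition)
    cost: float = 0.0

# The measure step from the refrigerator domain, transcribed into this form.
measure = Step(
    name="measure",
    preconditions=[("satisfy", ("internal-voltage", "?x")),
                   ("satisfy", ("not", ("backplane-on",)))],
    postconditions=[("cause", ("probed", "?x")),
                    ("observe", ("value-of", "?x", "!v"))],
    cost=5.0)
```

The point of the two postcondition annotations is exactly the distinction drawn above: `cause` changes the world, `observe` only changes what the agent knows.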
Complete details and formal semantics are provided in [Etzioni et al., 1992].

To illustrate the use of UWL, we show a simplified version of part of our refrigerator domain theory. When the argument ?x of a measure step is an internal voltage, the probe cost is 5.² A measure step also causes the proposition (probed ?x) to be true and sets ?x to !v, a run-time variable whose value will be determined during plan execution.

(define-step (measure ?x)
  :when (and (satisfy (internal-voltage ?x))
             (satisfy (not (backplane-on))))
  :effect (and (cause (probed ?x))
               (observe (value-of ?x !v)))
  :cost 5)

Diagnosis with Changing State

As discussed in [Sun and Weld, 1992], a variety of architectures are possible for a repair agent. We choose to put a diagnostic reasoner at the top level with the planner as a subroutine. The diagnosis code maintains a model of the most probable modes of the device's components (candidate sets) and uses the planner to suggest useful action sequences. The most utile action is chosen, the device state and candidate sets are updated, and the process is repeated until the stop criterion is satisfied.

In the remainder of this section, we describe how this planner allows estimation of the costs of different operations in a manner that accounts for both the eventual repair need and the state-dependence. We illustrate our algorithm on two troubleshooting episodes with a domestic refrigerator [Althouse et al., 1992] whose schematic is shown in Figure 1.

Calculating Costs

Suppose that the refrigerator is in state S1: the refrigerator door is closed, the backplane is attached, the refrigerator is located near the wall, and the power is on; the temperature inside the refrigerator is too warm yet the compressor is not running. In this example, it will turn out that the actual fault is the thermostat (which is stuck open), although IRS, of course, does not know this yet.
Assuming that the power supply is ok and every component has identical prior failure rate (pfr = 0.001), the most probable candidates are:

p([thermostat1]) = p([relay1]) = p([compressor1]) = p([guardette1]) ≈ 0.25

²Measuring the compressor status or other ?x might incur a different cost and have different preconditions.

Figure 1: Wiring diagram for a domestic refrigerator

To find the best operation O_i, various costs must be computed and compared. IRS employs a cost function using n-step lookahead:

C_total(O_i, S, n) = C_exec(P(S, O_i)) + Σ_j p(S_ij) EC(S_ij, n − 1)    (1)

The total cost of executing operation O_i in state S as estimated using n-step lookahead is equal to the cost of directly executing a plan that achieves the operation plus the weighted sum of the estimated expected costs of the resulting outcomes. P denotes the planning function that takes an initial state and goal conjunct (encoding a diagnosis or repair operation) as arguments and returns a plan (linearized sequence of primitive actions). Thus, C_exec(P(S, O_i)) denotes the cost of executing a plan that achieves an operation O_i (e.g., a probe or a replacement) given device state S. The expected cost of future operations depends on the outcome of the current operation. For each possible state S_ij resulting from executing the plan for O_i, we compute the expected cost with (n − 1)-step lookahead; the cost is then weighted by the probability of each outcome.

The following recursive function computes the expected cost of device state S with n-step lookahead:

EC(S, n) = 0   if Reliab(S) > 1 − ε    (2)

EC(S, n) = EC_repair(S, C) + EC_diag(S, C)
         = Σ_{c∈C} p(c) C_exec(P(S, R(c))) + EC_op(C) × [−Σ_{c∈C} p(c) log p(c)]   if n = 0    (3)

EC(S, n) = min_{O_i} C_total(O_i, S, n)   if n > 0    (4)

The function Reliab(S) estimates the reliability of the device in state S; repair terminates when the reliability is above the threshold 1 − ε.
In the base case, when the lookahead step n = 0, IRS estimates the remaining costs for candidate discrimination and part repair, and sums them. To estimate the repair cost, IRS iterates through the candidates and asks the planner for a plan that replaces all the parts containing a component in that candidate; the cost of that plan is weighted by the probability of the candidate. In the equation above, C denotes the set of candidates in state S; R(c) denotes the conjunctive goal formula that specifies "Replacement" of all the parts with a component in candidate c. The remaining repair cost also includes the cost of placing any removed but working parts back in the device. The remaining cost for partitioning the candidates is estimated using minimum entropy, where EC_op(C) is the estimated average cost of such an operation (which may expand to multiple actions). When n > 0, IRS estimates the cost to be the minimum cost of the possible operations at each step.

For example, to estimate the cost of probing the status of condenser-fan1, IRS calls the planner with the goal (findout (value-of status-of-cond-fan1 !vcf)) and the initial state of the refrigerator. In this case the planner returns³ a plan, γ1, with execution cost C_exec(γ1) = 4:

(move-refrigerator-away-from-wall)    cost = 2
(measure status-of-condenser-fan1)    cost = 4

There are two possible outcomes of probing the status of condenser-fan1: with probability 0.5 !vcf is on, which results in most probable candidates p([relay1]) = p([compressor1]) ≈ 0.5; and with probability 0.5 !vcf is off, which results in most probable candidates p([thermostat1]) = p([guardette1]) ≈ 0.5. If 1-step lookahead is used, IRS arrives at the base case at this point.
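The lookahead recursion of (EQ 1)-(EQ 4) can be sketched as follows. This is illustrative only, not the IRS code; `plan_cost`, `outcomes`, `base_cost`, `ops`, and `reliab` are hypothetical stand-ins for C_exec(P(S, O)), the outcome distribution, the n = 0 estimate of equation (3), the operation generator, and Reliab(S).

```python
# Illustrative sketch of IRS's n-step lookahead cost estimation.

def c_total(op, state, n, plan_cost, outcomes, base_cost, ops, reliab, eps=0.01):
    """Equation (1): plan execution cost plus probability-weighted
    expected costs of the outcome states, evaluated at depth n - 1."""
    def ec(s2, m):
        return expected_cost(s2, m, plan_cost, outcomes, base_cost, ops, reliab, eps)
    return plan_cost(state, op) + sum(p * ec(s2, n - 1)
                                      for p, s2 in outcomes(state, op))

def expected_cost(state, n, plan_cost, outcomes, base_cost, ops, reliab, eps=0.01):
    if reliab(state) > 1 - eps:        # Equation (2): reliable enough, stop.
        return 0.0
    if n == 0:                          # Equation (3): repair + discrimination estimate.
        return base_cost(state)
    return min(c_total(op, state, n,    # Equation (4): best operation at this depth.
                       plan_cost, outcomes, base_cost, ops, reliab, eps)
               for op in ops(state))
```

With stand-in numbers consistent with the probe example (plan cost 4, two equally likely outcomes, and base-case estimates that fold the later 3.0 discrimination estimate into each outcome, i.e. 56 + 3 = 59 and 37 + 3 = 40), the recursion yields the 53.5 total computed in the text.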
When condenser-fan1 is on, the estimated repair cost is 56:

(disconnect-power)                  cost = 1
(remove-backplane)                  cost = 20
(remove-part relay1/compressor1)    cost = 6
(place-part relay2/compressor2)     cost = 6
(attach-backplane)                  cost = 20
(move-refrigerator-back-to-wall)    cost = 2
(connect-power)                     cost = 1

When condenser-fan1 is off, the estimated repair cost is 18 if thermostat1 turns out to be broken (p ≈ 0.5) or 56 if guardette1 turns out to be broken (p ≈ 0.5), resulting in an average of 37. In both cases, the estimated cost to discriminate among the remaining candidates is 3.0 × [−(2 × 0.5 log 0.5)] = 3.0, where 3.0 is the estimated average cost of such an operation. Therefore, the estimated total cost of diagnosis and repair starting with a probe to the status of condenser-fan1 is 4 + (0.5 × 56 + 0.5 × 37) + 3.0 = 53.5. All other operations cost more at this point, so IRS chooses to probe the status of condenser-fan1.

³Space limitations preclude a complete description of our UCPOP partial-order planning algorithm, but it has several desirable attributes: sound, complete, and efficient. The details can be found in [Penberthy and Weld, 1992].

The plan γ1 is executed, putting the device into state S2. condenser-fan1 is observed to be off, causing the set of most probable candidates to be updated to:

p([thermostat1]) = p([guardette1]) ≈ 0.4995
p([relay1, cond-fan1]) ≈ 0.0005
p([compressor1, cond-fan1]) ≈ 0.0005

At this point, the costs of all the possible operations are computed again. In addition to considering the option of probing the status of thermostat1 or guardette1, IRS also considers the possibility of replacing one of the components.
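The entropy-based discrimination estimate used in the calculation above (3.0 times one bit of entropy for two equally likely candidates) can be checked directly; `discrimination_cost` is our hypothetical name, not an IRS function.

```python
# Hedged sketch of the minimum-entropy discrimination estimate of (EQ 3):
# the Shannon entropy (in bits) of the candidate distribution, scaled by the
# assumed average cost of a discriminating operation.
import math

def discrimination_cost(candidate_probs, avg_op_cost=3.0):
    entropy = -sum(p * math.log2(p) for p in candidate_probs if p > 0)
    return avg_op_cost * entropy

print(discrimination_cost([0.5, 0.5]))  # prints 3.0
```

With a single remaining candidate the entropy, and hence the estimated discrimination cost, is zero, which matches the intuition that no further probes are needed.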
In this case, replacing thermostat1 happens to be the cheapest operation, with a plan, γ2, of cost C_exec(γ2) = 16 and estimated total cost C_total = 52:

(disconnect-power)           cost = 1
(open-refrigerator-door)     cost = 1
(remove-part thermostat1)    cost = 6
(place-part thermostat2)     cost = 6
(close-refrigerator-door)    cost = 1
(connect-power)              cost = 1

Executing this plan leads the device to state S3. Computing the costs of all the possible operations reveals that probing the status of compressor1 (i.e., checking if it is running) has the lowest total cost, so the corresponding plan, γ3, is executed:

(measure status-of-compressor1)    cost = 1

The device state is updated to S4. Observing compressor1 running exonerates guardette1 and yields the final candidate:

p([thermostat1]) ≈ 0.999

Since thermostat1 has already been replaced, IRS simply moves the refrigerator back to the original location. At this point, the reliability of the device is 0.999, which is above the preset threshold 0.99, so we are done.

Table 1 summarizes the changing reliability of the device, the possible operations, their costs, and the candidates generated from the executed operations. (If a probe is executed, the value measured is shown after an arrow.) Note how IRS handles the state-dependent probe costs and takes into account the eventual repair cost throughout the diagnosis process.

A Different Example

Interestingly, if we adjust the prior failure rates of the components such that the failure rate of the guardette is three times higher than that of the other components, IRS would generate a different sequence of operations. After IRS measures condenser-fan1 to be off, the most probable candidates are p([guardette1]) ≈ 0.75 and p([thermostat1]) ≈ 0.25. At this point, replacing thermostat1 is the cheapest operation to execute because there is no need to remove the backplane, which is an expensive action.
However, IRS realizes that other operations have cheaper total costs when taking into account projected diagnosis and repair operations. Due to the higher failure rate of the guardette, the backplane will probably need to be opened anyway. Thus, IRS correctly chooses to probe the status of guardette1 before replacing any components, as summarized in Table 2.

Representing Time-Varying State

Due to the inadequacy of the notion of minimal diagnoses, we implemented a diagnosis engine based on the alibis principle proposed by [Raiman, 1992]. As a complement to minimal conflicts, minimal alibis specify conditions such as: a component must be working if n other components are known to be working. IRS works by incrementally generating minimal alibis, minimal conflicts, and the corresponding set of prime diagnoses. In addition, we were forced to extend the normal component model to handle devices with changing state. Assumptions such as ok(relay1) are unchanged because of the non-intermittency assumption. However, IRS's structural primitives require a temporal component. We distinguish between the role a component plays in a device (i.e., the slot it occupies) and the device instance itself. IRS's system description is written in terms of roles (e.g., the relay-function, etc.). A separate set of axioms indicates what instances fill what roles at what times, e.g., (fills-role relay-function relay1 t0). The assumptions that distinguish possible worlds involve instances, e.g., ok(relay1), and time tokens. With these extensions, IRS can reason about swapping out a part, collecting evidence with a replacement part, swapping the original back in, collecting more evidence, and so on. As a result, IRS can combine evidence collected at multiple times and involving different sets of component instances. The IRS implementation has been run on the refrigerator example and several others, including a modified 3-inverter example [Sun and Weld, 1992].
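The role/instance split described above can be sketched as a lookup table. The names and the table itself are illustrative only; the paper gives the idea as axioms like (fills-role relay-function relay1 t0), not as an implementation:

```python
# Which component instance fills which functional role at which time token.
fills_role = {
    ("relay-function", "t0"): "relay1",
    ("relay-function", "t1"): "relay2",   # after swapping in a replacement
}

def instance_at(role: str, time: str) -> str:
    """Resolve a role to the instance filling it at the given time token."""
    return fills_role[(role, time)]

# Evidence collected at t0 bears on relay1, while evidence at t1 bears on
# relay2, so observations from different times are never conflated.
print(instance_at("relay-function", "t0"),
      instance_at("relay-function", "t1"))   # relay1 relay2
```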
It took approximately 2 minutes to run the refrigerator example on a Sun SPARC.

Related Work

Since IRS's behavior is to choose the operation with the maximum expected utility, it could be seen as a straightforward application of decision theory to the repair problem. From this perspective, our contribution is a program that automates both the identification of the alternatives being compared and the cost estimation for those alternatives. In the past this problem (called decision analysis) has been left as a task that requires human solution [Howard et al., 1976]. See [Breese et al., 1991] for other work on automating the construction of decision models. The standard cost evaluation in model-based diagnosis is based on the number of probes needed to distinguish a set of hypotheses. Although [Raiman et al., 1991] and [de Kleer et al., 1991] generalize this notion, both approaches assume fixed probe costs that are specified a priori, whereas the costs in our evaluation function are state-dependent.

Diagnostic Reasoning 185

[Table 1 (columns spilled during extraction): pfr(all components) = 0.001. Recoverable entries: initial most probable candidates p([thermostat1]) ≈ p([relay1]) ≈ p([compressor1]) ≈ p([guardette1]) ≈ .25; after the probe of cond-fan1-status (→ off), p([thermostat1]) ≈ p([guardette1]) ≈ .50; the session ends at reliability .999 with "move refrigerator back, DONE".]

[Table 2 (columns spilled during extraction): pfr(guardette) = 3 * pfr(other components); pfr(guardette2) = .003. Recoverable entries: initially p([guardette1]) ≈ .500, p([relay1]) ≈ p([compressor1]) ≈ p([thermostat1]) ≈ .167; after cond-fan1 is measured off, p([guardette1]) ≈ .75, p([thermostat1]) ≈ .25; candidate operations at that point: probe guardette1-status (cost 25, total 61.5), replace guardette1 (cost 34, total 63.0), replace thermostat1 (cost 16, total 69.0); then p([guardette1]) ≈ 1.0, replace guardette1 (cost 34, total 36.0), and move refrigerator back (cost 2, total 2.0) at reliability .997, DONE.]
Compared with some work on allowing multiple observation sets and diagnosing devices with changing states [Raiman et al., 1991; Hamscher, 1991; Friedrich and Lackinger, 1991; Ng, 1991], our focus is on extending an intelligent agent to plan for state change rather than having a passive agent diagnose devices with dynamic behavior. Several researchers have attempted to represent system purpose explicitly and integrate repair with diagnosis. For example, [Friedrich et al., 1991] formalizes a repair process with time-dependence, [Poole and Provan, 1991] focuses on the utility and granularity associated with the repair actions, while [McIlraith and Reiter, 1991] discusses how to recognize the relevance of a probe given a goal; but none of these researchers incorporate planning explicitly into their framework. We use planning explicitly to estimate the costs and execute diagnosis and repair operations. We avoid explicit representation of system purpose because repair (replacement) is already intermingled with the diagnosis process. Reconfiguration might be an interesting extension for IRS; we plan to investigate [Crow and Rushby, 1991] more carefully. Planning to minimize breakdown costs is another ability that complements IRS's strengths; it would be straightforward to incorporate [Friedrich et al., 1992]'s time-dependent cost function into our system, but their greedy algorithms are unlikely to extend gracefully to handle the state-dependent probe costs addressed by IRS. Our research is also similar to work on test generation programs, which may be thought of as a kind of planner that needs to distinguish between controlling and observing the node values in a circuit. Unlike our situation, the goal/subgoal graph for test generation is largely static; this allows predefinition and optimization, which are impossible in our case, but see [Shirley, 1986; Shirley, 1988].
Conclusion

We have reported on IRS, our preliminary integration of diagnostic and planning algorithms, and argued that it represents progress towards a general theory of repair. Our contributions are three-fold:

• A unified treatment of information-gathering and state-altering actions with the UWL action representation language.
• A method for using part-replacement operations (as well as simple probes) to gather diagnostic information.
• Decision making based on a cost function that takes into account both the eventual cost of repair and the dependence of cost on device state.

In future work, we hope to investigate heuristics for approximating Ctotal, incorporate the cost of computation into the cost function, and integrate UWL's treatment of incomplete information with UCPOP's ability to handle universal quantification.

References

A. D. Althouse, C. H. Turnquist, and A. F. Bracciano. Modern Refrigeration and Air Conditioning. The Goodheart-Willcox Company, Inc., 1992.
J. Breese, R. Goldman, and M. Wellman, editors. Notes from the Ninth National Conference on Artificial Intelligence (AAAI-91) Workshop on Knowledge-Based Construction of Probabilistic and Decision Models. AAAI, July 1991.
Judith Crow and John Rushby. Model-Based Reconfiguration: Toward an Integration with Diagnosis. In Proceedings of AAAI-91, pages 836-841, July 1991.
J. de Kleer, O. Raiman, and M. Shirley. One Step Lookahead is Pretty Good. In Proceedings of the 2nd International Workshop on Principles of Diagnosis, October 1991.
Oren Etzioni and Richard Segal. Softbots as Testbeds for Machine Learning. In Working Notes of the AAAI Spring Symposium on Knowledge Assimilation, Menlo Park, CA, 1992. AAAI Press.
Oren Etzioni, Steve Hanks, Daniel Weld, Denise Draper, Neal Lesh, and Mike Williamson. An Approach to Planning with Incomplete Information. In Proceedings of KR-92, October 1992.
G. Friedrich and F. Lackinger. Diagnosing Temporal Misbehavior.
In Proceedings of IJCAI-91, August 1991.
G. Friedrich, G. Gottlob, and W. Nejdl. Formalizing the Repair Process. In Proceedings of the 2nd International Workshop on Principles of Diagnosis, October 1991.
G. Friedrich and W. Nejdl. Choosing Observations and Actions in Model-Based Diagnosis/Repair Systems. In Proceedings of KR-92, October 1992.
W. C. Hamscher. Modeling Digital Circuits for Troubleshooting. Artificial Intelligence, 51(1-3):223-272, October 1991.
R. Howard, J. Matheson, and K. Miller. Readings in Decision Analysis. Stanford Research Institute, Menlo Park, CA, 1976.
J. McCarthy and P. J. Hayes. Some Philosophical Problems from the Standpoint of Artificial Intelligence. In Machine Intelligence 4, pages 463-502. Edinburgh University Press, 1969.
S. McIlraith and R. Reiter. On Experiments for Hypothetical Reasoning. In Proceedings of the 2nd International Workshop on Principles of Diagnosis, October 1991.
R. C. Moore. A Formal Theory of Knowledge and Action. In Formal Theories of the Commonsense World. Ablex, 1985.
Leora Morgenstern. Knowledge Preconditions for Actions and Plans. In Proceedings of IJCAI-87, 1987.
H. T. Ng. Model-based, Multiple Fault Diagnosis of Dynamic, Continuous Physical Devices. IEEE Expert, December 1991.
J. S. Penberthy and D. Weld. UCPOP: A Sound, Complete, Partial Order Planner for ADL. In Proceedings of KR-92, pages 103-114, October 1992.
D. Poole and G. Provan. Use and Granularity in Consistent-Based Diagnosis. In Proceedings of the 2nd International Workshop on Principles of Diagnosis, October 1991.
O. Raiman, J. de Kleer, V. Saraswat, and M. Shirley. Characterizing Non-intermittent Faults. In Proceedings of AAAI-91, July 1991.
O. Raiman. The Alibi Principle, pages 66-70. Morgan Kaufmann, 1992.
M. Shirley. Generating Tests by Exploiting Designed Behavior. In Proceedings of AAAI-86, pages 884-890, August 1986.
M. Shirley. Generating Circuit Tests by Exploiting Designed Behavior.
AI-TR-1099, MIT AI Lab, December 1988.
M. Stefik. Planning with Constraints (MOLGEN: Part 1). Artificial Intelligence, 14(2), 1981.
Y. Sun and D. Weld. Beyond Simple Observation: Planning to Diagnose. In Proceedings of the 3rd International Workshop on Principles of Diagnosis, pages 67-75, October 1992.

| 1993 | 28 |
1,352 | Arne Jönsson*
Department of Computer and Information Science
Linköping University
S-581 83 Linköping, Sweden
arj@ida.liu.se

Abstract

This paper describes a method for the development of dialogue managers for natural language interfaces. A dialogue manager is presented, designed on the basis of both a theoretical investigation of models for dialogue management and an analysis of empirical material. It is argued that for natural language interfaces many of the human interaction phenomena accounted for in, for instance, plan-based models of dialogue do not occur. Instead, for many applications, dialogue in natural language interfaces can be managed from information on the functional role of an utterance as conveyed in the linguistic structure. This is modelled in a dialogue grammar which controls the interaction. Focus structure is handled using dialogue objects recorded in a dialogue tree which can be accessed through a scoreboard by the various modules for interpretation, generation and background system access.

A sublanguage approach is proposed. For each new application the Dialogue Manager is customized to meet the needs of the application. This requires empirical data, which are collected through Wizard of Oz simulations. The corpus is used when updating the different knowledge sources involved in the natural language interface. In this paper the customization of the Dialogue Manager for database information retrieval applications is also described.

Introduction

Research on computational models of discourse can be motivated from two different standpoints. One is to develop general models and theories of discourse for all kinds of agents and situations. The other approach is to account for a computational model of discourse for a specific application, say a natural language interface (Dahlbäck and Jönsson, 1992). It is not obvious that the two approaches should present similar computational theories for discourse.
Instead, the different motivations should be considered when presenting theories of dialogue management for natural language interfaces. Many models for dialogue in natural language interfaces are not only models for dialogue in such interfaces but also account for general discourse. The focus in this work is on dialogue management for natural language interfaces and not general discourse. Thus, the focus is on efficiency and habitability, i.e. a dialogue manager must correctly and efficiently handle those phenomena that actually occur in typed human-computer interaction so that the user does not feel constrained or restricted when using the interface. This also means that a dialogue manager should be as simple as possible and not waste effort on complex computations in order to handle phenomena not relevant for natural language interfaces. For instance, the system does not necessarily have to be psycholinguistically plausible or able to mimic all aspects of human dialogue behaviour, such as surprise or irony, if these do not occur in such dialogues.

*This research was financed by the Swedish National Board for Technical Development and the Swedish Council for Research in the Humanities and Social Sciences.

190 Jönsson

Grosz and Sidner (1986) presented a general computational theory of discourse, both spoken and written, where they divide the problem of managing discourse into three parts: linguistic structure, attentional state and intentional state. The need for a component which records the objects, properties and relations that are in the focus of attention, the attentional state, is not much debated, although the details of focusing need careful examination. However, the roles that are given to the intentional state, i.e. the structure of the discourse purposes, and to the linguistic structure, i.e.
the structure of the sequences of utterances in the discourse, provide two competing approaches to dialogue management:

• One approach is the plan-based approach. Here the linguistic structure is used to identify the intentional state in terms of the user's goals and intentions. These are then modelled in plans describing the actions which may possibly be carried out in different situations (cf. Cohen and Perrault, 1979; Allen and Perrault, 1980; Litman, 1985; Carberry, 1990; Pollack, 1990).
• The other approach to dialogue management is to use only the information in the linguistic structure to model the dialogue expectations, i.e. utterances are interpreted on the basis of their functional relation to the previous interaction. The idea is that these constraints on what can be uttered allow us to write a grammar to manage the dialogue (cf. Reichman, 1985; Polanyi and Scha, 1984; Bilange, 1991; Jönsson, 1991).

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

For the strong AI goal or the computational linguistics goal to mimic human language capabilities the plan recognition approach might be necessary. But, for the task of managing the dialogue in a natural language interface, the less sophisticated approach of using a dialogue grammar will do just as well, as will be argued below. The work presented in this paper is restricted to studying written human-computer interaction in natural language, and natural language interfaces for different applications which belong to the domain that Hayes and Reddy (1983) called simple service systems. Simple service systems "require in essence only that the customer or client identify certain entities to the person providing the service; these entities are parameters of the service, and once they are identified the service can be provided" (ibid. p. 252).
A method for customization

The method presented in this paper proposes a sublanguage approach (Grishman and Kittredge, 1986) to the development of dialogue managers. A dialogue manager should not account for the interaction behaviour utilized in every application; instead it should be designed to facilitate customization to meet the needs of a certain application.

Kelley (1983) presents a method for developing a natural language interface in six steps. The first two steps are mainly concerned with determining and implementing essential features of the application. In the third step, known as the first Wizard of Oz step, the subjects interact with what they believe is a natural language interface but which in fact is a human simulating such an interface (cf. Dahlbäck et al., 1993; Fraser and Gilbert, 1991). This provides data that are used to build a first version of the interface (step four). Kelley starts without grammar or lexicon. The rules and lexical entries are those used by the users during the simulation. In step five, Kelley improves his interface by conducting new Wizard of Oz simulations, this time with the interface running. However, when the user/subject enters a query that the system cannot handle, the wizard takes over and produces an appropriate response. The advantage is that the user's interaction is not interrupted and a more realistic dialogue is thus obtained. This interaction is logged, and in step six the system is updated to be able to handle the situations where the wizard responded.

The method used by Kelley of running a simulation in parallel with the interface was also used by Good et al. (1984). They developed a command language interface to an e-mail system using this iterative design method, UDI (User-Derived Interface). Kelley and Good et al. focus on updating the lexical and grammatical knowledge and are not concerned with dialogue behaviour.
The Dialogue Manager presented in this paper is customized to a specific application using a process inspired by the method of User-Derived Interfaces. The starting point is a corpus of dialogues collected in Wizard of Oz experiments. From this corpus the knowledge structures used by the Dialogue Manager are customized.

The Dialogue Manager

The Dialogue Manager was initially designed from an analysis of a corpus of 21 dialogues, other than the 30 used for customization (see below), collected in Wizard of Oz experiments using 5 different background systems¹. It can be viewed as a controller of resources for interpretation, database access and generation. The Dialogue Manager receives input from the interpretation modules, inspects the result and accesses the background system with information conveyed in the user input. Eventually an answer is returned from the background system access module and the Dialogue Manager then calls the generation modules to generate an answer to the user. If clarification is needed from any of the resources it is dealt with by the Dialogue Manager.

The Dialogue Manager uses information from dialogue objects which model the dialogue segments and moves and the information associated with them. The dialogue objects represent the constituents of the dialogue, and the Dialogue Manager records instances of dialogue objects in a dialogue tree as the interaction proceeds. The dialogue objects are divided into three main classes on the basis of structural complexity. There is one class corresponding to the size of a dialogue, another class corresponding to the size of a discourse segment (cf. Grosz and Sidner, 1986) and a third class corresponding to the size of a single speech act, or dialogue move. Thus, a dialogue is structured in terms of discourse segments, and a discourse segment in terms of moves and embedded segments.
Utterances are not analysed as dialogue objects, but as linguistic objects which function as vehicles of one or more moves.²

The dialogue object descriptions are domain dependent and can be modified for each new application. The Dialogue Manager is customized by specifying the dialogue objects: which parameters to use and what values they can take. From the perspective of dialogue management the dialogue objects modelling the discourse segment are the most interesting. An initiative-response (IR) structure is assumed (cf. adjacency pairs, Schegloff and Sacks, 1973) where an initiative opens a segment by introducing a new goal and the response closes the segment (Dahlbäck, 1991). The parameters specified in the dialogue objects reflect the information needed by the various processes accessing information stored in the dialogue tree. A dialogue object consists of a set of parameters for specifying the initiator, responder, context etc. needed in most applications.

¹For further details of the Dialogue Manager, see (Ahrenberg et al., 1990); (Jönsson, 1991) and (Jönsson, …).
²The use of three categories for hierarchically structuring the dialogue is motivated from the analysis of the corpora. However, there is no claim that they are applicable to all types of dialogue, and even less so, to any type of discourse. When a different number of categories are utilized, the Dialogue Manager can then be customized to capture these other categories.

Discourse Analysis 191
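The three structural classes and the dialogue tree can be sketched as plain data types. All names here are illustrative; the paper describes the structures but gives no implementation:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Move:                  # the size of a single dialogue move (speech act)
    move_type: str           # e.g. "Q" (question), "A" (answer), "U" (update)
    topic: str               # e.g. "T" (task), "S" (system), "D" (dialogue)

@dataclass
class Segment:               # a discourse segment: one initiative-response unit
    initiative: Move
    response: Optional[Move] = None
    embedded: List["Segment"] = field(default_factory=list)

@dataclass
class Dialogue:              # root of the dialogue tree
    segments: List[Segment] = field(default_factory=list)

# A task question answered after an embedded clarification subdialogue.
tree = Dialogue()
ir = Segment(initiative=Move("Q", "T"))
ir.embedded.append(Segment(Move("Q", "D"), Move("A", "D")))
ir.response = Move("A", "T")
tree.segments.append(ir)
```

The nesting mirrors the text: a dialogue is a sequence of segments, and a segment holds its initiative, its response, and any embedded segments opened in between.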
A general heuristic principle is that ev- erything not changed in an utterance is copied from one IR-node in the dialogue tree to the newly created IR-node. Another principle is that the value for Ob- jects will be updated with the value from the module accessing the database, if provided. The dialogue objects are used to specify the be- haviour of the Dialogue Manager and thus the spec- ification of the dialogue objects must include informa- tion on what actions to take in certain situations. This is modelled in two non-focal content parameters, Type and Topic. Type corresponds to the illocutionary type of the move. Hayes and Reddy (1983, p 266) identify two sub-goals in simple service systems: 1) “specify a pa- rameter to the system” and 2) “obtain the specifica- tion of a parameter”. Initiatives are categorized ac- cordingly as being of two different types 1) update, U, where users provide information to the system and 2) question, Q, where users obtain information from the system. Responses are categorized as answer, A, for database answers from the system or answers to clari- fication requests. Other Type categories are Greeting, Farewell and Discourse Continuation (DC) (Dahlback, 1991) the latter of which is used for utterances from the system whose purpose is to keep the conversation going. Topic describes which knowledge source to con- sult. In information retrieval applications three dif- ferent topics are used: the database for solving a task (T), acquiring information about the database, system- related, (S) or, finally, the ongoing dialogue (D). The empirical basis for customization The Dialogue Manager is customized on the basis of a corpus of 30 dialogues collected in Wizard of Oz- experiments using the actual applications. Three dif- ferent applications were used and each application uti- lized 10 dialogues for customization. The simulations were carefully designed and carried out using a power- ful simulation environment, (Dahlback et al., 1993). 
In the experiments there were 14 female and 16 male subjects with varying familiarity with computers. Most subjects were computer novices. The average age was 26 (min. 15, max. 55). Most of the subjects were students but there were also others with varying backgrounds, such as cleaning staff and administrative assistants. The subjects did not realize that it was a simulation and they all, in post-experimental interviews, said that they felt very comfortable with the "system".

In the simulations a scenario is presented to the subjects. In one of the simulations, CARS, the scenario presents a situation where the subject, and his/her accompanying person, have just got the message that their old favourite Mercedes had broken down beyond repair and that they would have to consider buying a new car. They had a certain amount of money available and, using the computerized CARS system, were asked to select three cars, and also to provide a brief motivation for their choice.

The CARS database is implemented in INGRES, and output from the database can be presented directly to the subjects. Thus, answers from the system, after successful requests, are tables with information on properties of used cars. The users/subjects found this type of output very convenient, as they could view a particular car in the context of other similar cars. This can be seen as an argument favouring an approach to natural language interfaces where complex reasoning is replaced with fast output of structured information. Possibly more information than asked for is provided, but as long as it can be presented on one screen, it is convenient.

The dialogues in the other domain, TRAVEL, were collected using two scenarios: one where the subjects were asked to gather information on charter trips to the Greek Archipelago, and another where they have a certain amount of money available and were asked to use the TRAVEL system to order such a charter trip.
In TRAVEL it is also possible to provide graphical information to the subjects, i.e., maps of the various islands.

The use of empirical material

An important question is how to use empirical material on the one hand and common sense and prior knowledge on human-computer interaction and natural language dialogue on the other. Dahlbäck (1991) claims that this partly depends on the purpose of the study, whether it is aimed at theory development or system development. In the latter case, one always has the possibility to design the system to overcome certain problems encountered in the corpus.

In this work empirical material is used for system development from two different perspectives. The first is to develop a dialogue manager for a natural language interface which can be used in various applications. Here the empirical material needs to be analysed with the aim of designing a dialogue manager general enough to cover all the dialogue phenomena that can occur in realistic human-computer dialogues using various background systems. Thus, phenomena which occur in the empirical material must be accounted for, and also certain generalizations must be made so that the Dialogue Manager can later be customized to cover phenomena that are not actually present in the corpus but are likely to occur for other applications.

Empirical material is also used for customizing the Dialogue Manager to actual applications. Here generalization is less emphasized; instead many details of how to efficiently deal with the phenomena in the implementation are more interesting.

How can empirical material be used for customization? One can take the conservative standpoint and say that only those phenomena actually occurring in the dialogues are to be handled by the Dialogue Manager (cf. Kelley, 1983). This has the advantage that a
This has the advantage that a 192 Jonsson minimal model is developed which is empirically well motivated and which does not waste time on handling phenomena not occurring in the corpus. The drawback is that a very large corpus is needed for coverage of the possible actions taken by a potential user. This was also pointed out by Ogden (1988, p 296), who claims that “The performance of the system will depend on the availability of representative users prior to actual use, and it will depend on the abilities of the installer to collect and integrate the relevant information”. The other extreme standpoint is to only use the lin- guistic knowledge available. One problem with this ap- proach is that it is plausible that much effort is spent on handling phenomena which will never occur in the dialogue, while at the same time not account for actu- ally occurring phenomena. However, as pointed out by Brown and Yule (1983, p 21) “A dangerously extreme view of ‘relevant data’ for a discourse analyst would involve denying the admissibility of a constructed sen- tence as linguistic data”. For the purpose of customization, two kinds of in- formation can be obtained from a corpus: 8 First, it can be used as a source of phenomena which the designer of the natural language interface was not aware of from the beginning. Second, it can be used to rule out certain interesting phenomena which are complicated but which do not occur in the corpus. The first point also includes the use of the corpus to make the system behaviour more accurate. This can be illustrated by the use of clarification subdialogues. In the CARS dialogues, when the user initiative is too vague and the system needs a clarification, it first ex- plicitly states the alternatives available and then asks for a clarification. Subjects using the CARS system follow up such a clarification subdialogue as intended. However! 
in the TRAVEL system there are certain system clarification requests which are less explicit, and which do not state any alternatives. These clarifications do not always result in a follow-up answer from the user.

To illustrate the second point, consider the use of singular pronouns. Singular pronouns can be used in various ways to refer to a previously mentioned item. One could argue that if a user utters something like What is the price of a Toyota Corolla?, and the answer is a table with two types of cars of different years, then the user may form a conceptualization of Toyota as a generic car and can therefore utter something like How fast is it?, referring to properties of a Toyota Corolla of any year. In the work on developing the Dialogue Manager, the use of pronouns in the corpus in various situations motivates the need for designing the Dialogue Manager to capture both uses of singular pronouns. However, when customizing the Dialogue Manager the situation is different. For instance, in the CARS dialogues the users restrict their use of singular pronouns. Thus, the customized Dialogue Manager for the CARS database is not provided with specific means for managing the use of singular pronouns if presented in the context above. If they occur they will result in a clarification subdialogue. However, the "normal" use of singular pronouns is allowed. There is another motivation for this position: excluding the generic use of a singular pronoun leads to a simpler Dialogue Manager, while including the normal use of singular pronouns will not increase the complexity of the Dialogue Manager.

The principle utilized in the customization of the Dialogue Manager is obviously very pragmatic. If the phenomenon is present in the corpus then it should be included.
If it is not present, but it is present in other Wizard of Oz studies using similar background systems and scenarios, and implementation is straightforward, the Dialogue Manager should be customized to deal with it. Otherwise, if it is not present and it would increase the complexity of the Dialogue Manager, then it is not included.

This does not prevent the use of knowledge from other sources (cf. Grishman et al., 1986). In the customization of the Dialogue Manager for the CARS and TRAVEL systems, knowledge of how the database is organised, and also of how users retrieve information from databases, is used in the customization.

Customizing the Dialogue Manager

Customization of the Dialogue Manager involves two major tasks: 1) defining the focal parameters of the dialogue objects in more detail and customizing the heuristic principles for changing the values of these parameters; 2) constructing a dialogue grammar for controlling the dialogue.

The focus structure

In the CARS application, task-related questions are about cars, which means that the Objects parameter holds various instances of sets of cars and Properties holds various properties of cars. In TRAVEL, on the other hand, users switch their attention between objects of different kinds: hotels, resorts and trips. This requires a slightly modified Objects parameter. It can be either a hotel or a resort. However, in TRAVEL the appropriate resort can be found from a hotel description by following the relation in the domain model from hotel to resort. Finding the hotel from a resort can be accomplished by a backwards search in the dialogue tree. Therefore, one single focused object - a hotel or a resort - will suffice. The value need not be a single object; it can be a set of hotels or resorts.

The general focusing principles need to be slightly modified to apply to the CARS and TRAVEL applications. For the CARS application the heuristic principles apply well to the Objects parameter.
An intensionally specified object description provided in a user initiative will be replaced by the extensional specification provided by the module accessing the database, which means that erroneous objects will be removed, as they will not be part of the response from the database manager. For the TRAVEL application the principles for providing information to the Objects parameter are modified to allow hotels to be added if the resort remains the same.

The heuristic principles for the Properties parameter for the CARS application need to be modified. The principle is that if the user does not change Objects to a set of cars which is not a subset of Objects, then the attributes provided in the new user initiative are added to the old set of attributes. This is based on the observation that users often start with a rather large set, in this case a set of cars, and then gradually specify a smaller set by adding restrictions (cf. Kaplan 1983), for instance in CARS using utterances like remove all small size cars. For the TRAVEL application the copy principle holds without exception. The modifications of the general principles are minor and are carried out during the customization.

The results from the customizations showed that the heuristic principles applied well. In CARS 52% of the user initiatives were fully specified, i.e. they did not need any information from the context to be interpreted. 43% could be interpreted from information found in the current segment as copied from the previous segment. Thus, only 5% required a search in the dialogue tree. For the TRAVEL application without ordering, 44% of the user initiatives were fully specified and 50% required local context, while in the ordering dialogues 59% were fully specified and 39% needed local context.

In the TRAVEL system there is one more object: the order form. A holiday trip is not fully defined by specifying a hotel at a resort.
It also requires information concerning the actual trip: Travel length, Departure date and Number of persons. This information is needed to answer questions on the price of a holiday trip. The order form also contains all the information necessary when ordering a charter trip. In addition to the information on Resort, Hotel, Departure date, etc., the order form includes information about the name, address and telephone number of the user. Furthermore, information on travel insurance, cancellation insurance, departure airport, etc. is found in the order form. The order form is filled with user information during a system-controlled phase of the dialogue.

The dialogue structure

The dialogue structure parameters Type and Topic also require customization. In the CARS system the users never update the database with new information, but in the TRAVEL system where ordering is allowed the users update the order form. Here another Type is needed, CONF, which is used to close an ordering session by summarizing the order and implicitly prompting for confirmation. For the ordering phase the Topic parameter O for order is added, which means that the utterance affects the order form.

The dialogue structure can be modelled in a dialogue grammar. The resulting grammar from the customizations of both CARS and TRAVEL is context free; in fact, it is very simple and consists merely of sequences of task-related initiatives followed by database responses, QT/AT, sometimes with an embedded clarification sequence, QD/AD. (For brevity, when presenting the dialogue grammar, the Topic type is indicated with a subscript to the Type. The Initiative is the first Type-Topic pair and the Response is the second, separated by a slash (/).) In CARS 60% of the initiatives are of this type. For TRAVEL, 83% of the initiatives in the non-ordering dialogues and 70% of those in the ordering dialogues are of this type.
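The core pattern just described - sequences of QT/AT pairs, optionally with embedded QD/AD clarification sequences - can be captured by a tiny recognizer. The sketch below is illustrative only (the move labels follow the paper's Type notation; the function name is invented):

```python
def accepts(moves):
    """Recognize dialogues of the form (QT (QD AD)* AT)*: a task-related
    initiative QT, zero or more embedded clarification pairs QD AD, and
    a closing database response AT."""
    i, n = 0, len(moves)
    while i < n:
        if moves[i] != "QT":
            return False
        i += 1
        # Embedded clarification sequences.
        while i + 1 < n and moves[i] == "QD" and moves[i + 1] == "AD":
            i += 2
        # Database response closing the exchange.
        if i < n and moves[i] == "AT":
            i += 1
        else:
            return False
    return True
```

For example, `["QT", "QD", "AD", "AT"]` is accepted, while a clarification request without a user answer, `["QT", "QD", "AT"]`, is not; handling such unanswered clarifications is one of the rule extensions discussed below for the TRAVEL system.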
Other task-related initiatives result in a response providing system information, QT/AS, or a response stating that the initiative was too vague, QT/AD. There are also a number of explicit calls for system information, QS/AS. The grammar rules discussed here only show two of the parameters of the dialogue objects. In fact, a number of parameters describing speaker, hearer, objects, properties, etc. are used. These descriptors provide additional information for deciding which actions to carry out. However, the complexity of the dialogue is constrained by the grammar.

The dialogue grammar is developed by first constructing a minimal dialogue grammar from an analysis of dialogues from the application, or an application of the same type, e.g. information retrieval from a database. This grammar is generalized and extended, using general knowledge on human-computer natural language interaction, with new rules to cover "obvious" additions not found in the initial grammar. In the CARS dialogues this included, for instance, Greetings and Farewells, which did not appear in the analysis of the dialogues. In the TRAVEL system it involved, among other things, allowing for multiple clarification requests and clarification requests not answered by the user. Some extensions not found in any of the dialogues were also added, for instance a rule for having the system prompt the user with a discourse continuation if (s)he becomes unsure who has the initiative. However, if a phenomenon requires sophisticated and complex mechanisms, it will be necessary to consider what will happen if the grammar is used without that addition. This also includes considering how probable it is that a certain phenomenon may occur.

For each new application, new simulations are needed to determine which phenomena are specific for that application. This is illustrated in the TRAVEL system dialogues where ordering is not allowed.
In these dialogues some users try to state an order although it is not possible. This resulted in a new rule, I-Jo/As, informing the users that ordering is not possible.

In the work by Kelley (1983) and Good et al. (1984), on lexical and grammatical acquisition, the customization process was saturated after a certain number of dialogues. The results presented here indicate that this is also the case for the dialogue structure. From a rather limited number of dialogues, a context-free grammar can be constructed which, with a few generalizations, will cover the interaction patterns occurring in the actual application (Jönsson, 1993).

Summary

This paper has presented a method for the development of dialogue managers for natural language interfaces for various applications. The method uses a general dialogue manager which is customized from a corpus of dialogues, with users interacting with the actual application, collected in Wizard of Oz-experiments. The corpus is used when customizing dialogue objects with parameters and heuristic principles for maintaining focus structure. It is also used when constructing a dialogue grammar which controls the dialogue.

The customization of the Dialogue Manager for two different applications - database information retrieval and database information retrieval plus ordering - was also presented. Customization was carried out for two different domains: properties of used cars and information on holiday trips. For both domains questions can be described as queries on specifications of domain concepts about objects in the database, and simple heuristic principles are sufficient for modelling the focus structure. A context-free dialogue grammar can accurately control the dialogue for both applications. The results on customization are very promising for the approach to dialogue management presented in this paper.
They show that the use of dialogue objects which can be customized for various applications, in combination with a dialogue grammar, is a fruitful way to build application-specific dialogue managers.

Acknowledgements

I am indebted to Lars Ahrenberg and Nils Dahlbäck for many valuable discussions. I would also like to thank Brant Cheikes, Jalal Maleki, Magnus Merkel and Ivan Rankin for commenting on previous versions of the paper. Ake Thuree did most of the implementation of the Dialogue Manager for the CARS system.

References

Ahrenberg, Lars; Jönsson, Arne; and Dahlbäck, Nils 1990. Discourse representation and discourse management for natural language interfaces. In Proceedings of the Second Nordic Conference on Text Comprehension in Man and Machine, Täby.

Ahrenberg, Lars 1987. Interrogative Structures of Swedish. Aspects of the Relation between Grammar and Speech Acts. Ph.D. Dissertation, Uppsala University.

Allen, James F. and Perrault, C. Raymond 1980. Analysing intention in utterances. Artificial Intelligence 15:143-178.

Bilange, Eric 1991. A task independent oral dialogue model. In Proceedings of the Fifth Conference of the European Chapter of the Association for Computational Linguistics, Berlin.

Brown, Gillian and Yule, George 1983. Discourse Analysis. Cambridge University Press.

Carberry, Sandra 1990. Plan Recognition in Natural Language Dialogue. MIT Press, Cambridge, MA.

Cohen, Philip R. and Perrault, C. Raymond 1979. Elements of a plan-based theory of speech acts. Cognitive Science 3:177-212.

Dahlbäck, Nils and Jönsson, Arne 1992. An empirically based computationally tractable dialogue model. In Proceedings of the Fourteenth Annual Meeting of the Cognitive Science Society, Bloomington, Indiana.

Dahlbäck, Nils; Jönsson, Arne; and Ahrenberg, Lars 1993. Wizard of Oz studies - why and how. In Proceedings from the 1993 International Workshop on Intelligent User Interfaces, Orlando, Florida.

Dahlbäck, Nils 1991.
Representations of Discourse, Cognitive and Computational Aspects. Ph.D. Dissertation, Linköping University.

Fraser, Norman and Gilbert, Nigel S. 1991. Simulating speech systems. Computer Speech and Language 5:81-99.

Good, Michael D.; Whiteside, John A.; Wixon, Dennis R.; and Jones, Sandra J. 1984. Building a user-derived interface. Communications of the ACM 27(10):1032-1043.

Grishman, Ralph and Kittredge, Richard I. 1986. Analysing Language in Restricted Domains. Lawrence Erlbaum.

Grishman, Ralph; Hirshman, Lynette; and Nhan, Ngo Thanh 1986. Discovery procedures for sublanguage selectional patterns: Initial experiments. Computational Linguistics 12(3):205-215.

Grosz, Barbara J. and Sidner, Candace L. 1986. Attention, intention and the structure of discourse. Computational Linguistics 12(3):175-204.

Hayes, Philip J. and Reddy, D. Raj 1983. Steps toward graceful interaction in spoken and written man-machine communication. International Journal of Man-Machine Studies 19:231-284.

Jönsson, Arne 1991. A dialogue manager using initiative-response units and distributed control. In Proceedings of the Fifth Conference of the European Chapter of the Association for Computational Linguistics, Berlin.

Jönsson, Arne 1993. Dialogue Management for Natural Language Interfaces - An Empirical Approach. Ph.D. Dissertation, Linköping University.

Kaplan, S. Jerrold 1983. Cooperative responses from a portable natural language database query system. In Computational Aspects of Discourse. MIT Press. 167-208.

Kelley, John F. 1983. Natural Language and Computers: Six Empirical Steps for Writing an Easy-to-Use Computer Application. Ph.D. Dissertation, The Johns Hopkins University.

Litman, Diane J. 1985. Plan Recognition and Discourse Analysis: An Integrated Approach for Understanding Dialogues. Ph.D. Dissertation, University of Rochester.

Ogden, William C. 1988. Using natural language interfaces. In Helander, M., editor 1988, Handbook of Human-Computer Interaction.
Elsevier Science Publishers B.V. (North Holland).

Polanyi, Livia and Scha, Remko 1984. A syntactic approach to discourse semantics. In Proceedings of the 10th International Conference on Computational Linguistics, Stanford.

Pollack, Martha E. 1990. Plans as complex mental attitudes. In Cohen, Philip R.; Morgan, Jerry; and Pollack, Martha E., editors 1990, Intentions in Communication. MIT Press.

Reichman, Rachel 1985. Getting Computers to Talk Like You and Me. MIT Press, Cambridge, MA.

Schegloff, Emanuel A. and Sacks, Harvey 1973. Opening up closings. Semiotica 7:289-327.
Reasoning with Characteristic Models

Henry A. Kautz, Michael J. Kearns, and Bart Selman
AI Principles Research Department
AT&T Bell Laboratories
Murray Hill, NJ 07974
{kautz, kearns, selman}@research.att.com

Abstract

Formal AI systems traditionally represent knowledge using logical formulas. We will show, however, that for certain kinds of information, a model-based representation is more compact and enables faster reasoning than the corresponding formula-based representation. The central idea behind our work is to represent a large set of models by a subset of characteristic models. More specifically, we examine model-based representations of Horn theories, and show that there are large Horn theories that can be exactly represented by an exponentially smaller set of characteristic models. In addition, we will show that deduction based on a set of characteristic models takes only linear time, thus matching the performance using Horn theories. More surprisingly, abduction can be performed in polynomial time using a set of characteristic models, whereas abduction using Horn theories is NP-complete.

Introduction

Logical formulas are the traditional means of representing knowledge in formal AI systems [McCarthy and Hayes, 1969]. The information implicit in a set of logical formulas can also be captured by explicitly recording the set of models (truth assignments) that satisfy the formulas. Indeed, standard databases are naturally viewed as representations of a single model. However, when dealing with incomplete information, the set of models is generally much too large to be represented explicitly, because a different model is required for each possible state of affairs. Logical formulas can often provide a compact representation of such incomplete information.
There has, however, been a growing dissatisfaction with the use of logical formulas in actual applications, both because of the difficulty in writing consistent theories, and the tremendous computation problems in reasoning with them. An example of the reaction against the traditional approach is the growing body of research and applications using case-based reasoning (CBR) [Kolodner, 1991]. By identifying the notion of a "case" with that of a "model", we can view the CBR enterprise as an attempt to bypass (or reduce) the use of logical formulas by storing and directly reasoning with a set of models. While the practical results of CBR are promising, there has been no formal explanation of how model-based representations could be superior to formula-based representations.

In this paper, we will prove that for certain kinds of information, a model-based representation is much more compact and enables much faster reasoning than the corresponding formula-based representation. The central idea behind our work is to represent a large set of models by a subset of characteristic models, from which all others can be generated efficiently. More specifically, we examine model-based representations of Horn theories, and show that there are large Horn theories that can be exactly represented by exponentially smaller sets of characteristic models.

In addition, we will show that deduction based on a set of characteristic models takes only linear time, thus matching the performance using Horn theories [Dowling and Gallier, 1984]. More surprisingly, abduction can be performed in polynomial time using a set of characteristic models, whereas abduction using Horn theories is NP-complete [Selman and Levesque, 1990]. This result is particularly interesting because very few other tractable classes of abduction problems are known [Bylander et al., 1989; Selman, 1990].
Horn Theories and Characteristic Models

We assume a standard propositional language, and use a, b, c, d, p, and q to denote propositional variables. A literal is either a propositional variable, called a positive literal, or its negation, called a negative literal. A clause is a disjunction of literals, and can be represented by the set of literals it contains. A clause C subsumes a clause C' iff all the literals in C appear in C'. A set (conjunction) of clauses is called a clausal theory, and is represented by the Greek letter Σ. We use n to denote the length of a theory (i.e., the number of literals). A clause is Horn if and only if it contains at most one positive literal; a set of such clauses is called a Horn theory. (Note that we are not restricting our attention to definite clauses, which contain exactly one positive literal. A Horn clause may be completely negative.)

(This is, of course, an oversimplified description of CBR; most CBR systems incorporate both a logical background theory and a set of cases.)

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

[Figure 1: The circled models are M0', which is the closure of the example set of models M0.]

A model is a complete truth assignment for the variables (equivalently, a mapping from the variables to {0, 1}). We sometimes write a model as a bit vector, e.g., [010...], to indicate that variable a is assigned false, b is assigned true, c is assigned false, etc. A model satisfies a theory if the theory evaluates to "true" in the model. Another way of saying this is that the theory is consistent with the model. When we speak of the "models of a theory Σ," we are referring to the set of models that satisfy the theory. This set is denoted by models(Σ).

We begin by developing a model-theoretic characterization of Horn theories. The intersection of a pair of models is defined as the model that assigns "true" to just those variables that are assigned "true" by both of the pair.
The closure of a set of models is obtained by repeatedly adding the intersection of the elements of the set to the set until no new models are generated.

Definition: Intersection and Closure
The intersection of models m1 and m2 over a set of variables is given by

  [m1 ∩ m2](x) = 1 if m1(x) = m2(x) = 1, and 0 otherwise.

Where M is a set of models, closure(M) is the smallest set containing M that is closed under ∩.

To illustrate the various definitions given in this section, we will use an example set M0 of models throughout. Let M0 = {[1110], [0101], [1000]}. The closure of this set is given by M0' = M0 ∪ {[0100], [0000]}. See Figure 1. The notion of closure is particularly relevant in the context of Horn theories:

Theorem 1 [McKinsey 1943] A theory Σ is equivalent to a Horn theory if and only if models(Σ) is closed under intersection.

(The proof in McKinsey is for first-order equational theories, and in fact led to the original definition of a Horn clause. A simpler, direct proof for the propositional case appears in [Dechter and Pearl, 1992].)

Thus there is a direct correspondence between Horn theories and sets of models that are closed under intersection. For example, consider the closure M0' of the models in the set M0 defined above. It is not difficult to verify that the models in the closure are exactly the models of the Horn theory Σ_H = {¬a ∨ ¬b ∨ c, ¬b ∨ ¬c ∨ a, ¬a ∨ ¬d, b ∨ ¬d, b ∨ ¬c}.

Next, we define the notion of a characteristic model. The characteristic models of a closed set M can be thought of as a minimal "basis" for M, that is, a smallest set that can generate all of M by taking intersections. In general, the characteristic models of any finite M can be defined as those elements of M that do not appear in the closure of the rest of M:

Definition: Characteristic Model
Where M is a finite set of models, the set of characteristic models is given by

  char(M) = {m ∈ M | m ∉ closure(M − {m})}

For example, the characteristic models of M0' are [1110], [1000], and [0101]. The other two models of M0' can be obtained from these characteristic models via intersection.
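The definitions of intersection, closure, and characteristic models translate directly into code. The following minimal sketch (function names are ours, not the paper's) reproduces the running example M0 = {[1110], [0101], [1000]}:

```python
from itertools import combinations

def intersect(m1, m2):
    # Componentwise intersection: true only where both models are true.
    return tuple(a & b for a, b in zip(m1, m2))

def closure(models):
    # Smallest superset of `models` closed under pairwise intersection.
    closed = set(models)
    while True:
        new = {intersect(a, b) for a, b in combinations(closed, 2)} - closed
        if not new:
            return closed
        closed |= new

def characteristic(models):
    # char(M) = {m in M | m not in closure(M - {m})}
    return {m for m in models if m not in closure(models - {m})}

# The running example M0, over variables a, b, c, d:
M0 = {(1, 1, 1, 0), (0, 1, 0, 1), (1, 0, 0, 0)}
M0_closure = closure(M0)   # adds [0100] and [0000]
```

Running `characteristic(M0_closure)` recovers exactly the three models of M0, as the text states: [0100] and [0000] are not characteristic because each is the intersection of other models in the closure.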
Note that according to this definition the characteristic elements of any set of models are unique and well-defined. Furthermore, we will prove that the characteristic models of a set can generate the complete closure of the set. Now, because the set of models of a Horn theory is closed (Theorem 1), it follows that we can identify a Horn theory with just the characteristic elements among its models. (In fact, henceforth we will simply say "the characteristic models of a Horn theory" to mean the characteristic subset of its models.) In general, this set may be much smaller than the set of all of its models. In summary:

Theorem 2 Let M be any finite set of models. Then, (1) the closure of the characteristic models of M is equal to the closure of M; and (2) if M is the set of models of a Horn theory, then the closure of the characteristic models of M is equal to M.

Proof: That closure(char(M)) ⊆ closure(M) is obvious. To prove equality, for a given M and distinct m1, m0 ∈ M, define m1 >M m0 iff there exist m2, ..., mn ∈ M such that m0 = m1 ∩ m2 ∩ ... ∩ mn, while m0 ≠ m2 ∩ ... ∩ mn. Define ≥M as the reflexive and transitive closure of >M. We make the following three claims: (i) The directed graph corresponding to >M is acyclic, because if m1 >M m0 then the number of variables set to "true" in m1 is greater than the number set to "true" in m0. (ii) The elements of M that are maximal under >M are characteristic. This is so because if m ∈ closure(M − {m}), there must be some m1, ..., mn ∈ M such that m = m1 ∩ ... ∩ mn. But then (in particular) m1 >M m, so m is not maximal under >M. (iii) For any m ∈ M, there is a subset M' of elements of M maximal under >M such that m = ∩M'. This set is simply defined by M' = {m' | m' ≥M m and m' is maximal under >M}. In graphical terms, M' consists of the sources of the graph
obtained by restricting the graph of >M to nodes that are ≥M m. Therefore, M ⊆ closure(char(M)), so closure(M) = closure(char(M)). Claim (2) then follows from the previous observations together with Theorem 1. □

As an aside, one should note that the notion of a characteristic model is not the same as the standard definition of a maximal model. By definition, any m ∈ M is a maximal model of M iff there is no m' ∈ M such that the variables assigned to "true" by m' are a superset of those assigned to "true" by m. It is easy to see that all maximal models of a set (or theory) are characteristic, but the reverse does not hold. For example, the model [1000] in M0' is an example of a non-maximal characteristic model.

Size of Representations

In this section we will examine the most concise way of representing the information inherent in a Horn theory. We have three candidates: a set of Horn clauses; the complete set of models of the theory; and the set of characteristic models of the theory. We can quickly eliminate the complete set of models from contention. Obviously, it is at least as large as the set of characteristic models, and often much larger. Furthermore, every Horn theory with K models over n variables can be represented using at most Kn² Horn clauses [Dechter and Pearl, 1992]. Thus, up to a small polynomial factor, the complete set of models is also always at least as large as the clausal representation. Neither of the other two representations strictly dominates the other. We first show that in some cases the representation using characteristic models can be exponentially smaller than the best representation that uses Horn clauses.

Theorem 3 There exist Horn theories with O(n²) characteristic models for which the size of the smallest clausal representation is Ω(2^n).

Proof: Consider the theory Σ = {¬x1 ∨ ¬x2 ∨ ... ∨ ¬xn | xi ∈ {pi, qi}}. The size of Σ is O(2^n).
Moreover, one can show that there is no shorter clausal form for Σ, by using a proof very similar to the one in [Kautz and Selman, 1992], but the size of its set of characteristic models is polynomial in n. This can be seen as follows. Write a model as a truth assignment to the variables p1 q1 p2 q2 ... pn qn. From the clauses in Σ, it is clear that in each model there must be some pair pi and qi where both letters are assigned false (otherwise, there is always some clause eliminating the model). Without loss of generality, let us consider the set of models in which p1 and q1 are both assigned false. Each of the clauses in Σ is now satisfied, so we can set the other letters to any arbitrary truth assignment. The characteristic models of this set are

  [00 11 11 ... 11]    ...    [00 11 11 ... 11]
  [00 01 11 ... 11]    ...    [00 11 11 ... 01]
  [00 10 11 ... 11]    ...    [00 11 11 ... 10]

The three models in the first column represent all the settings of the second pair of letters. (Note that 00 can be obtained by intersecting the 2nd and the 3rd model.) Each triple handles the possible settings of one of the pairs. From these 3(n − 1) models, we can generate via intersections all possible truth assignments to the letters in all pairs other than the first pair. For each pair, we have a similar set of models with that pair set negatively. And, again, each set can be generated using 3(n − 1) models. So, the total number of characteristic models is at most O(n²). □

The following theorem, however, shows that in other cases the set of characteristic models can be exponentially larger than the best equivalent set of Horn clauses.

Theorem 4 There exist Horn theories of size O(n) with 2^(n/2) characteristic models.

Proof: Consider the theory Σ given by the clauses (¬a ∨ ¬b), (¬c ∨ ¬d), (¬e ∨ ¬f), etc.
The set M of characteristic models of this theory contains all the models where the variables in each consecutive pair, such as (a, b), (c, d), (e, f), etc., are assigned opposite truth values (i.e., either [01] or [10]). So, we get the models [010101...], [100101...], [011001...], ..., [101010...]. There are 2^(n/2) such models, where n is the number of variables. It is easy to see that these are all maximal models of the theory, and as we observed earlier, all such models are characteristic. (One can go on to argue that there are no other characteristic models in this case.) □

Thus we see that sometimes the characteristic model set representation offers tremendous space savings over the clausal representation, and vice versa. This suggests a strategy if one wishes to compactly represent the information in a closed set of models: interleave the generation of both representations, and stop when the smaller one is completed. The characteristic models in a closed set can be efficiently found by selecting each model which is not equal to the intersection of any two models in the set. This operation takes O(K²n) time, where K is the total number of models and n the number of variables. The clausal theory can be found using the algorithms described in [Dechter and Pearl, 1992] and [Kautz et al., to appear].

Deduction using Characteristic Models

One of the most appealing features of Horn theories is that they allow for fast inference. In the propositional case, queries can be answered in linear time [Dowling and Gallier, 1984]. However, there is no a priori reason why a representation based on characteristic models would also enable fast inference. Nevertheless, in this section we show that there is indeed a linear-time algorithm for deduction using characteristic models. We will take a query to be a formula in conjunctive normal form - that is, a conjunction of clauses.
It is easy to determine if a query follows from a complete set of models: you simply verify that the query evaluates to "true" on every model. But if the representation is just the set of characteristic models, such a simple approach does not work. For example, let the query Q be the formula a ∨ b, and let the characteristic set of models be M0 = {[1110], [0101], [1000]}, as defined earlier. It is easy to see that Q evaluates to true in each member of M0. However, Q does not logically follow from the Horn theory with characteristic model set M0; in other words, Q does not hold in every model in the closure of M0. For example, the query is false in [0101] ∩ [1000] = [0000].

There is, however, a more sophisticated way of evaluating queries on the set of characteristic models that does yield an efficient sound and complete algorithm. Our approach is based on the idea of a "Horn-strengthening", which we introduced in [Selman and Kautz, 1991].

Definition: Horn-strengthening
A Horn clause CH is a Horn-strengthening of a clause C iff CH is a Horn clause, CH subsumes C, and there is no other Horn clause that subsumes C and is subsumed by CH.

Another way of saying this is that a Horn-strengthening of a clause is generated by striking out positive literals from the clause just until a Horn clause is obtained. For example, consider the clause C = p ∨ q ∨ ¬r. The clauses p ∨ ¬r and q ∨ ¬r are Horn-strengthenings of C. Any Horn clause has just one Horn-strengthening, namely the clause itself.

Suppose the query is a single clause. Then the following theorem shows how to determine if the query follows from a knowledge base represented by a set of characteristic models.

Theorem 5 Let Σ be a Horn theory and M its set of characteristic models. Further, let C be any clause. Then Σ ⊨ C iff there exists some Horn-strengthening CH of C such that CH evaluates to "true" in every model in M.

Proof: Suppose Σ ⊨ C.
By Lemma 1 in [Selman and Kautz, 1991], Σ ⊨ CH for some Horn-strengthening CH of C. So CH evaluates to "true" in every model of Σ, and thus in every member of M. On the other hand, suppose that there exists some Horn-strengthening CH of C such that CH evaluates to "true" in every model in M. By Theorem 1, because the elements of M are models of the Horn theory CH, the elements of the closure of M are all models of CH. But the closure of M is the set of models of Σ; thus Σ ⊨ CH. Since CH ⊨ C, we have that Σ ⊨ C. □

In the previous example, one can determine that a ∨ b does not follow from the theory with characteristic models M0, because neither the Horn-strengthening a nor the Horn-strengthening b holds in all of {[1110], [0101], [1000]}.

A clause containing k literals has at most k Horn-strengthenings, so one can determine if it follows from a set of characteristic models in k times the cost of evaluating the clause on each characteristic model. In the more general case the query is a conjunction of clauses. Such a query can be replaced by a sequence of queries, one for each conjunct. We therefore obtain the following theorem:

Theorem 6 Let a Horn theory Σ be represented by its set of characteristic models M, and let α be a formula in conjunctive normal form. It is possible to determine if Σ ⊨ α in time O(|M| · |α|²), where |M| is the total length of the representation of M.

Finally, using more sophisticated data structures we can bring the complexity down to truly linear time, O(|M| + |α|) [Kautz et al., 1993].

Abduction using Characteristic Models

Another central reasoning task for intelligent systems is abduction, or inference to the best explanation [Peirce, 1958]. In an abduction problem, one tries to explain an observation by selecting a set of assumptions that, together with other background knowledge, logically entail the observation.
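Theorem 5 reduces entailment checking to clause evaluation over the characteristic models. The sketch below is our own encoding for illustration (literals as strings, with '-' marking negation), not code from the paper:

```python
def satisfies(model, clause):
    # model: dict mapping variable name to 0/1; clause: set of literals,
    # written as 'a' (positive) or '-a' (negative).
    return any(model[lit.lstrip('-')] == (0 if lit.startswith('-') else 1)
               for lit in clause)

def horn_strengthenings(clause):
    # All ways of keeping at most one positive literal of the clause.
    neg = {lit for lit in clause if lit.startswith('-')}
    pos = [lit for lit in clause if not lit.startswith('-')]
    if len(pos) <= 1:
        return [set(clause)]       # already Horn: the clause itself
    return [neg | {p} for p in pos]

def entails(char_models, clause):
    # Theorem 5: the theory entails C iff some Horn-strengthening of C
    # holds in every characteristic model.
    return any(all(satisfies(m, s) for m in char_models)
               for s in horn_strengthenings(clause))

# Characteristic models M0 of the running example, over a, b, c, d:
M0 = [dict(zip("abcd", bits))
      for bits in [(1, 1, 1, 0), (0, 1, 0, 1), (1, 0, 0, 0)]]
```

On the running example, the query a ∨ b is rejected (its strengthenings a and b each fail on some characteristic model), while the Horn clauses ¬a ∨ ¬d and b ∨ ¬d of Σ_H are correctly entailed.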
This kind of reasoning is central to many systems that perform diagnosis or interpretation, such as the ATMS. The notion of an explanation can be formally defined as follows [Reiter and de Kleer, 1987]:

Definition: Explanation
Given a set of clauses Σ, called the background theory, a subset A of the propositional letters, called the assumption set, and a query letter q, an explanation E for q is a minimal subset of unit clauses with letters from among A such that
1. Σ ∪ E ⊨ q, and
2. Σ ∪ E is consistent.

Note that an explanation E is a set of unit clauses, or equivalently, a single conjunction of literals. For example, let the background theory be Σ = {a, ¬a ∨ ¬b ∨ ¬c ∨ d} and let the assumption set be A = {a, b, c}. The conjunction b ∧ c is an explanation for d.

It is obvious that in general abduction is harder than deduction, because the definition involves both a test for entailment and a test for consistency. However, abduction can remain hard even when the background theory is restricted to languages in which both tests can be performed in polynomial time. Selman and Levesque [1989] show that computing such an explanation is NP-complete even when the background theory contains only Horn clauses, despite the fact that the tests take only linear time for such theories. The problem remains hard because all known algorithms have to search through an exponential number of combinations of assumptions to find an explanation that passes both tests.
It is therefore interesting to discover that abduction problems can be solved in polynomial time when the background theory is represented by a set of characteristic models. We give the algorithm for this computation in Figure 2. The abduction algorithm works by searching for a characteristic model in which the query holds. It then sets E equal to the strongest set of assumptions that are compatible with the model, and tests whether this E rules out all models of the background theory in which the query does not hold. This step is performed by the test closure(M) ⊨ (∧E) ⊃ q, and can be performed in polynomial time using the deduction algorithm described in the previous section. (Note that the formula to be deduced is a single Horn clause.) If the test succeeds, then the assumption set is minimized by deleting unnecessary assumptions. Otherwise, if no such characteristic model is in the given set, then no explanation for the query exists. Note that the minimization step simply eliminates redundant assumptions, and does not try to find an assumption set of the smallest possible cardinality, so no combinatorial search is necessary.

Explain(M, A, q)
  For each m in M do
    If m ⊨ q then
      E ← all letters in A that are assigned "true" by m
      If closure(M) ⊨ (∧E) ⊃ q then
        Minimize E by deleting as many elements as possible
          while maintaining the condition that closure(M) ⊨ (∧E) ⊃ q.
        return E
      endif
    endif
  endfor
  return "false"
end.

Figure 2: Polynomial time algorithm for abduction. M is a set of characteristic models, representing a Horn theory; A is the assumption set; and q is the letter to be explained. The procedure returns a subset of A, or "false" if no explanation exists.
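A direct transcription of the Figure 2 procedure might look as follows. The names are ours; for the usage example we pass the full model set of Σ = {a, ¬a∨¬b∨¬c∨d}, since the entailment test for a single Horn clause evaluates the clause over the model set either way, and the characteristic subset would give the same answer.

```python
def satisfies(model, pos, neg):
    """model: set of true letters; clause reads (∨ pos) ∨ (∨ ¬neg)."""
    return bool(pos & model) or bool(neg - model)

def entailed_horn(models, pos, neg):
    """For a single Horn clause, entailment holds iff the clause
    is true in every model in the given set."""
    return all(satisfies(m, pos, neg) for m in models)

def explain(models, A, q):
    """Sketch of the Figure 2 procedure: take the strongest assumption set
    compatible with a model of q, check it entails q, then minimize it."""
    for m in models:
        if q not in m:
            continue
        E = set(A) & m                          # assumptions true in m
        if entailed_horn(models, {q}, E):       # closure(M) ⊨ (∧E) ⊃ q
            for e in list(E):                   # drop redundant assumptions
                if entailed_horn(models, {q}, E - {e}):
                    E.discard(e)
            return E
    return None                                 # no explanation exists

# Models of Σ = {a, ¬a∨¬b∨¬c∨d}; explain(ms, {'a','b','c'}, 'd') yields {b, c}.
ms = [{'a'}, {'a', 'b'}, {'a', 'c'}, {'a', 'd'},
      {'a', 'b', 'd'}, {'a', 'c', 'd'}, {'a', 'b', 'c', 'd'}]
```

Note how the minimization discards a (already in Σ) while b and c each survive, reproducing the explanation from the worked example.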
It is easy to see that if the algorithm does find an explanation it is sound, because the test above verifies that the query follows from the background theory together with the explanation, and the fact that the model m is in M (and thus also in the closure of M) ensures that the background theory and the explanation are mutually consistent. Furthermore, if the algorithm searched through all models in the closure of M, rather than just M itself, it would be readily apparent that the algorithm is complete. (The consistency condition requires that the explanation and the query both hold in at least one model of the background theory.) However, we will argue that it is in fact only necessary to consider the maximal models of the background theory; and since, as we observed earlier, the maximal models are a subset of the characteristic models, the algorithm as given is complete. So suppose m is in closure(M), and E is a subset of A such that q and all of E hold in m. Let m′ be any maximal model of M (and thus also a maximal model of closure(M)) that subsumes m; at least one such m′ must exist. All the variables set to "true" in m are also set to "true" in m′, and furthermore, q and all of E consist of only positive literals. Therefore, q and E both hold in m′ as well. Thus the algorithm is sound and complete. In order to bound its running time, we note that the outer loop executes at most |M| times, the inner (minimizing) loop at most |A| times, and each entailment test requires at most O(|M| · |A|²) steps. Thus the overall running time is bounded by O(|M|² · |A|³). In summary:

Theorem 7 Let M be the set of characteristic models of a background Horn theory, let A be an assumption set, and let q be a query. Then one can find an abductive explanation of q in time O(|M|² · |A|³).

Again, using better data structures, we can reduce the complexity to be quadratic in the combined length of the query and knowledge base.
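The closure and maximal-model notions used in this argument are easy to make concrete. Below is a small sketch of ours, with the characteristic models of the abduction example's theory Σ = {a, ¬a∨¬b∨¬c∨d} worked out by hand (our computation, not the paper's): closing them under intersection recovers all seven models of Σ.

```python
def closure(models):
    """Close a set of models (frozensets of true letters) under intersection."""
    S = set(models)
    while True:
        new = {a & b for a in S for b in S} - S
        if not new:
            return S
        S |= new

# Characteristic models of Σ = {a, ¬a∨¬b∨¬c∨d}, worked out by hand:
M = [frozenset('ab'), frozenset('ac'), frozenset('abd'),
     frozenset('acd'), frozenset('abcd')]

full = closure(M)          # regenerates all seven models, including {a} and {a,d}
maximal = [m for m in M if not any(m < x for x in M)]   # here just {a,b,c,d}
```

The maximal models (those with no proper superset in the set) are a subset of the characteristic models, which is what licenses restricting the abduction search to M.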
The fact that abduction is hard for clausal Horn theories, but easy when the same background theory is represented by a set of characteristic models, does not, of course, indicate that P = NP! It only means that it may be difficult to generate the characteristic models of a given Horn theory: there may be exponentially many characteristic models, or even if there are few, they may be hard to find. Nonetheless, it may be worthwhile to invest the effort to "compile" a useful Horn theory into its set of characteristic models, just in case the latter representation does indeed turn out to be of reasonable size. This is an example of "knowledge compilation" [Selman and Kautz, 1991], where one is willing to invest a large amount of off-line effort in order to obtain fast run-time inference. Alternatively, one can circumvent the use of a formula-based representation altogether by constructing the characteristic models by hand, or by learning them from empirical data.3

Conclusions

In this paper, we have demonstrated that, contrary to prevalent wisdom, knowledge-based systems can efficiently use representations based on sets of models rather than logical formulas. Incomplete information does not necessarily make model-based representations unwieldy, because it is possible to store only a subset of characteristic models that is equivalent to the entire model set. We showed that for Horn theories neither the formula-based nor the model-based representation dominates the other in terms of size, and that sometimes one can offer an exponential savings over the other. We also showed that the characteristic-model representation of Horn theories has very good computational properties, in that both deduction and abduction can be performed in polynomial time. On the other hand, all known and foreseeable algorithms for abduction with Horn clauses are of worst-case exponential complexity.
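A naive "compiler" along these lines can be sketched directly from the definitions: enumerate the models of the Horn theory, then discard every model that is the intersection of the models strictly containing it. The function names and the brute-force enumeration are ours; this is exponential in the number of letters, so it only illustrates the semantics, not a practical compilation procedure.

```python
from itertools import product

def horn_models(clauses, letters):
    """All models of a clause set; a clause (pos, neg) reads (∨pos) ∨ (∨¬neg)."""
    out = []
    for bits in product([0, 1], repeat=len(letters)):
        m = frozenset(l for l, b in zip(letters, bits) if b)
        if all((pos & m) or (neg - m) for pos, neg in clauses):
            out.append(m)
    return out

def characteristic_models(models):
    """A model is characteristic iff it is not the intersection of the models
    strictly containing it (any redundant model is such an intersection)."""
    S = set(models)
    char = []
    for m in S:
        supers = [x for x in S if m < x]
        if not supers or frozenset.intersection(*supers) != m:
            char.append(m)
    return char

sigma = [({'a'}, set()), ({'d'}, {'a', 'b', 'c'})]   # Σ = {a, ¬a∨¬b∨¬c∨d}
M = characteristic_models(horn_models(sigma, 'abcd'))
```

For this Σ, five of the seven models survive compilation; {a} and {a, d} are dropped because they are intersections of the others.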
This paper begins to provide a formal framework for understanding the success and limitations of some of the more empirical work in AI that uses model-like representations. Earlier proposals to use models in formal inference, such as Levesque's proposal for "vivid" representations [Levesque, 1986], rely on using a single, database-like model, and thus have difficulty handling incomplete information. As we have seen, our approach is more general, because we represent a set of models. We are currently investigating extensions of the notion of a characteristic model to other useful classes of theories.

3. As mentioned earlier, if all models are given, finding the characteristic models takes only polynomial time. However, the complexity of learning the characteristic models when the algorithm can only sample from the complete set of models is an interesting open problem. Some preliminary results on the complexity of this problem have recently been obtained by D. Sloan and R. Schapire (personal communication).

References

Bylander, Tom; Allemang, Dean; Tanner, Michael C.; and Josephson, John R. 1989. Some results concerning the computational complexity of abduction. In Proceedings of KR-89, Toronto, Ontario, Canada. Morgan Kaufmann. 44.

Dechter, Rina and Pearl, Judea 1992. Structure identification in relational data. Artificial Intelligence 58(1-3):237-270.

Dowling, William F. and Gallier, Jean H. 1984. Linear time algorithms for testing the satisfiability of propositional Horn formulae. Journal of Logic Programming 3:267-284.

Kautz, Henry and Selman, Bart 1992. Speeding inference by acquiring new concepts. In Proceedings of AAAI-92, San Jose, CA.

Kautz, Henry; Kearns, Michael; and Selman, Bart 1993. Reasoning with characteristic models. Technical report, AT&T Bell Laboratories, Murray Hill, NJ.

Kautz, Henry A.; Kearns, Michael J.; and Selman, Bart. To appear. Horn approximations of empirical data. Artificial Intelligence.

Kolodner, Janet L. 1991.
Improving human decision making through case-based decision aiding. AI Magazine 12(2):52-68.

Levesque, Hector 1986. Making believers out of computers. Artificial Intelligence 30(1):81.

McCarthy, J. and Hayes, P. J. 1969. Some philosophical problems from the standpoint of artificial intelligence. In Michie, D., editor, Machine Intelligence 4. Ellis Horwood, Chichester, England. 463ff.

McKinsey, J. C. C. 1943. The decision problem for some classes of sentences without quantifiers. Journal of Symbolic Logic 8(3):61.

Peirce, Charles S. 1958. Collected Papers of Charles Sanders Peirce. Harvard University Press, Cambridge, MA.

Reiter, Raymond and de Kleer, Johan 1987. Foundations of assumption-based truth maintenance systems: Preliminary report. In Proceedings of AAAI-87. 183.

Selman, Bart and Kautz, Henry 1991. Knowledge compilation using Horn approximations. In Proceedings of AAAI-91, Anaheim, CA. 904-909.

Selman, Bart and Levesque, Hector J. 1990. Abductive and default reasoning: A computational core. In Proceedings of AAAI-90, Boston, MA.

Selman, Bart 1990. Tractable default reasoning. Ph.D. Thesis, Department of Computer Science, University of Toronto, Toronto, Ontario.
Oregon Graduate Institute of Science & Technology
19600 N.W. von Neumann Drive
Beaverton, OR 97006-1999 USA
novick@cse.ogi.edu, wardk@cse.ogi.edu

Abstract

This work develops a computational model for representing and reasoning about dialogue in terms of the mutuality of belief of the conversants. We simulated cooperative dialogues at the speech act level and compared the simulations with actual dialogues between pilots and air traffic controllers engaged in real tasks. In the simulations, addressees and overhearers formed beliefs and took actions appropriate to their individual roles and contexts. The result is a computational model capable of representing the evolving context of complete real-world multiparty task-oriented conversations in the air traffic control domain.¹

Introduction

This paper addresses the question of how mutuality is maintained in conversation, and specifically how mutuality can be usefully modeled in multiparty computational dialogue systems. The domain we studied and modeled is air traffic control (ATC). The problem we are solving is related to the distributed artificial intelligence research on ATC communications (e.g., Findler & Lo 1988), except that we are explicitly dealing with the mutuality aspects of interaction. While other ATC studies have developed domain models suitable for distributed processing via cooperating agents, we are interested in fundamental knowledge about how such cooperation is achieved through linguistic interaction. Interestingly, we find that the mutuality model by itself explains a great deal of the ATC communications that we observed. This model and its associated representation of mutuality are not specific to ATC. While ATC served as a domain for the simulation, the mutuality maintenance mechanism presented here does not depend on the domain's specifics.
Thus, the representation should be useful for other multiparty interaction in, for example, applications of distributed artificial intelligence. We define and validate a speech act model of ATC dialogue built around the mutuality of beliefs among conversants and overhearers. Complete actual dialogues between air traffic controllers and pilots were explicated in terms of task, belief, and event. The model was tested by computational simulation and was found to be successful in predicting and explaining the course of real-world conversation at the speech act level. In particular, we replicated a number of actual conversations using speech-act models of air traffic control and conversational mutuality. The simulations account for and produce the effects of belief formation in computational agents representing both conversants and overhearers.

1. This research was supported by National Science Foundation grant No. IRI-9110797.

Motivation

Spoken language understanding systems require dialogue-level language models that are capable of representing and reasoning about the course of real-world task-oriented conversation. Speech act theory (Austin 1962; Searle 1969; Searle 1975; Searle & Vanderveken 1985) suggests that we can motivate dialogue and explain conversational coherence by modeling conversation in terms of the conversants' goals and their plans for reaching those goals. Language as action provides conversants with means for achieving mutual goals (Winograd & Flores 1986). This approach accords well with findings that the structure of discourse about a particular task closely follows the structure of the task itself (e.g., Oviatt & Cohen 1988; Cohen & Perrault 1979; Grosz & Sidner 1986). In this view, language is just another tool to be used in accomplishing some goal, and utterance planning becomes incorporated into the larger task planning (e.g., Power 1979; Cohen 1984; Litman & Allen 1987).
Clark and his colleagues have proposed a theory of conversation as a collaborative process in which conversants collaborate in building a mutual model of the conversation (Clark & Marshall 1981; Clark & Wilkes-Gibbs 1986; Schober & Clark 1989; Clark & Schaefer 1989). The conversants' beliefs about the mutuality of their knowledge serve to motivate and explain the information that conversants exchange. The collaborative view of conversation offers an explanation for conversational coherence by viewing conversation as an ensemble work in which the conversants cooperatively build a model of shared belief (Suchman 1987; Clark & Schaefer 1989). Our model is based on a synthesis of these principles, in that conversation is viewed as an attempt to establish and build upon mutual knowledge using speech acts. This synthesis was first proposed by Novick (1988) to explain conversational control acts. More recently, Traum and Hinkelman (1992) have developed a similar model of mutuality maintenance as part of their theory of "conversation acts." In this study, we use these principles to explain and motivate domain-level acts in a real-world task.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

This article, then, explores the domain of air traffic control as a speech-act model, produces a computational representation of the beliefs, actions and inferences of the conversants, and validates the model and representation through detailed simulation of observed ATC dialogue. We pay particular attention to the maintenance of mutuality of belief in both conversants and overhearers. The next section of this article discusses the ATC domain and describes the dialogues we analyzed and modeled. We present the computational model, discussing the goals of the agents in the domain and presenting our representation of their beliefs.
We next present inference rules for speech acts in the ATC domain, including detailed discussion of their mutuality effects. Based on the representation and inference rules, we then simulate observed dialogue in a multi-agent rule-based environment; we describe the simulation environment and procedure and discuss the results of the simulations.

The Air Traffic Control Domain

Developing and validating a model of mutuality of belief in conversation, particularly for multiple party interaction, requires a domain in which there are naturally occurring cooperative tasks, relative ease of determining initial beliefs, and multiple conversants. For this study, we chose to examine air traffic control dialogue. Because we wanted the model to represent actual dialogue and not just encode the formalisms suggested by the Federal Aviation Administration (FAA 1989), we studied protocols of actual pilot-controller conversation (Ward, Novick, & Sousa 1990) rather than simply encoding the FAA's rules for ATC communication. In this study we examined four complete ATC dialogues, which ranged in size from 14 to 19 transmissions. Each of the four dialogues represents the entire conversation between an approach controller and the pilot of a commercial flight approaching the airport to land. Controller and pilot are cooperatively performing the task of guiding the aircraft through the controller's airspace to the approach gate, a point from which the pilot can complete the approach and landing without further guidance from the controller. This task is referred to as an Instrument Landing System (ILS) approach procedure. For a detailed description of the ILS approach procedure from the controller's standpoint, see Air Traffic Control (FAA 1989). The pilot's view is described in Airman's Information Manual (FAA 1991). Rosenbaum (1988) provides a good explanation for the non-pilot.
In ATC dialogue, the conversants often hold strong expectations about what the other knows and what the other will say. Pilot and controller are both trained in ILS approach procedures, and each knows at least approximately what the other should do under normal circumstances. The system is not infallible, however, and circumstances are not always normal. The purpose of much of their dialogue, then, is to establish and confirm the mutuality of information that each thinks the other probably already knows or expects. Our goal, then, is to build a working model of mutuality that reflects this process, with representations that can usefully express differences among beliefs held by a single party, mutually held by some combination of parties, and perceived as held by the same or other combinations. The speech acts of the parties should reflect the natural consequences of their individual needs to achieve mutual understanding: conversants with goals that require mutuality for their achievement should act appropriately; similarly capable conversants with goals that do not need mutuality should refrain from such actions.

Computational Model

Our model of mutuality in multiparty dialogue encompasses relations among acts, utterances, and beliefs. Figure 1 summarizes this conceptual model. Conversants are modeled as autonomous agents, each having a separate belief space. An agent's beliefs may include beliefs about another agent's beliefs, depicted as smaller areas within an agent's belief space. Agents communicate in a multi-step process:
1. Agent A forms the intention to perform an act directed toward Agent B. This intention is based on A's beliefs about B's beliefs along with other beliefs that A holds.
2. A's intended act is expressed as an utterance. This utterance is transmitted to other agents in the system.
3. B interprets this utterance as an act based on B's own beliefs and B's beliefs about A's beliefs.
Note that if B's beliefs about A are in error or incomplete, B may infer a different act than A intended (misunderstanding).
4. Agent B's belief space is updated to reflect the effects that A's act had on B's beliefs.

[Figure 1: Conceptual Model of Agent Interaction (diagram of the belief spaces of Agents A, B, and C, with an utterance passing among them)]

Agent C represents an overhearer, an agent who hears A's utterance but is not the intended target (shown by a gray line from utterance to agent). Agent C interprets the overheard utterance in light of C's beliefs, including C's beliefs about the beliefs of A and B. Of course, C may arrive at an interpretation of the utterance that differs from that of either A or B; overhearers are at a disadvantage in following a conversation in that they do not actively participate in the collaborative process of reaching mutual belief (Schober & Clark 1989). However, overhearing plays a crucial role in ATC communications. Pilots maintain their situational awareness by monitoring the transmissions of others on the frequency and may respond or refer to dialogue that was not directed to them (Dunham 1991). The representation therefore explicitly supports multiple conversants by accounting for the effect that an utterance has on overhearers, including mutuality-of-belief judgements involving multiple agents. Misunderstanding is modeled in terms of inconsistent beliefs; conversants' mental models of the conversation may have diverged without the conversants realizing it. A person may then take an action intended to embody a certain speech act, only to be surprised when the other's response indicates that a completely different speech act was understood. Similarly, disagreements can be modeled in terms of the conversants' beliefs about other conversants' differing beliefs. This model therefore supports both recognized and unrecognized belief inconsistency.
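A minimal sketch of this broadcast-and-interpret cycle follows. It is our illustration, not the actual implementation described later in the paper; the overhearer's callsign "ua333" is hypothetical, and interpretation is reduced to recording that the act occurred.

```python
class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}           # proposition -> (truth_value, belief_groups)

    def interpret(self, speaker, addressee, utterance):
        # Each hearer, addressee and overhearer alike, interprets the utterance
        # against its own belief space, so interpretations may diverge.
        prop = ('occurred', speaker, addressee, utterance)
        self.beliefs[prop] = ('true', [[self.name]])

def transmit(speaker, addressee, utterance, agents):
    """An utterance reaches every agent on the frequency, not just the addressee."""
    for hearer in agents:
        hearer.interpret(speaker.name, addressee.name, utterance)

pilot, controller, other = (Agent(n) for n in ('sun512', 'approach', 'ua333'))
transmit(pilot, controller, 'report(altitude, 5000)', [pilot, controller, other])
```

After the transmission, the overhearer `other` holds a belief about the act just as the addressee does, though each agent ascribes it only to itself until mutuality is established.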
This conceptual model is similar in style to that proposed by Allen and Traum (Allen 1991; Traum 1991; Traum & Hinkelman 1992), in that a distinction is made between privately held and shared knowledge in understanding a task-oriented conversation. Allen's TRAINS model is built around the negotiation and recognition of plans, however, and explicitly assumes that there are exactly two agents: system and person. The ViewGen algorithm (Ballim, Wilks, & Barnden 1991) uses multiple environments to permit the system to ascribe and reason about conflicting beliefs among multiple agents. ViewGen's emphasis is on understanding metaphor in text, however, and not on understanding task-oriented conversations. Here, we complement these approaches by providing a systematic representation for mutuality of belief.

Representing Belief

An agent's understanding of the world is represented as a set of beliefs of the form belief(proposition, truth-value, belief-groups). The proposition clause represents the item about which a belief is held. An agent may hold beliefs about the state of the world, about acts that have occurred, and about acts that agents intend; these last two represent an agent's memory of past actions and expectations of future actions. The truth-value clause represents the truth value that the belief-group assigns to the proposition (as understood by the agent in whose belief space the belief appears). This clause may take the values true, false, or unknown. Clauses which do not appear in the agent's belief space are considered to have a truth-value of unknown. The truth-value does not indicate the agent's opinion about the "fact" represented in the proposition; rather, it reflects the agent's understanding of the beliefs of the belief group. This indirection allows an agent to ascribe beliefs to other agents. A belief-group is a set of agents who mutually hold the belief.
Each belief thus has one or more associated belief-groups that express the agent's view of the mutuality status of the proposition. For example, the belief

belief(altitude([sun512],5000), true, [[approach],[sun512],[approach,sun512]])

means the agent believes that the approach controller and the pilot of Sundance 512 each believe (individually) that Sundance 512 is at 5000 feet; the agent also believes that controller and pilot have established that they mutually believe the pilot is at 5000 feet. While we have paid special attention to mutuality, this representation clearly does not solve every problem associated with beliefs about actions in the world. Nevertheless, the representation is adequate for the purposes of demonstrating the effectiveness of our theory of conversational mutuality; its limitations mainly concern "domain" knowledge, particularly with respect to temporal and conditional actions. Thus the belief representation does not accommodate conditions that are in the process of changing from one state to another; for example, that an aircraft is slowing from 190 knots to 170 knots. It does not capture notions of time; a more general representation would add a time-interval argument to indicate when the agent believed the action would or did occur. It also does not capture events that take place without being triggered by the actions of some agent. As we have noted, though, these limitations are acceptable for this representation because our purpose is to model mutuality dialogue and not to model events in general.

Core Speech Acts

In implementing and testing the belief model, we defined a small set of speech acts augmented with a preliminary set of domain acts and states sufficient to represent the dialogues being studied. These acts are: acknowledge, authorize, contact, direct, report, request.
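The belief(proposition, truth-value, belief-groups) form can be mirrored directly in code. The class layout and the helper `mutual_for` below are our own sketch of the representation, using the altitude example from the text:

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    proposition: tuple
    truth_value: str = 'true'           # 'true' | 'false' | 'unknown'
    belief_groups: list = field(default_factory=list)  # groups holding it mutually

    def mutual_for(self, agents):
        """Has this exact group of agents established the belief as mutual?"""
        return sorted(agents) in [sorted(g) for g in self.belief_groups]

# The altitude example from the text:
b = Belief(('altitude', 'sun512', 5000), 'true',
           [['approach'], ['sun512'], ['approach', 'sun512']])
```

Here `b.mutual_for(['sun512', 'approach'])` holds, since pilot and controller have established mutuality, while a group including a third agent (e.g., a hypothetical 'ua333') would not.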
Acts are modeled at a high level, with preconditions and effects represented only to the degree necessary to understand the communication. The speech act representation is defined more fully in Ward (1992). Note that this is not a complete list of the acts needed to explain all ATC dialogue; it represents only the acts needed to explain typical dialogues seen in the ILS approach task.

Dialogue Rules

ATC dialogue exhibits a strong task orientation that is reflected in the structure of the dialogue. Intentions to perform some act are assumed to be motivated by task considerations, while the intentions to respond to others' acts are motivated primarily by mutuality considerations. The rules defined for the approach task dialogue are summarized in Table 1. Nine rules in four categories were needed to account for the four dialogues studied. Mutuality rules establish and update agents' beliefs about the mutuality of knowledge. Included in this category are rules to translate an agent's perception of some act into a belief that the act occurred. These represent a rudimentary first step toward act recognition and interpretation. Expectation rules capture expectations about an agent's response to another agent's acts. Like planning rules, they build expectations about agents' intended actions. Where planning rules seek to capture the procedures of complex domain tasks, however, expectation rules attempt to embody more basic rules of behavior in this domain: that agents acknowledge acts directed toward them; that agents follow instructions if able to do so; that agents confirm information that is reported to them. Planning rules attempt to capture the procedural aspects of the domain's communications-oriented tasks, thus representing a rudimentary planning function. Performative rules translate agents' intentions into acts.
This category currently includes only a single, very general rule, Perform Intentions; it embodies the principle that agents carry out their intentions if possible. It locates an agent's unfulfilled intention to perform an act, checks that the preconditions for performing that act are satisfied, then does it.

Simulating ATC Dialogue

To validate the mutuality model, we built a working implementation and tested it in saso (Novick 1990), a rule-based shell developed in Prolog as a tool for modeling multi-agent problems involving simultaneous actions and subjective belief. The conversants are represented by computational agents (rules implemented as saso operators) that communicate using the acts defined in the model. Saso uses forward-chaining control to simulate the parallel execution of multiple rule-based agents with separate belief spaces. A user-defined set of extended STRIPS-style operators is used to specify the behavior of agents in terms of preconditions and effects. Agents communicate through acts; when an agent performs an act, clauses representing the action are posted to the memories of all agents in the system. The conflict resolution strategy was defined in terms of the rule types: mutuality rules are applied in preference to expectation rules, expectation rules in preference to planning rules, and performative rules are triggered only when no other rules apply. This conflict resolution strategy reflects the idea that agents first update beliefs about their knowledge, then form expectations about the future behavior of agents, then plan their own actions, and finally act. For each dialogue, the inputs to the simulation were an initial state for each agent, a list of external non-conversation domain events that each agent would encounter, and a set of rules representing knowledge of mutuality and the domain task. The outputs were speech acts and changes in the agents' beliefs. The simulations were allowed to run to completion.
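The category-priority conflict resolution might be caricatured as follows. The rule and state structures here are invented for illustration and are far simpler than the Prolog operators of the actual saso implementation:

```python
PRIORITY = ['mutuality', 'expectation', 'planning', 'performative']

def step(agent, rules):
    """Fire the first applicable rule of the highest-priority category:
    update beliefs before forming expectations, planning, or acting."""
    for category in PRIORITY:
        for rule in rules.get(category, []):
            if rule['condition'](agent):
                rule['action'](agent)
                return category
    return None

# Toy agent state and two illustrative rules (hypothetical names and structure).
agent = {'percepts': ['contact'], 'beliefs': [], 'intentions': ['acknowledge']}
rules = {
    'mutuality':    [{'condition': lambda a: a['percepts'],
                      'action':    lambda a: a['beliefs'].append(a['percepts'].pop())}],
    'performative': [{'condition': lambda a: a['intentions'],
                      'action':    lambda a: a['intentions'].pop()}],
}
```

With this state, the first call to `step` fires the mutuality rule (turning the percept into a belief) even though a performative rule is also applicable; only once no higher-priority rule applies does the agent act on its intention.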
The simulations were validated by comparing their speech-act outputs on an act-for-act basis with a speech-act coding of the original ATC dialogues. To determine effects of overhearing, the internal belief states of the agents were compared with a manual analysis of mutuality of beliefs in the original dialogues. Our first concern was to validate the basic model and rule set by simulating a single dialogue. The model's ability to track the belief states of overhearers was then tested by enlarging the simulation to include an agent who listened but did not participate in the conversation. Next, the model was tested against each of the other three dialogues from the protocol. Finally, the model was tested against the four dialogues as they were actually interleaved in the corpus.

Table 1: Dialogue Rules for the Approach Task

Mutuality Rules
- Act to Belief. Purpose: perceive acts. Effect: adds belief that act occurred.
- Mark Belief Mutual. Purpose: form beliefs about the mutuality of information. Effect: updates mutuality of information.
- Mark Act Mutual. Purpose: form beliefs about the mutuality of acts. Effect: updates mutuality of belief that act occurred.

Expectation Rules
- Expect Acknowledge Act. Purpose: expect agents to acknowledge acts directed toward them. Effect: adds expectation that agent will acknowledge act.
- Expect Follow Instructions. Purpose: expect agents to follow instructions. Effect: adds expectation that agent will follow instructions.
- Expect Confirm Report. Purpose: expect agents to confirm information that is reported to them. Effect: adds expectation that agents will confirm report.

Planning Rules
- Plan Initial Contact. Purpose: breaks task into a series of subgoals. Effect: pilot intends to contact controller.
- Plan Approach Clearance. Purpose: breaks task into a series of subgoals. Effect: controller intends to authorize pilot to fly approach procedure.

Performative Rules
- Perform Intentions. Purpose: agents attempt to carry out their intentions. Effect: agent performs act.
Results

The simulations ran to completion and correctly tracked the belief states of conversants and overhearers. With the exceptions noted below, the speech acts produced in the simulation corresponded on an act-for-act basis with the baseline codings. Agents' beliefs about the current state of all ongoing conversations were maintained, with agents responding appropriately to acts directed toward them and to events affecting them. Using the same rules as the active conversants, the overhearing agents "recognized" acts, formed expectations about the next actions of the conversants, and made judgments about the mutuality of knowledge and the evolving conversational state of the conversants. Some aspects of the original dialogues were not reproduced in the simulations. As a consequence of the decision to avoid detailed event simulation, the model does not currently support directives that are not intended to be carried out immediately, such as an instruction to contact another controller when the aircraft reaches a certain point in its route. Such instructions were simulated as if they were to be carried out immediately. Also, the current rule set does not allow for an agent's inability to perform some action, e.g., a pilot's inability to see another aircraft; this aspect of the test dialogues was not reproduced. The model does not currently represent ranges of conditions, so ranges were instead represented as single values. Because saso simulates perfect understanding by conversants, correction subdialogues were not investigated in these simulations. The model addresses only one aspect of the approach controller's responsibilities; it should be extended with substantial domain-task information to encompass a significant fraction of ATC dialogue. Also, there is clearly a need for many rules that weren't required for simulating the particular dialogues in this corpus, such as rules for reporting that an agent does not intend to comply with a direction.
Another limitation to the generality of this model is the lack of a domain-specific planner. In the present study, this detailed planning function is simulated by introducing agents' intentions through saso's event queue. Although a detailed planner on the level of Wesson's (1977) expert-level simulation of controller functions is probably not required for speech understanding, a certain level of knowledge of the expected outcome of domain events is needed to motivate certain aspects of ATC dialogue.

Conclusion

This work is a step toward developing a computational representation of multiparty mutuality, which we have shown to be adequate for typical air traffic control dialogues. Complete conversations were computationally simulated at the speech act level, demonstrating that real-world collaborative conversations can be motivated and represented in terms of belief and expectation, task and event. The goal of this study was to develop a computational model of air traffic control dialogue that would:
- Support reasoning about the beliefs and intentions of the conversants.
- Model agents' beliefs about other agents' beliefs.
- Capture sufficient domain and physical context to model the exchanges, particularly the evolving context of the dialogue itself.
- Support multiple (more than two) conversants, including overhearers.
- Permit different agents to hold different, possibly inconsistent, beliefs, and to support reasoning about the mutuality or inconsistency of conversants' beliefs.

We tested the model against these goals by simulation. Using a small set of rules, several actual dialogues were successfully simulated at the speech act level. The belief states of multiple conversational participants and overhearers were modeled through the course of several overlapping dialogues.
Agents' beliefs were updated in response to the evolving conversational context as agents negotiated their tasks while maintaining a situational awareness of other agents' activities. In modeling complete ATC dialogues in terms of the beliefs and goals of the conversants, typical patterns emerged. Conversants separately form similar beliefs based on separate inputs, then attempt to confirm the mutuality of the belief. As utterances are made and acknowledged, speaker and hearer form beliefs about what speech acts are intended by the utterance and form expectations that the act will be acknowledged and confirmed. These expectations form a strong context that permits the conversants to easily recognize the speech acts motivating nonstandard responses that might otherwise be ambiguous.

A key contribution of this model is the representation of mutuality of belief in terms of belief sets. Belief sets capture both an agent's understanding of who believes a given piece of information and the mutuality that the agents holding that belief have established among themselves. This representation allows an agent to hold beliefs about other agents' possibly conflicting beliefs ("I believe A but he believes B.") as well as allowing agents to hold beliefs about the mutuality of their knowledge ("She and I have established that we both believe A; he should also believe A, but we have not yet confirmed that."). The belief set representation is flexible enough to represent and reason about mutuality combinations involving any number of agents, thus supporting the modeling of multi-agent conversations.

In this paper, then, we have proposed a representation for mutuality of belief that supports reasoning and action by multiple agents. We have shown the utility of the belief-groups representation through its use in a simulation of ATC dialogues.
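The belief-set idea can be sketched as a small data structure. The class and method names below are ours, not the paper's, and this is only a minimal illustration of the distinction between "who believes p" and "which groups have confirmed the mutuality of p":

```python
from dataclasses import dataclass, field

@dataclass
class BeliefSet:
    """One agent's record of a proposition: who is thought to hold it,
    and which subgroups have established its mutuality among themselves."""
    believers: set = field(default_factory=set)      # agents thought to hold the belief
    mutual_among: set = field(default_factory=set)   # frozensets of agents with confirmed mutuality

class AgentModel:
    """Hypothetical per-agent store mapping propositions to belief sets."""
    def __init__(self, name):
        self.name = name
        self.beliefs = {}   # proposition -> BeliefSet

    def note_belief(self, proposition, agent):
        # Record that `agent` is believed to hold `proposition` (mutuality unconfirmed).
        self.beliefs.setdefault(proposition, BeliefSet()).believers.add(agent)

    def confirm_mutual(self, proposition, group):
        # Record that `group` has mutually established `proposition`.
        bs = self.beliefs.setdefault(proposition, BeliefSet())
        bs.believers |= set(group)
        bs.mutual_among.add(frozenset(group))

    def is_mutual(self, proposition, group):
        bs = self.beliefs.get(proposition)
        return bs is not None and frozenset(group) in bs.mutual_among

# "She and I have established that we both believe A; he should also
# believe A, but we have not yet confirmed that."
me = AgentModel("controller")
me.confirm_mutual("A", {"controller", "pilot1"})
me.note_belief("A", "pilot2")
print(me.is_mutual("A", {"controller", "pilot1"}))  # True
print(me.is_mutual("A", {"controller", "pilot2"}))  # False
```

Storing mutuality as frozensets of agents is one way to let a single agent reason about arbitrary combinations of conversants and overhearers.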
As part of the simulation, we developed a set of rules that maintain mutuality in this domain and that are general enough to support straightforward extension to other domains involving cooperative tasks.

Acknowledgments

The authors gratefully acknowledge the assistance of Larry Porter and Caroline Sousa.

References

Allen, J. F. 1991. Discourse Structure in the TRAINS Project. In Proceedings of the Fourth DARPA Workshop on Speech and Natural Language.
Austin, J. L. 1962. How To Do Things With Words. London: Oxford University Press.
Ballim, A.; Wilks, Y.; and Barnden, J. 1991. Belief Ascription, Metaphor, and Intensional Identification. Cognitive Science 15:133-171.
Clark, H. H., and Marshall, C. R. 1981. Definite Reference and Mutual Knowledge. In I. A. Sag (Ed.), Elements of Discourse Understanding. Cambridge: Cambridge University Press.
Clark, H. H., and Wilkes-Gibbs, D. 1986. Referring as a Collaborative Process. Cognition 22:1-39.
Clark, H. H., and Schaefer, E. F. 1989. Contributing to Discourse. Cognitive Science 13:259-294.
Cohen, P. R., and Perrault, C. R. 1979. Elements of a Plan-based Theory of Speech Acts. Cognitive Science 3(3):177-212.
Cohen, P. R. 1984. The Pragmatics of Referring and the Modality of Communication. Computational Linguistics 10(2):97-146.
Dunham, S. 1991. Personal communication.
Federal Aviation Administration 1989. Air Traffic Control 7110.65f.
Federal Aviation Administration 1991. Airman's Information Manual.
Findler, N., and Lo, R. 1988. An Examination of Distributed Planning in the World of Air Traffic Control. In A. Bond and L. Gasser (eds.), Readings in Distributed Artificial Intelligence. San Mateo, CA: Morgan Kaufmann.
Grosz, B. J., and Sidner, C. L. 1986. Attention, Intentions, and the Structure of Discourse. Computational Linguistics 12(3):175-204.
Litman, D. J., and Allen, J. F. 1987. A Plan Recognition Model for Subdialogues in Conversations. Cognitive Science 11:163-200.
Novick, D. G. 1988.
Control of Mixed-initiative Discourse Through Meta-locutionary Acts: A Computational Model, Technical Report No. CIS-TR-88-18, Department of Computer and Information Science, University of Oregon.
Novick, D. G. 1990. Modeling Belief and Action in a Multi-agent System. In B. Zeigler and J. Rozenblit (eds.), AI Simulation and Planning in High Autonomy Systems. Los Alamitos, CA: IEEE Computer Society Press.
Oviatt, S. L., and Cohen, P. R. 1988. Discourse Structure and Performance Efficiency in Interactive and Noninteractive Spoken Modalities. Technical Note 454, SRI International.
Power, R. 1979. The Organization of Purposeful Dialogues. Linguistics 17:107-152.
Rosenbaum, S. L. 1988. A User's View of the Air Traffic Control (ATC) System. Internal Memorandum 46321-881130-01.IM, AT&T Bell Laboratories.
Schober, M. F., and Clark, H. H. 1989. Understanding by Addressees and Overhearers. Cognitive Psychology 21:211-232.
Searle, J. R. 1969. Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University Press.
Searle, J. R. 1975. Indirect Speech Acts. In J. L. Morgan (ed.), Syntax and Semantics, Volume 3: Speech Acts. New York: Academic Press.
Searle, J. R., and Vanderveken, D. 1985. Foundations of Illocutionary Logic. Cambridge: Cambridge University Press.
Suchman, L. A. 1987. Plans and Situated Actions. Cambridge: Cambridge University Press.
Traum, D. R. 1991. Towards a Computational Theory of Grounding in Natural Language Conversation, Technical Report No. 401, Computer Science Department, University of Rochester.
Traum, D., and Hinkelman, E. 1992. Conversation Acts in Task-Oriented Spoken Dialogue, Technical Report 425, Department of Computer Science, University of Rochester. (To appear in Computational Intelligence, 8(3).)
Ward, K.; Novick, D. G.; and Sousa, C. 1990.
Air Traffic Control Communications at Portland International Airport, Technical Report CS/E 90-025, Department of Computer Science and Engineering, Oregon Graduate Institute.
Ward, K. 1992. A Speech Act Model of Air Traffic Control Dialogue. M.S. thesis, Department of Computer Science and Engineering, Oregon Graduate Institute.
Wesson, R. B. 1977. Problem-solving with Simulation in the World of an Air Traffic Controller. Ph.D. diss., University of Texas at Austin.
Winograd, T., and Flores, F. 1986. Understanding Computers and Cognition. Norwood, NJ: Ablex.

Discourse Analysis 201
Ingrid Zukerman and Richard McConachy
Department of Computer Science
Monash University
Clayton, Victoria 3168, AUSTRALIA
{ingrid,ricky}@bruce.cs.monash.edu.au

Abstract

In recent times, there has been an increase in the number of Natural Language Generation systems that take into consideration a user's inferences. The statements generated by these systems are typically connected by inferential links, which are opportunistic in nature. In this paper, we describe a discourse structuring mechanism which organizes inferentially linked statements as well as statements connected by certain prescriptive links. Our mechanism first extracts relations and constraints from the output of a discourse planner. It then uses this information to build a directed graph whose nodes are rhetorical devices, and whose links are the relations between these devices. The mechanism then applies a search procedure to optimize the traversal through the graph. This process generates an ordered set of linear discourse sequences, where the elements of each sequence are maximally connected. Our mechanism has been implemented as the discourse organization component of a system called WISHFUL which generates concept explanations.

Introduction

Consideration of the inferences an addressee is likely to make from discourse is an essential part of discourse planning. In recent times, there has been an increase in the number of Natural Language Generation (NLG) systems which address the inferences a user is likely to make from the information presented by these systems, e.g., [Joshi et al. 1984; Zukerman 1990; Cawsey 1991; Horacek 1991; Lascarides & Oberlander 1992; Zukerman & McConachy 1993].

A system that addresses a user's possible inferences poses a new set of problems for the discourse structuring component of the system. Consider, for example, the following discourse:

1 The first step in Bracket Simplification is addition or subtraction.
2 For example, 2(3x + 5x) = 2 × 8x.
3 Indeed, Bracket Simplification applies to Like Terms.
4 In addition, as you know, it applies to Numbers.
5 However, it does not always apply to Algebraic Terms.
6 For instance, you cannot add the terms in brackets in 3(2x + 7y).

*This research was supported in part by grant A49030462 from the Australian Research Council.

This discourse features inferential relations in lines 2-3, 3-4 and 4-5 (signaled by italicized conjunctions). The sentence in line 3 realizes a generalization from the example in line 2, the sentence in line 4 expands on the information in line 3, and the sentence in line 5 violates an expectation established in line 4.

The two main methods for text organization considered to date are the schema-based approach, e.g., [Weiner 1980; McKeown 1985; Paris 1988], and the goal-based approach, e.g., [Hovy 1988; Moore & Swartout 1989; Cawsey 1990]. Both of these methods are designed to accomplish a single discourse goal. However, inferential relations are opportunistic rather than prescriptive, and therefore cannot be easily cast as contributing to a single communicative goal. Hence, these approaches are ill equipped to cope adequately with inferential links.

In this paper, we present a mechanism which organizes inferentially linked information into maximally connected discourse. This mechanism also copes with prescriptive discourse relations between the intended information and the prerequisite information that is needed to understand the intended information. Our mechanism has been implemented as a component of a system called WISHFUL which generates concept explanations [Zukerman & McConachy 1993].

In the following section, we discuss previous research in discourse structuring. Next, we outline the operation of our discourse planner as background to the description of our discourse structuring mechanism. We then discuss our results and present concluding remarks.

Related Research

The schema-based approach was introduced in [Weiner 1980].
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

It was later formalized in [McKeown 1985] and expanded in [Paris 1988]. This approach consists of compiling rhetorical predicates into a schema or template which reflects normal patterns of discourse. Since schemas represent compiled knowledge, they are computationally efficient. However, they do not cope well with the need to exhibit dynamic and adaptive behaviour. This shortcoming is overcome by the goal-based approach.

The two main techniques which represent the goal-based approach are described in [Hovy 1988; Hovy & McCoy 1989] and in [Moore & Swartout 1989]. Both techniques involve converting discourse relations identified in Rhetorical Structure Theory (RST) [Mann & Thompson 1987] into discourse plan operators, and then applying a hierarchical planner [Sacerdoti 1977] to produce a discourse plan. This plan is a tree whose leaves are propositions and whose non-leaf nodes are relations between propositions. Moore's mechanism takes as input a communicative goal, and uses discourse plan operators both to decide what to say and to organize the discourse. Hovy's structurer, on the other hand, is given a set of propositions to be communicated as well as one or more communicative goals. [Hovy & McCoy 1989] later combined Hovy's discourse structurer with Discourse Focus Trees proposed in [McCoy & Cheng 1991] in order to enhance the coherence and flexibility of the generated discourse.

The goal-based approach is particularly suitable for situations where a communicative goal may be achieved by whatever means are available, e.g., convincing a user to do something [Moore & Swartout 1989]. However, when the objective is to convey information about a concept, e.g., teach Distributive Law, this approach may omit information that does not fit in the proposed rhetorical structure.
For instance, the system described in [Hovy 1988] tries to include as much information as possible in a generated RST tree, but leaves out information that does not fit. The system described in [Cawsey 1990] includes only certain types of information in the discourse operators, and therefore, other relevant information may never be mentioned.

A different approach was taken in [Mann & Moore 1981], where discourse organization is viewed as a problem solving task whose objective is to satisfy some optimality criterion. They implemented a hill-climbing procedure which iteratively selects the best pairwise combination of an available set of protosentences. Due to the use of the hill-climbing function, this approach produces locally optimal discourse. In this research, we also view discourse organization as a problem solving task, but we generate discourse which globally satisfies our optimality criterion.

Finally, [Mooney et al. 1991] and [Cerbah 1992] consider the discourse structuring problem at a different level. [Mooney et al. 1991] generate extended discourse by first applying a bottom-up strategy to partition a large number of information items into groups, and then applying a goal-based technique to structure the discourse in each group. [Cerbah 1992] uses global discourse strategies, such as parallel-explanation and concession, to guide the organization of discourse relations in order to generate discourse that achieves a desired overall effect. An interesting avenue of investigation is the adaptation of the mechanism presented in this paper as a component of these systems.

Operation of the Discourse Planner

Our discourse planner receives as input a concept to be conveyed to a hearer, e.g., Bracket Simplification; a list of aspects that must be conveyed about this concept, e.g., operation and domain; and an attitude, which determines a desired level of expertise.
It generates a set of Rhetorical Devices (RDs), where an RD is composed of a rhetorical action, such as Assert or Instantiate, applied to a proposition. To this effect, it performs the following steps.

Step 1: WISHFUL first consults a model of the user's beliefs in order to determine which propositions must be presented to convey the given aspects. This step selects for presentation propositions about which the user has misconceptions, and propositions that are believed by the user but not to the extent demanded by the given attitude. Table 1 contains the propositions selected to convey the operation and domain of Bracket Simplification.

p3: [Bracket-Simplification apply-to Like-Terms]
Table 1: Propositions to be Conveyed

Step 2: Next, WISHFUL applies inference rules in backward reasoning mode in order to generate alternative RDs that can be used to convey each proposition. It then applies inference rules on these RDs in forward reasoning mode in order to conjecture which other propositions are indirectly affected by these RDs. If propositions that are currently believed by the user are adversely affected by inferences from the proposed RDs, they will be added to the set of propositions to be conveyed, e.g., proposition p4 in Table 2.

Step 3: In this step, the generation process is applied recursively with a revised attitude and new aspects for each of the concepts mentioned in each of the alternative sets of RDs generated in Step 2. This is necessary, since it is possible that the hearer does not understand the concepts mentioned in a particular set of RDs well enough to understand this set. This process generates subordinate sets of RDs, each of which is an alternative way of conveying a concept that was not sufficiently understood by the hearer.
Negate+Instantiate+ (N+I+) {3(2x + 7y)} p5: [Bracket-Simplification always apply-to Algebraic-Terms]
Table 2: The Set of RDs Selected by WISHFUL

Figure 1: A 2-layer RD-Graph

Step 4: For each concept used in each alternative set of RDs, WISHFUL now generates a set of RDs that evokes this concept, if the user model indicates that the user and the system do not have a common terminology for it¹. To ensure that the available discourse options are not constrained unnecessarily, Evocative RDs are generated before the organization of the discourse. Further, they are used to generate constraints for the discourse organization process. For instance, consider a situation where the only possible evocation of the concept Like-Terms is "a kind of Algebraic Expression where all the variables are the same." Now, if the organization procedure had been applied before the evocation step, it could have arbitrarily determined that Algebraic-Expressions should be introduced long after Like-Terms. In this case, the resulting discourse would be awkward at best. This situation is avoided by constraining Algebraic-Expressions to appear either before or immediately after Like-Terms. The generation of access referring expressions, on the other hand, must be performed after the organization of the discourse, since decisions regarding pronominalization depend on the structure of the discourse.

Step 5: Owing to the interactions between the inferences from the RDs in each set of RDs generated so far, it is possible that some of the proposed RDs are no longer necessary. In order to remove the redundant RDs, WISHFUL applies an optimization process to each set of RDs. It then selects the set with the least number of RDs among the resulting sets.

The output of the discourse planner is an RD-Graph, which is a directed graph that contains the following components: (1) the set of propositions to be conveyed (p1, ..., pn in Figure 1); (2) the selected set of RDs (RD1, ..., RDm, {RDm+1}, ..., {RDm+p}); (3) the inferential relations between the RDs and the propositions (labelled wi,j); and (4) the prescriptive relations between the sets of RDs that generate prerequisite and evocative information and the RDs that are connected to the propositions (labelled vm+k,j). The inferential relations are generated in Step 2 above. The weight wi,j contains information about the effect of RDi on the user's belief in proposition pj. The prerequisite information is generated in Step 3, and the evocative information in Step 4.

Table 2 contains the set of RDs generated by WISHFUL for the input in Table 1. The rhetorical action Mention indicates that the user is familiar with the proposition in question. Instantiate+ stands for an Instantiation annotated with a short explanation, such as that in line 6 in the sample discourse in the Introduction. The algebraic expressions 2(3x + 5x) and 3(2x + 7y) in the Instantiations are the objects on which the corresponding propositions are instantiated.

¹Evocation pertains to the first time a concept is mentioned in a piece of discourse, as opposed to access, which pertains to subsequent references to this concept [Webber 1983].

Operation of the Discourse Structurer

Our discourse structuring mechanism generates an optimal ordering of the RDs in the RD-Graph generated by the previous steps of WISHFUL. Our optimality criterion is maximum connectivity, which stipulates that the generated discourse should include the strongest possible relations between the RDs in the graph. Our procedure first uses the relations in the RD-Graph to derive constraints and relations that affect the order of the generated RDs. The constraints are strict injunctions regarding the relative ordering of these RDs, while the relations are suggestions regarding the manner in which the RDs should be connected.
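The two-layer RD-Graph described above might be encoded as follows. The field names and the weight-label strings are our own shorthand for the notation of Figures 1 and 2 (e.g., "+2a" for a positive, acceptable inference of directness 2), not an interface the paper defines:

```python
from dataclasses import dataclass, field

@dataclass
class RDGraph:
    """Hypothetical encoding of the planner's two-layer RD-Graph."""
    propositions: list           # p1, ..., pn
    rds: list                    # RD1, ..., RDm (plus evocative/prerequisite sets)
    w: dict = field(default_factory=dict)  # (rd, prop) -> inferential weight label w[i,j]
    v: dict = field(default_factory=dict)  # (rd_set, rd) -> prescriptive link v[m+k,j]

g = RDGraph(
    propositions=["p1", "p3", "p4", "p5"],
    rds=["RD1", "RD2", "RD3", "RD4"],
)
g.w[("RD1", "p1")] = "+0s"   # direct (directness 0), sound, positive -- illustrative label
g.w[("RD1", "p3")] = "+2a"   # indirect (directness 2), acceptable, positive (as in Figure 2)
print(g.w[("RD1", "p3")])    # +2a
```

Keyed dictionaries for w and v keep the inferential and prescriptive layers separate, mirroring the left-hand and right-hand layers of Figure 1.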
These constraints and relations are then represented as a Constraint-Relation Graph, which is a directed graph whose nodes are RDs and whose links are relations and constraints. Finally, we apply a search procedure which finds the optimal traversal through the graph, i.e., the traversal which uses the strongest links and violates no constraints.

Extracting Constraints and Relations

The constraints extracted by our mechanism are BEFORE and IMMEDIATELY-AFTER. They are obtained directly from the prescriptive links in the RD-Graph (the links in the right-hand layer of the graph in Figure 1) by applying the following rule:

If ∃ a link between {RDm+k} and RDj (vm+k,j ≠ 0)
Then BEFORE({RDm+k}, RDj) or IMMEDIATELY-AFTER({RDm+k}, RDj).

These constraints stipulate that a set of RDs that is used to evoke or explain a concept must be presented in the discourse either at any time before this concept is presented or immediately after it.

The relations extracted by our mechanism are CAUSE, REALIZE, ADD and VIOLATE. The first three relations represent corroborating information, where the causal relation is the strongest, and the additive relation the weakest. The fourth relation represents conflicting information. In order to derive these relations, the system first obtains support and soundness information from the weights wi,j of the inferential links in the RD-Graph (the links in the left-hand layer of the graph in Figure 1).

Support indicates whether an inference from an RD supports or contradicts a proposition. Inferences that support a proposition are positive (+), while inferences that contradict it are negative (−). Soundness indicates the level of soundness of an inference from an RD. We distinguish between three types of positive inferences based on the soundness of the inference rules that yield these inferences: sound (s), acceptable (a) and unacceptable (u).
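The extraction rule above can be sketched directly; representing the prescriptive links as a dictionary keyed by (RD set, RD) pairs is our assumption, not the paper's:

```python
def extract_constraints(v_links):
    """For every nonzero prescriptive link v[m+k,j], emit the disjunctive
    ordering constraint BEFORE({RDm+k}, RDj) or IMMEDIATELY-AFTER({RDm+k}, RDj).
    v_links: dict mapping (rd_set_name, rd_name) -> link weight."""
    constraints = []
    for (rd_set, rd), weight in v_links.items():
        if weight != 0:
            constraints.append(("BEFORE-or-IMMEDIATELY-AFTER", rd_set, rd))
    return constraints

# Hypothetical prescriptive layer: {RD5} evokes a concept used by RD2;
# the zero-weight link to RD3 yields no constraint.
v = {("{RD5}", "RD2"): 1, ("{RD6}", "RD3"): 0}
print(extract_constraints(v))   # only the nonzero link yields a constraint
```

The disjunction is kept as a single tagged tuple here; in the paper it is resolved later, when the subordinate graph is attached by a BEFORE/IMMEDIATELY-AFTER hyper-link.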
Negative inferences are not affected by this distinction, since the manner in which they are addressed is not influenced by their soundness.

Figure 2: RD-Graph for the Selected Set of RDs
  A+I  RD1 → p1 [Bracket-Simplification step-1 +/-]
  A    RD2 → p3 [Bracket-Simplification apply-to Like-Terms]
  M    RD3 → p4 [Bracket-Simplification apply-to Numbers]
  N+I+ RD4 → p5 [Bracket-Simplification always apply-to Algebraic-Terms]

Sound inferences are logically sound, e.g., a specialization from a positive statement or a generalization from a negative statement. Acceptable inferences are sensible, and their results hold often, e.g., a generalization from an instance to a class, a generalization from a positive statement, or a specialization from a negative statement. Unacceptable inferences hold only occasionally, and hence should not be reinforced, e.g., inferences based on the superficial similarity of two items.

In addition, the discourse structurer requires directness information, which conveys the length of the inference chain which infers a proposition from an RD. A directness of level 0 corresponds to a direct inference, level 1 corresponds to inferences drawn from the application of one inference rule such as generalization or specialization, level 2 corresponds to the combination of two inference rules, etc. Directness reflects the intentionality of the discourse, since direct inferences are usually the ones intended by the speaker. Hence, an RD that conveys a proposition by means of a direct inference always has a positive support for this proposition². Directness information is obtained directly from the inference rules used by the system.

Figure 2 depicts support, soundness and directness information for the RD-Graph which corresponds to the set of RDs in Table 2. For instance, the label +2a represents an acceptable inference of positive support and directness 2.
The relations between the RDs are derived from these factors by means of the procedure Get-Inferential-Relations. For each proposition, the algorithm builds a set of binary relations of the form Rel(RDi, DirRD). Each binary relation contains one RD that conveys this proposition directly (DirRD), and another that affects it indirectly (RDi). As stated above, the possible values of Rel considered by our mechanism are: VIOLATE, CAUSE, REALIZE and ADD.

The relation VIOLATE is obtained first from the RDs that affect a proposition indirectly with a negative support, i.e., DirRD is at odds with each of these RDs. The remaining RDs, which have a positive support, corroborate DirRD. They are divided into those from which the proposition is derived by means of a sound inference, those from which the proposition is inferred by an acceptable inference, and those which yield the proposition through an unacceptable inference. These RDs are related to DirRD by means of the relations CAUSE, REALIZE and ADD, respectively. Table 3 contains the binary relations generated by our algorithm for the RD-Graph in Figure 2.

Procedure Get-Inferential-Relations(RD-Graph)
For each proposition p ∈ RD-Graph do:
1. DirRD ← the RD from which p is inferred directly.
2. IndRD ← the RDs from which p is inferred indirectly.
3. If IndRD ≠ ∅ and DirRD ≠ ∅ Then generate
   {VIOLATE(RDi, DirRD) | RDi affects p with a negative inference}
   {CAUSE(RDi, DirRD) | RDi affects p with a sound and positive inference}
   {REALIZE(RDi, DirRD) | RDi affects p with an acceptable and positive inference}
   {ADD(RDi, DirRD) | RDi affects p with an unacceptable and positive inference}

²The Negation of proposition p has a positive support for the intended proposition ¬p.

Building the Constraint-Relation Graph

After the ordering constraints and relations have been extracted from the RD-Graph, they are combined in order to generate the Constraint-Relation Graph used in the next step of the discourse organization process.
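Get-Inferential-Relations can be rendered in a few lines of Python. The encoding of each inference as a (RD, support, soundness, directness) tuple is our own; the paper's system is implemented in Lisp and is not reproduced here:

```python
def get_inferential_relations(inferences):
    """Sketch of Get-Inferential-Relations. `inferences` maps each proposition
    to (rd, support, soundness, directness) records, where support is '+'/'-',
    soundness is 's'/'a'/'u', and directness 0 marks a direct inference."""
    relations = []
    for prop, records in inferences.items():
        direct = [rd for rd, sup, snd, d in records if d == 0]
        indirect = [(rd, sup, snd) for rd, sup, snd, d in records if d > 0]
        if not direct or not indirect:
            continue                      # rule fires only when both sets are non-empty
        dir_rd = direct[0]
        for rd, sup, snd in indirect:
            if sup == '-':
                relations.append(("VIOLATE", rd, dir_rd))   # negative support
            elif snd == 's':
                relations.append(("CAUSE", rd, dir_rd))     # sound, positive
            elif snd == 'a':
                relations.append(("REALIZE", rd, dir_rd))   # acceptable, positive
            else:
                relations.append(("ADD", rd, dir_rd))       # unacceptable, positive
    return relations

# Proposition p3 from Figure 2: RD2 conveys it directly; RD1 supports it
# via an acceptable indirect inference (+2a); RD4 contradicts it.
inferences = {"p3": [("RD2", "+", "s", 0), ("RD1", "+", "a", 2), ("RD4", "-", None, 1)]}
print(get_inferential_relations(inferences))
# [('REALIZE', 'RD1', 'RD2'), ('VIOLATE', 'RD4', 'RD2')]
```

Note that soundness is consulted only for positive inferences, matching the remark that the treatment of negative inferences is not influenced by their soundness.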
This is done by iteratively adding each constraint and relation to a graph that starts off empty, without disrupting the links built previously. In order to support a numerical optimization process, the links in the Constraint-Relation Graph are assigned weights. Constraints (BEFORE and IMMEDIATELY-AFTER) are assigned a weight of ∞, since constraints must never be violated. Relations are assigned weights according to their support and soundness as follows: CAUSE 4, REALIZE 2, VIOLATE 2 and ADD 1. Figure 3 illustrates the Constraint-Relation Graph built from the relations in Table 3.

Figure 3: The Constraint-Relation Graph

Generating the Optimal Traversal

The procedure for generating the optimal traversal of the Constraint-Relation Graph consists of three stages: (1) path extraction, (2) filtering, and (3) optimization.

The path extraction stage generates all the terminal paths starting from each node in the Constraint-Relation Graph, where a terminal path is one that continues until a dead-end is reached. For instance, node RD1 in Figure 3 has two alternative terminal paths: (1) RD1 - REALIZE - RD2 - VIOLATE - RD4 - VIOLATE - RD3, and (2) RD1 - REALIZE - RD2 - ADD - RD3 - VIOLATE - RD4.

Table 3: Relations Extracted from the RD-Graph
  p3 (DirRD = RD2): REALIZE(RD1,RD2), ADD(RD3,RD2), VIOLATE(RD4,RD2)
  p4 (DirRD = RD3): ADD(RD2,RD3), VIOLATE(RD4,RD3)
  p5 (DirRD = RD4): VIOLATE(RD2,RD4), VIOLATE(RD3,RD4)

The filtering stage deletes redundant and irregular paths, where a path is redundant if there exists another path which subsumes it; and a path from node RDi to node RDj is irregular if it contains consecutive VIOLATE links, and there exists another path from node RDi to node RDj that is composed of positive links only. For example, the path RD2 - ADD - RD3 - VIOLATE - RD4 is redundant, since it is subsumed by path (2) above.
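Under our reading of the link structure in Figure 3, path extraction is a depth-first enumeration of non-revisiting walks that end at a dead-end. The adjacency encoding below is an assumption; it reproduces the two example terminal paths from node RD1:

```python
def terminal_paths(graph, node, visited=None, acc=None):
    """Enumerate terminal paths: follow links until a dead-end is reached,
    never revisiting a node. graph: node -> list of (label, successor)."""
    visited = visited or {node}
    acc = acc or [node]
    nexts = [(lbl, nxt) for lbl, nxt in graph.get(node, []) if nxt not in visited]
    if not nexts:
        return [acc]                      # dead-end: this walk is a terminal path
    out = []
    for lbl, nxt in nexts:
        out += terminal_paths(graph, nxt, visited | {nxt}, acc + [lbl, nxt])
    return out

# Our reading of Figure 3's links (weights omitted; VIOLATE is symmetric
# between RD3 and RD4, so both directed links are listed).
graph = {
    "RD1": [("REALIZE", "RD2")],
    "RD2": [("VIOLATE", "RD4"), ("ADD", "RD3")],
    "RD3": [("VIOLATE", "RD4")],
    "RD4": [("VIOLATE", "RD3")],
}
for p in terminal_paths(graph, "RD1"):
    print(" - ".join(p))
# RD1 - REALIZE - RD2 - VIOLATE - RD4 - VIOLATE - RD3
# RD1 - REALIZE - RD2 - ADD - RD3 - VIOLATE - RD4
```

The first of these is the irregular path discussed below (consecutive VIOLATE links with a positive RD2-to-RD3 alternative), and would be removed by the filtering stage.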
The first path above is irregular, since there is a positive link, namely ADD, between RD2 and RD3. The deletion of redundant paths cuts down the search, and the deletion of irregular paths prevents the generation of sentences of the form "RD2, but RD4. However RD3" if a sentence of the form "RD2 and RD3. However RD4" is possible.

The optimization stage consists of applying algorithm A* [Nilsson 1980], where the goal is to select an ordered set of terminal paths which covers all the nodes in the Constraint-Relation Graph, so that the sum of the weights of the links in these paths is maximal. The operators for expanding a node in the search graph are defined as follows: Operator Oi traces the terminal path pathi through the Constraint-Relation Graph, and removes from the graph the nodes along the traced route and the links incident upon these nodes. The application of Oi generates discourse that connects the RDs in pathi. After the application of an operator, the problem state consists of (1) the terminal paths removed so far from the Constraint-Relation Graph, and (2) the remaining part(s) of the Constraint-Relation Graph. The remaining parts of the graph must then be processed similarly until the graph is empty.

A* uses the evaluation function f(n) = g(n) + h(n) for each node n in the search graph, and terminates the search at the node with the highest value of f. In order to satisfy the admissibility conditions of A*, g and h are set to the following values:

g = Σ_{path ∈ Paths} Σ_{{RDi,RDj} ∈ path} weight(link_{RDi,RDj})

h = Σ_{RDi ∈ (CRG − Paths)} Weight_{RDi} − min_{RDi ∈ (CRG − Paths)} Weight_{RDi}

where Paths are the paths removed so far from the Constraint-Relation Graph CRG; weight(link_{RDi,RDj}) is the weight of the link which connects RDi and RDj; and Weight_{RDi} is the maximum of the weights of the links incident on RDi.
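The g and h functions follow directly from the formulas. The data encodings here are our assumptions: a removed path is an alternating node/label list, and the remaining graph is a map from each uncovered node to the labels of its incident links:

```python
WEIGHT = {"CAUSE": 4, "REALIZE": 2, "VIOLATE": 2, "ADD": 1}

def g(removed_paths):
    """Sum of link weights over the terminal paths removed so far.
    Labels sit at the odd positions of each alternating node/label list."""
    return sum(WEIGHT[lbl] for path in removed_paths for lbl in path[1::2])

def h(remaining_links):
    """Optimistic estimate for the remaining graph: the strongest link
    incident on each uncovered node, minus the weakest of those (n nodes
    need only n-1 connecting links)."""
    best = [max(WEIGHT[lbl] for lbl in labels) for labels in remaining_links.values()]
    return sum(best) - min(best) if best else 0

# One removed path and a hypothetical two-node remainder:
removed = [["RD1", "REALIZE", "RD2", "ADD", "RD3", "VIOLATE", "RD4"]]
print(g(removed))                            # 2 + 1 + 2 = 5
print(h({"RD5": ["ADD"], "RD6": ["ADD"]}))   # 1 + 1 - 1 = 1
```

Since h never underestimates the weight still obtainable from the remaining graph, the estimate keeps the maximizing search admissible, as the paper requires.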
The h function estimates the best possible outcome based on the remaining parts of the Constraint-Relation Graph. This outcome corresponds to the discourse that would result if the strongest link incident on each node could be used in the terminal path that covers the remaining graph. The weakest among these links is subtracted from the h function, since n−1 links are needed to connect n nodes.

The result of applying this procedure to the Constraint-Relation Graph in Figure 3 is the ordered discourse sequence RD1 - REALIZE - RD2 - ADD - RD3 - VIOLATE - RD4, which has a total weight of 2+1+2 = 5. This sequence yields the following output, which corresponds to the sample text in the Introduction.

Assert+Instantiate{2(3x + 5x)} [Bracket-Simplification step-1 +/-]
REALIZE Assert [Bracket-Simplification apply-to Like-Terms]
ADD Mention [Bracket-Simplification apply-to Numbers]
VIOLATE Negate+Instantiate+{3(2x + 7y)} [Bracket-Simplification always apply-to Algebraic-Terms]

Handling Constraints

Our mechanism also handles the constraints BEFORE and IMMEDIATELY-AFTER. Recall that these constraints involve a set of RDs which evokes or explains a singleton RD, e.g., BEFORE({RDm+k}, RDj). The discourse structurer extracts constraints and relations from the set {RDm+k} and builds a Constraint-Relation Graph as explained above. This graph is subordinate to the node RDj in the main graph, and it is linked to RDj by a BEFORE/IMMEDIATELY-AFTER hyper-link. The optimization process is applied separately to this graph, resulting in a connected sequence of RDs for the set {RDm+k}.

This sequence is treated as a single entity when the terminal paths are built for the main graph. When the BEFORE link is followed, this sequence yields an introductory segment that appears at some point before RDj. Alternatively, when the IMMEDIATELY-AFTER link is followed, it yields a subordinate clause. In this case, if the subordinate graph contains only a few RDs, the main path may continue after the subordinate clause.
For example, "Bracket Simplification applies to Like Terms, which are Algebraic Terms such as 3x + 5x. In addition, it applies to Numbers." However, if the subordinate graph is large, the terminal path must stop immediately after it in order to avoid an unwieldy tangential discussion.

Results

As stated above, the mechanism described in this paper is part of a system for the generation of concept explanations. This system is implemented in Sun Common Lisp on a SPARCstation 2. The example discussed throughout this paper takes approximately 4 CPU seconds to reach the stage shown in Table 2, and an additional second to produce the final ordered output sequence of rhetorical devices and relations. Since the discourse organization problem is exponential, the mechanism is slowed down by larger input patterns with many inter-relationships which produce large, highly connected Constraint-Relation Graphs. For example, it takes about twenty seconds to structure one sample input of twenty RDs.

The preliminary testing of our mechanism has been performed in the domains of algebra (14 examples) and zoology (7 examples). Our mechanism was also informally evaluated by showing hand-generated English renditions of its output to staff and tutors in the Department of Computer Science at Monash University. The general opinion of the interviewed staff was that the text was logically constructed. In addition, a comparison of the output of our mechanism with texts in prescribed textbooks showed that this output follows the layout of published instructional material.

Conclusion

We have offered a discourse structuring mechanism that organizes inferentially linked rhetorical devices as well as rhetorical devices linked by prerequisite relations.
Our mechanism extracts ordering constraints and inferential relations from the output of a discourse planner, and optimizes the ordering of the generated rhetorical devices based on the principle of maximum connectivity. The output of this mechanism captures sufficient rhetorical features to support continuous discourse.
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
Email: DECKER@CS.UMASS.EDU

Abstract

This paper presents a simple, fast coordination algorithm for the dynamic reorganization of agents in a distributed sensor network. Dynamic reorganization is a technique for adapting to the current local problem-solving situation that can both increase expected system performance and decrease the variance in performance. We compare our dynamic organization algorithm to a static algorithm with lower overhead. ‘One-shot’ refers to the fact that the algorithm only uses one meta-level communication action. The other theme of this paper is our methodology for analyzing complex control and coordination issues without resorting to a handful of single-instance examples. Using a general model that we have developed of distributed sensor network environments [Decker and Lesser, 1993a], we present probabilistic performance bounds for our algorithm given any number of agents in any environment that fits our assumptions. This model also allows us to predict exactly in what situations and environments the performance benefits of dynamic reorganization outweigh the overhead.

Introduction

The distributed sensor network (DSN) domain has been a fertile source of examples for the study of cooperative distributed problem solving [Carver and Lesser, 1991; Durfee et al., 1987; Lesser, 1991]. A key result of the early work in DSNs has been the demonstration of the advantages available to groups of agents that communicate about their current problem solving situation.
Algorithms for coordinating DSN agents can be divided into two classes on the basis of their communication patterns: static algorithms communicate only the results of tasks and no other information about the local state of problem solving; dynamic algorithms use meta-level communication about their local problem-solving states to adapt to a situation (examples of this include partial global planning [Durfee and Lesser, 1991] and many negotiation algorithms).

*This work was supported by ARPA under ONR contract N00014-92-J-1698, ONR contract N00014-92-J-1450, and NSF contract CDA 8922572. The content of the information does not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred.

This paper presents a simple one-shot dynamic algorithm for reorganizing agents’ areas of responsibility in response to a particular DSN problem-solving episode, and analyzes the agents’ resulting behaviors. The class of one-shot dynamic algorithms is interesting because it is the class of coordination algorithms with the lowest communication overhead (only one meta-level communication action) that still allows agents to adapt to a particular situation during problem solving. This low overhead allows dynamic algorithms to be used in environments where the higher costs of multiple meta-level communications and negotiation are not warranted. The particular algorithm presented here, called one-shot dynamic reorganization, allows agents to very quickly resolve to a new organization by limiting each agent’s area of responsibility to a rectangular shape. We will analyze the performance of the dynamic algorithm relative to a static one, the effect of some environmental assumptions such as the cost of communication, and the reduction of variance in performance caused by dynamic adaptation (which can be exploited by real-time scheduling algorithms [Decker et al., 1990; Garvey and Lesser, 1993]).
The model we will use for our analysis [Decker and Lesser, 1993a] grew out of the set of single-instance examples of distributed sensor network (DSN) problems presented in [Durfee et al., 1987]. The authors of that paper compared the performance of several different coordination algorithms on these examples, and concluded that no one algorithm was always the best. This is the classic type of experimental result [Cohen, 1991] that our modeling and analysis method was designed to address: we wish to explain this result, and better yet, to predict which algorithm will do the best in each situation. We wish to identify the characteristics of the DSN environment, or the organization of the agents, that cause one algorithm to outperform another. Our approach relies on a statistical characterization of an environment rather than single-instance examples. The first section will summarize our model of DSN environments and the results of our previous analysis of static coordination algorithms. The next section will discuss dynamic coordination in general, and then we will present the one-shot dynamic reorganization algorithm and confidence intervals on its performance. Finally, we will present our relative performance, communication cost, and variance reduction results.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Our task environment model assumes that several independent task groups arrive at multiple physical locations over a period of time called an episode. In a distributed sensor network (DSN) episode a single vehicle track corresponds to a task group. The movements of several independent vehicles will be detected over a period of time (the episode) by one or more distinct sensors, where each sensor is associated with an agent. For example, on the left side of Figure 2 we see a single episode with 5 vehicle tracks and the outlines of 9 non-overlapping sensor areas.
The performance of agents in such an environment will be based on how long it takes them to process all the task groups (vehicle tracks), which will include the cost of communicating data, task results, and meta-level communication, if any. The organizational structure of the agents will imply which subsets of which task groups (which portions of which vehicle tracks) are available to which agents and at what cost (an agent can get information from its own sensor more cheaply than by requesting information from another agent’s sensor). Usually DSN agents have overlapping sensors, and either agent can potentially work on data that occurs in the overlapping area without any extra communication costs. We make several simplifying assumptions: that the agents are homogeneous (have the same capabilities with respect to receiving data, communicating, and processing tasks), that the agents are cooperative (interested in maximizing the system performance over maximizing their individual performance), that the data for each episode is available simultaneously to all agents as specified by their initial organization, and that there are only structural (precedence) constraints within the subtasks of each task group.¹ Any single episode can be specified by listing the task groups (vehicle tracks), and what part of each task group was available to which agents, given the organizational structure. Our analysis will be based on the statistical properties of episodes in an environment, not any single instance of an episode.
The properties of the episodes in a simple DSN environment are summarized by the tuple 𝒟 = ⟨A, η, r, o, T⟩, where A specifies the number of agents, η the expected number of task groups, r and o specify the structural portion of the organization by the physical range of each agent’s sensor and the physical overlap between agent sensors², and T specifies the homogeneous task group structure (an example of the task group structure is shown in Figure 1). A particular episode in this environment can be described by the tuple D = ⟨A, r, o, T1, ..., Tn⟩, where n is a random variable drawn from a Poisson distribution with an expected value of η.

¹In general there are usually more complex interrelationships between subtasks that affect scheduling decisions, such as facilitation [Decker and Lesser, 1991].
²We will also assume the sensors start in a square geometry, i.e., 4 agents in a 2 x 2 square, 25 agents arranged 5 x 5.

In a DSN episode, each vehicle track is modeled as a task group. The structure of each task group is based loosely on the processing done by a particular DSN, the Distributed Vehicle Monitoring Testbed (DVMT) [Lesser and Corkill, 1983]. Our simple model is that each task group i is associated with a track of length l_i and has the following structure: (l_i) vehicle location methods (VLMs) that represent processing raw signal data at a single location resulting in a single vehicle location hypothesis; (l_i − 1) vehicle tracking methods (VTMs) that represent short tracks connecting the results of the VLM at time t with the results of the VLM at time t + 1; (1) vehicle track completion method (VCM) that represents merging all the VTMs together into a complete vehicle track hypothesis. Non-local precedence relationships exist between each method at one level and the appropriate method at the next level as shown in Figure 1: two VLMs precede each VTM, and all VTMs precede the lone VCM.
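The task-group structure just described (l_i VLMs, l_i − 1 VTMs, one VCM, with the stated precedence links) can be sketched as follows. This is an illustrative sketch, not code from the paper; the function name and method labels are our own.

```python
def track_task_group(l):
    """Build the methods and precedence links for a vehicle track of length l.

    Returns (methods, precedence), where precedence is a list of
    (before, after) pairs as described in the text: the VLMs at times
    t and t+1 precede VTM t, and every VTM precedes the lone VCM.
    """
    vlms = [f"VLM{t}" for t in range(l)]        # one per sensed location
    vtms = [f"VTM{t}" for t in range(l - 1)]    # one per short track segment
    vcm = "VCM"                                 # the single track-completion method
    precedence = []
    for t in range(l - 1):
        precedence.append((vlms[t], vtms[t]))       # VLM at time t ...
        precedence.append((vlms[t + 1], vtms[t]))   # ... and at t+1 precede VTM t
    for vtm in vtms:
        precedence.append((vtm, vcm))               # all VTMs precede the VCM
    return vlms + vtms + [vcm], precedence
```

For a track of length 4 this yields 4 VLMs, 3 VTMs, and 1 VCM, with 2·3 + 3 = 9 precedence links, matching the counts given in the text.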
Besides executing methods, agents may also execute communication actions and information gathering actions (such as getting data from the sensors or communications from other agents). We assume that communication and information gathering are no more time consuming than problem solving computation (in practice they are often much quicker; see the analysis in the final section of this paper). A more complete description of our modeling framework, which can handle much more complexity than this simple model illustrates, can be found in [Decker and Lesser, 1993b], in this volume.

Figure 1: Task structure associated with a single vehicle track.

Later analysis in this paper will be verified by comparing model-based predictions against a DSN simulation, which generates and simulates the execution of arbitrary environments 𝒟. In the simulation we assume that each vehicle is sensed at discrete integer locations (as in the DVMT), randomly entering on one edge and leaving on any other edge. In between, the vehicle travels along a track moving either horizontally, vertically, or diagonally each time unit using a simple DDA line-drawing algorithm (for example, see Figure 2). Given the organization (r, o, and A, and the geometry), we can calculate what locations are seen by the sensors of each agent. This information can then be used along with the locations traveled by each vehicle to determine what part of each task group is initially available to each agent. The analysis summaries in the next section were also verified by simulation; please see [Decker and Lesser, 1993a] for the derivation and verification of these early results; later in this paper we will discuss our verification methodology.

Environmental Analysis Summary

The termination of the system as a whole can be tied to the completion of all tasks at the most heavily loaded agent.
Normally, we would use the average number of methods to be executed, but since the focus of our analysis is the termination of problem solving, we need to examine the expected maximum size (Ŝ) of an initial data set seen by some agent as a random variable. This basic environmental analysis result is taken from the derivation in [Decker and Lesser, 1993a]; it is equivalent to the expected number of VLM methods seen by the maximally loaded agent in an episode. This value also depends on the expected number of task groups (n̂) seen by that same agent, another random variable. For example, the observed value of the random variable (Ŝ) in the particular episode shown on the left side of Figure 2 is 22 sensed data points at agent A4, and the number of task groups (tracks) seen by that same agent is 4 (n̂ = 4). The average number of agents that see a single track (which we represent by the variable a) is 3.8 in the same episode. If the system of agents as a whole sees n total task groups, then the discrete probability distributions of n̂ and Ŝ are:

Pr[n̂ = N | n] = g_{A,n,a/A}(N)    (1)
Pr[ŝ = s | n̂ = N] = g_{A,N,0.5}(s)    (2)
Ŝ = r·ŝ + (r/2)(n̂ − ŝ)    (3)

The function g_{A,n,p}(s) is called the max binomial order statistic, and is defined in terms of the simple binomial probability function b_{n,p}(s) as follows:

b_{n,p}(s) = C(n, s) p^s (1 − p)^{n−s}    [Pr[S = s]]
B_{n,p}(s) = Σ_{x=0}^{s} b_{n,p}(x)    [Pr[S ≤ s]]
g_{A,n,p}(s) = B_{n,p}(s)^A − B_{n,p}(s − 1)^A    [Pr[Ŝ = s]]    (4)

The variable a represents the average number of agents that see a single task group, and is estimated from the sensor geometry (r, o, and A). These results, derived and verified in [Decker and Lesser, 1993a], will be used in the following sections within formulae for the predicted performance of coordination algorithms.

Static Coordination Algorithm: Analysis Summary

In our static algorithm, agents always divide up the overlapping sensor areas evenly between themselves so that they do not do redundant work, and never have to communicate about their areas of responsibility.
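The binomial machinery behind the max order statistic (Eq. 4) can be sketched directly; the function names mirror the notation in the text but are otherwise our own.

```python
from math import comb

def b(n, p, s):
    """Binomial pmf: Pr[S = s] for s successes in n trials, Eq. 4 style."""
    return comb(n, s) * p**s * (1 - p)**(n - s)

def B(n, p, s):
    """Binomial cdf: Pr[S <= s]."""
    return sum(b(n, p, x) for x in range(s + 1))

def g(A, n, p, s):
    """Max binomial order statistic: Pr[the largest of A iid
    Binomial(n, p) draws equals s]."""
    lower = B(n, p, s - 1) if s > 0 else 0.0
    return B(n, p, s) ** A - lower ** A
```

As a sanity check, g sums to 1 over s, and with A = 1 it reduces to the plain binomial pmf.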
The total time until termination for an agent receiving the maximum initial data set (of size Ŝ) is the time to do local work, combine results from other agents, and build the completed results, plus two communication and information gathering actions. Because this agent has the maximum amount of initial data, it will not finish any more quickly than any other agent, and therefore we can assume it will not have to wait for the results of other agents. The termination time of this agent (and therefore the termination time of the entire statically organized system) can be computed from the task structure shown earlier, and a duration function d(M) that returns the duration of method M:

T_static = Ŝ·d(VLM) + (Ŝ − n̂)·d(VTM) + (a − 1)·n̂·d(VTM) + n̂·d(VCM) + 2d(I) + 2d(C)    (5)

We can use Eq. 5 as a predictor by combining it with the probabilities for the values of Ŝ and n̂ given in Eqns. 3 and 1. Again, we refer the interested reader to [Decker and Lesser, 1993a] for derivations and verification.

Analyzing Dynamic Organizations

In the dynamic organizational case, agents are not limited to the original organization and initial distribution of data. Agents can re-organize by changing the initial static boundaries (changing responsibilities in the overlapping areas), or by shipping raw data to other agents for processing (load balancing). We will assume in this section that the agents do not communicate with each other about the current local state of problem solving directly. A clearer distinction is that in a one-shot dynamic organization each agent makes its initial decision (about changing boundaries or shipping raw data) without access to non-local information. By contrast, in a full meta-level communication algorithm (like Partial Global Planning) the agent has access to both its local information and a summary of the local state of other agents.
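Before turning to the dynamic case, the static termination formula (Eq. 5) can be sketched numerically. This is a minimal sketch with hypothetical names; durations are passed as a dictionary.

```python
def t_static(S_hat, n_hat, a, d):
    """Termination time of the maximally loaded agent (Eq. 5).

    S_hat: max initial data set size, n_hat: task groups seen by that
    agent, a: average number of agents seeing one track, d: dict of
    action durations for VLM, VTM, VCM, information gathering (I),
    and communication (C)."""
    return (S_hat * d["VLM"]                # process local signal data
            + (S_hat - n_hat) * d["VTM"]    # local short-track methods
            + (a - 1) * n_hat * d["VTM"]    # merge other agents' track pieces
            + n_hat * d["VCM"]              # complete each track
            + 2 * d["I"] + 2 * d["C"])      # two info-gathering + two comm actions
```

With all durations set to 1 and the Figure 2 values (Ŝ = 22, n̂ = 4, a = 3.8), this evaluates to 59.2 time units.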
In this paper the decision to dynamically change the organization is made only once, at the start of an episode after the initial information-gathering action has occurred. In the case of reorganized overlapping areas, agents may shift the initial static boundaries by sending a (very short) message to overlapping agents, telling the other agents to do more than the default amount of work in the overlapping areas. The effect at the local agent is to change its effective range parameter from its static value of r′ = r − o/2 to some value r″ where r − o/2 > r″ ≥ r − o, changing the first two terms of Equation 5, and adding a communication action to indicate the shift and an extra information gathering action to receive the results. The following section discusses a particular implementation of this idea that chooses the partition of the overlapping area that best reduces expected differences between agents’ loads and averages competing desired partitions from multiple agents. In the load balancing case, an agent communicates some proportion p of its initial sensed data to a second agent, who does the associated work and communicates the results back. Instead of altering the effective range and overlap, this method directly reduces the first two terms of Equation 5 by the proportion p. The proportion p can be chosen dynamically in a way similar to that of choosing where to partition the overlap between agents (see the next section). Whether or not a dynamic reorganization is useful is a function of both the agent’s local workload and also the load at the other agent. The random variable S again represents the number of initially sensed data points at an agent.
Looking first at the local utility, to do local work under the initial static organization with n task groups, any agent will take time:

S·d(VLM) + (S − n)·d(VTM)    (6)

When the static boundary is shifted before any processing is done, the agent will take time

d(C_short) + S″·d(VLM) + (S″ − n)·d(VTM) + d(I)    (7)

to do the same work, where C_short is a very short communication action which is potentially much cheaper than the result communications mentioned previously, and S″ is calculated using the new range r″. When balancing the load directly, local actions will take time

d(C_long) + pS·d(VLM) + p(S − n)·d(VTM) + d(I)    (8)

where d(C_long) is potentially much more expensive than the communication actions mentioned earlier (since it involves sending a large amount of raw data). If the other agent had no work to do, a simple comparison between these three equations would be a sufficient design rule for deciding between static and either dynamic organization. Of course, we cannot assume that the other agent is not busy; the best we can do a priori (without an extra meta-level communication during a particular episode) is to assume the other agent has the average amount of work to do. We can derive a priori estimates for the average local work at another agent from Equation 6 by replacing S with S̄, the probability distribution of the average initial sensed data at an agent. This probability distribution is the same as Eq. 3 except that we replace the probability function of the max order statistic g_{A,N,p}(s) in Eq. 2 with the simple binomial probability function b_{N,p}(s) (we’ll restate the equations for our implementation in the next section). Therefore without any meta-level communication an agent can estimate how busy its neighbors are, and a system of agents could choose intelligently between static, dynamic overlap reorganization, and dynamic load balancing given these constraints.
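The three-way comparison among Eqs. 6–8 can be sketched as a simple decision rule. This is an illustrative sketch under the simplifying assumption that the other agent is idle (the text's "sufficient design rule" case); all names are ours.

```python
def best_organization(S, n, S2, p, d):
    """Compare local completion times under the three designs (Eqs. 6-8).

    S: local data points, n: local task groups, S2: data size after
    shifting the boundary (S'' in the text), p: proportion shipped when
    load balancing, d: dict of durations including the short and long
    communication actions Cshort and Clong."""
    static = S * d["VLM"] + (S - n) * d["VTM"]                       # Eq. 6
    shifted = d["Cshort"] + S2 * d["VLM"] + (S2 - n) * d["VTM"] + d["I"]   # Eq. 7
    balanced = d["Clong"] + p * S * d["VLM"] + p * (S - n) * d["VTM"] + d["I"]  # Eq. 8
    return min([("static", static), ("shift", shifted), ("balance", balanced)],
               key=lambda kv: kv[1])
```

For example, with unit durations except d(C_long) = 5, an agent holding 20 data points (4 tracks) that could shed half its area (S″ = 10) would prefer the boundary shift (18 time units) over staying static (36) or shipping half its raw data (24).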
One-shot Dynamic Coordination Algorithm for Reorganization

This section describes a particular implementation, for the DSN simulation, of the general idea described earlier of dynamically reorganizing the partitions between agents. This implementation keeps each agent’s area of responsibility rectangular, and relaxes competing constraints from other agents quickly and associatively (the order of message arrival does not affect the eventual outcome). To do this, the message sent by an agent requests the movement of the four corridors surrounding an agent. The northern corridor of Agent 1, for example, is the northern agent organizational responsibility boundary shared by every agent in the same row as Agent 1. As can be seen in Figure 2, a 3x3 organization has four corridors (between rows 1 and 2, 2 and 3, and between columns 1 and 2, 2 and 3). The coordination algorithm described here works with the static local scheduling algorithm described in [Decker and Lesser, 1993a]. This is consistent with our view of coordination as a modulating behavior [Decker and Lesser, 1991]. This simple local scheduler basically runs a loop that finds all local methods that can currently be executed that are also tied to data within the agent’s static, non-overlapping sensor area, and then executes one. If no methods can be executed, the current set of results (if new) are broadcast to the other agents, and an information gathering action is executed to receive any new communication from other agents. The only modification to the local scheduler for the dynamic system is that we prevent it from scheduling local method execution actions until our initial communications are completed (the initial and reception phases, described below). The coordination algorithm is then as follows. During the initial phase the local scheduler schedules the initial information gathering action, and we proceed to the second phase, reception.
In the second phase we use the local information to decide what organizational design to use, and the parameter values for the design we choose. To do this we calculate the duration of our (known) local work under the default static organization (Eq. 6), and then estimate that duration under the alternative organizations (dynamic reorganization or load-balancing). When a parameter needs to be estimated, we do so to minimize the absolute expected difference between the amount of work to be done locally and the amount of work done at the remote agent that is impacted the most by the proposed change. For example, when dynamically restructuring, if the overlap between agents is more than 2 units, we have a choice of reducing the area an agent is responsible for by more than 1 unit (this is the organizational design parameter p in question). To decide on the proper reduction (if any), each agent computes its known local work W using Eq. 6 with the actual (not estimated) S and N computed assuming the agent’s area is reduced by p. Then the agent finds the value of p that minimizes the difference between its known local work W(r − p, S, N) and the average work W̄(r + p, S̄, N̄) at the other agent:

S(r, s, N) = r·s + (r/2)(N − s)
W(r, s, N) = S(r, s, N)·d(VLM) + (S(r, s, N) − N)·d(VTM)
E[W̄] = Σ_{N=0}^{n} Σ_{s=0}^{N} b_{n,a/A}(N) · b_{N,0.5}(s) · W(r, s, N)    (9)

If p = 0, then the agent will not restructure. If p ≠ 0, then the agent sends a message to all affected agents requesting a reduction of amount p in each corridor (north, east, south, and west). The agent sets its current area of interest to include only the unique (non-overlapping) portion of its area (if any), and enters the unique-processing phase. During this phase the regular local scheduler described earlier controls method execution actions.
Figure 2: On the left is a 3x3 static organization; on the right is the dynamic reorganization result after agents 3, 4, 5 and 7 attempt to reduce their areas of responsibility by one unit. These are actual screen visualizations (originally in color) from our simulation package.

When no more methods unique to this agent can be executed, the coordination algorithm checks the current time. If enough time has passed for the messages from other agents (if any) to arrive (this depends on the communication delays in the system), the coordination algorithm schedules an information-gathering action to retrieve the messages. Note that every agent may reach this point at a different time; agents with a large amount of unique local work may take some time, agents with no work at all will wait idle for the length of communication delay time in the system. At this point each agent will relax its borders according to the wishes of the other agents. The relaxation algorithm we have chosen is fairly simple and straightforward, though several similar choices are possible. The algorithm is symmetric with respect to the four corridors surrounding the agent, so we will just discuss the relaxation of the northern corridor. There will be a set of messages about that corridor, some wanting it moved up by some amount and some wanting it moved down by some amount; we will consider these as positive and negative votes of some magnitude. The relaxation algorithm sums the votes, and returns the sum unless it is larger than the maximum vote or smaller than the minimum vote, in which case the max or min is returned, respectively. Competing votes of the same magnitude sum to zero, and cancel each other. The summed value becomes the final direction and amount of movement of that corridor. Figure 2 shows a particular example, where four agents each vote to reduce their areas of responsibility by one unit.
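The corridor relaxation rule just described (sum the votes, clamped to the extreme requests) can be sketched in a few lines; the function name is ours.

```python
def relax_corridor(votes):
    """Combine competing corridor-movement requests.

    Each vote is a signed magnitude (positive = move one way,
    negative = the other). The result is the sum of the votes,
    unless the sum exceeds the maximum vote or falls below the
    minimum vote, in which case that extreme is returned."""
    if not votes:
        return 0  # no requests: the corridor stays put
    total = sum(votes)
    return max(min(total, max(votes)), min(votes))
```

For example, two agents both requesting a one-unit move yield a one-unit move (the sum, 2, is clamped to the maximum vote), and equal opposing votes cancel to zero, as the text specifies.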
At this point the agent has a new static area that does not overlap with any other agent (since all agents will see the same information and follow the same decision procedure), and it enters the final normal-processing phase, and the local scheduler schedules all further actions as described earlier (scheduling only tasks in the new, non-overlapping range). To summarize: the agents first perform information gathering to discover the amount of local sensor data in the episode. They then use this local information to decide how to divide up the overlapping regions they share with other agents, using the assumption that the other agents have the average amount of local work to do. This parameter is then communicated to all the affected agents, and the agent works on data in its unique area (if any), the part of the agent’s sensor range that is never the subject of negotiation because only that agent can sense it. After completing and communicating this unique local work, the agent performs another information gathering action to receive the parameter values from the other agents, and a simple algorithm produces a compromise for the way the overlap will be divided up. The agents now proceed as in the static case until the end of the episode.

Analyzing the Dynamic Restructuring Algorithm

As we did in [Decker and Lesser, 1993a], we can develop an expression for the termination time of any episode where the agents follow this algorithm. To do so, we start with the basic termination time given all of the random variables:

T_dynamic = max(T_static[r = r − p], T_static[r = r + p, Ŝ = S̄, n̂ = N̄])    (10)

where p is computed as described in the last section using the values of (r, Ŝ, n̂, S̄, N̄). To turn this into a predictive formula, we then use the expressions for the probabilities of the terms Ŝ, n̂, S̄, and N̄ (from Eqns. 3 and 1).
For example, we can produce an expression for the expected termination of the algorithm:

E[T_dynamic] = Σ_{N̂=0}^{n} Σ_{ŝ=0}^{N̂} Σ_{N̄=0}^{n} Σ_{s̄=0}^{N̄} g_{A,n,a/A}(N̂) · g_{A,N̂,0.5}(ŝ) · b_{n,a/A}(N̄) · b_{N̄,0.5}(s̄) · T_dynamic[r, Ŝ, n̂, S̄, N̄]    (11)

We tested the predictions of Equation 11 versus the mean termination time of our DSN simulation over 10 repetitions in each of 10 randomly chosen environments from the design space [2 ≤ r ≤ 10, 0 ≤ o ≤ r, 1 ≤ η ≤ 5, 1 ≤ N ≤ 10]. The durations of all tasks were set at 1 time unit, as were the duration of information gathering and communication actions, with the exception of the 4 environments shown in the next section. We used the simulation validation statistic suggested by Kleijnen [Kleijnen, 1987] (where ŷ = the output predicted by the analytical model and ȳ = the output of the simulation):

z = (ŷ − ȳ) / sqrt(Var(ŷ) + Var(ȳ))    (12)

where Var(ŷ) is the predicted variance.³ The result z can then be tested for significance against the standard normal tables. In each case, we were unable to reject the null hypothesis that the actual mean termination equals the predicted mean termination at the α = 0.05 level, thus validating our formal model.⁴

Increasing Task Durations

Figure 3 compares the termination of static and dynamic restructuring organizations on identical episodes in four different environments. From left to right, the environments were [A = 9, r = 9, o = 9, n = 7], [A = 4, r = 9, o = 3, n = 5], [A = 16, r = 8, o = 5, n = 4], [A = 9, r = 10, o = 6, n = 7]. Ten different episodes were generated for each environment.
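The validation test can be sketched with a common form of Kleijnen's statistic; we assume this form here (the function name is ours), with the result compared against the standard normal tables as in the text.

```python
from math import sqrt

def kleijnen_z(pred_mean, pred_var, sim_mean, sim_var):
    """Kleijnen-style simulation-validation statistic (assumed form of
    Eq. 12): the standardized difference between the model's predicted
    mean termination and the simulated mean termination."""
    return (pred_mean - sim_mean) / sqrt(pred_var + sim_var)
```

A |z| below about 1.96 means the null hypothesis (predicted mean equals actual mean) cannot be rejected at the α = 0.05 level.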
In order to see the benefits of dynamic restructuring more clearly, we chose task durations for each environment similar to those in the DVMT: d(VLM) = 6, d(VTM) = 2, and d(VCM) = 2.⁵ Note that the dynamic organization often does significantly better than the static organization, and rarely does much worse; remember that in many particular episodes the dynamically organized agents will decide to keep the static organization, although they pay a constant overhead when they keep the static organization (one extra communication action and one extra information gathering action, given that the time for a message to reach all agents is no longer than the communication action time).

Figure 3: Paired-response comparison of the termination of static and dynamic systems in four different environments (ten episodes in each). Task durations are set to simulate the DVMT (see text).

Comparative Analyses

The next figure demonstrates the effect of the ratio of computation duration to communication duration. This and subsequent figures assume that the dynamic restructuring shrinkage parameter p is set to minimize the difference between maximum and average local work as described in the previous section. Figure 4 shows how the expected value and 50% confidence interval on system termination changes as the duration of a method execution action changes from equal to (1x) a communication action to 10 times (10x) that of a communication action.

³The predicted variance of Equation 5 can be easily derived from the statistical identity Var(x) = E[x²] − (E[x])².
⁴For non-statisticians: the null hypothesis is that our prediction is the same as the actual value; we did not wish to reject it, and we did not.
⁵The idea being that the VLM methods correspond to the lowest three DVMT KSIs as a group, and the other methods correspond to single DVMT KSIs, and that a KSI has twice the duration of a communication action.
The task structure remains that of the DSN example described in Section . In Figure 4 we see a clear separation emerge between static and dynamic termination. The important point to take from this example is not this particular answer, but that we can do this analysis for any environment.

Figure 4: Predicted effect of decreasing communication costs on expected termination under a static organization and dynamic restructuring (expected value and 50% confidence interval, A = 25, r = 9, o = 9, n = 7).

Decreasing Performance Variance

The earlier figures assume that the number of task groups n is known beforehand. The reason for this is to highlight the variance implicit in the organization, and to minimize the influence of the external environment. Figure 5 shows how much extra variance is added when only the expected value of n, which is q, is known. We assume that the number of task groups n (in the DSN example, vehicle tracks) that occur during a particular episode has a Poisson distribution with an expected value of q. The discrete probability function for the Poisson distribution, given in any statistics book, is then:

Pr[n = y] = q^y e^{−q} / y!

We can use this probability in conjunction with Eqns. 3, 6, and 9 to calculate the expected value, 50%, and 95% confidence intervals on termination in the static or dynamic organizations. An example of this calculation for one environment is shown in Figure 5. Note in Figure 5 both the large increase in variance when n is random, and the small decrease in variance in the dynamic restructuring organization. Note also that the mean termination time for the dynamic organization is less than that for the static organization.

Distributed Problem Solving 215
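The Poisson mixture described above can be sketched directly; a minimal sketch in which `term_time` stands in for the termination predictor of the earlier equations (it is a hypothetical placeholder, not the paper's formula).

```python
import math

def poisson_pmf(y, eta):
    """Pr[n = y] for a Poisson-distributed number of task groups
    with expected value eta (the paper's q)."""
    return eta ** y * math.exp(-eta) / math.factorial(y)

def expected_over_n(term_time, eta, n_max=50):
    """Weight a termination-time predictor term_time(n) by the
    probability of each task-group count n (truncated at n_max,
    which is harmless for moderate eta)."""
    return sum(poisson_pmf(n, eta) * term_time(n) for n in range(n_max + 1))

# toy linear predictor: termination grows with the number of task groups
est = expected_over_n(lambda n: 10 + 5 * n, eta=5)
print(round(est, 2))  # 35.0, i.e. 10 + 5 * E[n]
```

The same weighting, applied to the quantile functions instead of the mean, yields the widened 50% and 95% confidence intervals shown in Figure 5.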
Figure 5: Demonstration of both the large increase in performance variance when the number of task groups n is a random variable, and the small decrease in variance with dynamic restructuring coordination [A = 9, r = 22, o = 9]. Where n is known, n = 5. Where n is a random variable, the expected value q = 5. (Legend: static and dynamic organizations, each with n known and with n Poisson; x-axis: estimated termination time.)

Conclusions

This paper described a one-shot dynamic coordination algorithm for reorganizing the areas of responsibility for a set of distributed sensor network agents. When performance is measured in terms of the time for a system of agents to terminate, the class of dynamic algorithms can often outperform static algorithms, and reduce the variance in performance (which is a useful characteristic for real-time scheduling [Decker et al., 1990; Garvey and Lesser, 1993]). This paper presented a formula for the expected value (or variance, or confidence interval) of the termination time for a particular one-shot dynamic reorganization algorithm. It showed how this result can be used to predict whether the extra overhead of the dynamic algorithm is worthwhile compared to a static algorithm in a particular environment. Other questions were examined, such as the effect of decreasing communication costs, or increased uncertainty about the task environment. We hope these results can be used directly by designers of DSNs to choose the number, organization, and control algorithms of agents for their particular environments, and that they inspire the DAI community to move beyond the development of ideas using single-instance examples. We are currently analyzing a simple extension of this algorithm that uses two meta-level communication actions to provide agents with non-local information with which to make decisions about how to reorganize. We have observed only a small reduction in mean performance but a greater reduction in variance.
Future work we have planned includes the analysis of a multi-stage communication, PGP-style dynamic coordination algorithm, and the use of our expanded model that includes faulty sensors and ghost tracks [Decker and Lesser, 1993b].

References

Carver, N. and Lesser, V.R. 1991. A new framework for sensor interpretation: Planning to resolve sources of uncertainty. In Proceedings of the Ninth National Conference on Artificial Intelligence. 724-731.
Cohen, Paul R. 1991. A survey of the eighth national conference on artificial intelligence: Pulling together or pulling apart? AI Magazine 12(1):16-41.
Decker, K.S. and Lesser, V.R. 1991. Analyzing a quantitative coordination relationship. Technical Report 91-83, University of Massachusetts. To appear, Group Decision and Negotiation, 1993.
Decker, K.S. and Lesser, V.R. 1993a. An approach to analyzing the need for meta-level communication. In Proc. of the Thirteenth International Joint Conference on Artificial Intelligence, Chambery.
Decker, K.S. and Lesser, V.R. 1993b. Quantitative modeling of complex computational task environments. In Proc. of the Eleventh National Conference on Artificial Intelligence, Washington.
Decker, K.S.; Lesser, V.R.; and Whitehair, R.C. 1990. Extending a blackboard architecture for approximate processing. The Journal of Real-Time Systems 2(1/2):47-79.
Durfee, E.H. and Lesser, V.R. 1991. Partial global planning: A coordination framework for distributed hypothesis formation. IEEE Trans. on Systems, Man, and Cybernetics 21(5):1167-1183.
Durfee, E.H.; Lesser, V.R.; and Corkill, D.D. 1987. Coherent cooperation among communicating problem solvers. IEEE Trans. on Computers 36(11):1275-1291.
Garvey, A.J. and Lesser, V.R. 1993. Design-to-time real-time scheduling. IEEE Trans. on Systems, Man, and Cybernetics 23(6). Special Issue on Scheduling, Planning, and Control.
Kleijnen, J.P.C. 1987. Statistical Tools for Simulation Practitioners. Marcel Dekker, New York.
Lesser, V.R. and Corkill, D.D. 1983. The distributed vehicle monitoring testbed. AI Magazine 4(3):63-109.
Lesser, V.R. 1991. A retrospective view of FA/C distributed problem solving. IEEE Trans. on Systems, Man, and Cybernetics 21(6):1347-1363.
Quantitative Modeling of Complex Computational Task Environments

Keith Decker and Victor Lesser
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
DECKER@CS.UMASS.EDU

Abstract

Formal approaches to specifying how the mental state of an agent entails that it perform particular actions put the agent at the center of analysis. For some questions and purposes, it is more realistic and convenient for the center of analysis to be the task environment, domain, or society of which agents will be a part. This paper presents such a task environment-oriented modeling framework that can work hand-in-hand with more agent-centered approaches. Our approach features careful attention to the quantitative computational interrelationships between tasks, to what information is available (and when) to update an agent's mental state, and to the general structure of the task environment rather than single-instance examples. A task environment model can be used for both analysis and simulation; it avoids the methodological problems of relying solely on single-instance examples, and provides concrete, meaningful characterizations with which to state general theories. This paper will give an example of a model in the context of cooperative distributed problem solving, but our framework is used for analyzing centralized and parallel control as well.

Introduction

This paper presents a framework, TÆMS (Task Analysis, Environment Modeling, and Simulation), with which to model complex computational task environments, that is compatible with both formal agent-centered approaches and experimental approaches. The framework allows us to both analyze and quantitatively simulate the behavior of single or multi-agent systems with respect to interesting characteristics of the computational task environments of which they are part.
We believe that it provides the correct level of abstraction for meaningfully evaluating centralized, parallel, and distributed control algorithms for sophisticated knowledge-based systems. No previous characterization formally captures the range of features, processes, and especially interrelationships that occur in computationally intensive task environments.

*This work was supported by DARPA contract N00014-92-J-1698, Office of Naval Research contract N00014-92-J-1450, and NSF contract CDA 8922572. The content of the information does not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred.

We use the term computational task environment to refer to a problem domain in which the primary resources to be scheduled or planned for are the computational processing resources of an agent or agents, as opposed to physical resources such as materials, machines, or men. Examples of such environments are distributed sensor networks, distributed design problems, complex distributed simulations, and the control processes for almost any distributed or parallel AI application. A job-shop scheduling application is not a computational task environment, but the control[1] of multiple distributed or large-grain parallel processors that are jointly responsible for solving a job-shop scheduling problem is. Distributed sensor networks use resources such as sensors, but they are typically not the primary scheduling consideration. Computational task environments are the problem domain for control algorithms like many real-time and parallel local scheduling algorithms [Boddy and Dean, 1989; Garvey and Lesser, 1993; Russell and Zilberstein, 1991] and distributed coordination algorithms [Decker and Lesser, 1991; Durfee et al., 1987]. The reason we have created the TÆMS framework is rooted in the desire to produce general theories in AI [Cohen, 1991].
Consider the difficulties facing an experimenter asking under what environmental conditions a particular local scheduler produces acceptable results, or when the overhead associated with a certain coordination algorithm is acceptable given the frequency of particular subtask interrelationships. At the very least, our framework provides a featural characterization and a concrete, meaningful language with which to state correlations, causal explanations, and other forms of theories. The careful specification of the computational task environment also allows the use of very strong analytic or experimental methodologies, including paired-response studies, ablation experiments, and parameter optimization. TÆMS exists as both a language for stating general hypotheses or theories and as a system for simulation. The simulator supports the graphical display of generated subjective and objective task structures, agent actions, and statistical data collection in CLOS on the TI Explorer.

[1] Planning and/or scheduling of computation.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

The next section will discuss the general nature of the three modeling framework layers. The following sections discuss the details of the three levels. After describing each layer, we will give an example of a model built with this framework. This example grows out of the set of single-instance examples of distributed sensor network (DSN) problems presented in [Durfee et al., 1987] using the Distributed Vehicle Monitoring Testbed (DVMT). The authors of that paper compared the performance of several different coordination algorithms on these examples, and concluded that no one algorithm was always the best. This is the classic type of result that the TÆMS framework was created to address: we wish to explain this result and, better yet, to predict which algorithm will do the best in each situation.
The level of detail to which you build your model will depend on the question you wish to answer; we wish to identify the characteristics of the DSN environment, or the organization of the agents, that cause one algorithm to outperform another. In a DSN problem like the DVMT, the movements of several independent vehicles will be detected over a period of time by one or more distinct sensors, where each sensor is associated with an agent. The performance of agents in such an environment is based on how long it takes them to create complete vehicle tracks, including the cost of communication. The organizational structure of the agents will imply the portions of each vehicle track that are sensed by each agent.

General Framework

The principal thing that is being analyzed, explained, predicted, or hypothesized is the performance of a system or some component. While TÆMS does not establish a particular performance criterion, it focuses on providing two kinds of performance information: the temporal location of task executions, and the quality of the execution or its result. Quality is an intentionally vaguely-defined term that must be instantiated for a particular environment and performance criteria. Examples of quality measures include the precision, belief, or completeness of a task result. We will assume that quality is a single numeric term with an absolute scale, although the algebra can be extended to vector terms. In a computationally intensive AI system, several quantities (the quality produced by executing a task, the time taken to perform that task, the time when a task can be started, its deadline, and whether the task is necessary at all) are affected by the execution of other tasks. In real-time problem solving, alternate task execution methods may be available that trade off time for quality. Agents do not have unlimited access to the environment; what an agent believes and what is really there may be different.
The model of environmental and task characteristics proposed has three levels: objective, subjective, and generative. The objective level describes the essential, 'real' task structure of a particular problem-solving situation or instance over time. It is roughly equivalent to a formal description of a single problem-solving situation such as those presented in [Durfee and Lesser, 1991], without the information about particular agents. The subjective level describes how agents view and interact with the problem-solving situation over time (e.g., how much does an agent know about what is really going on, and how much does it cost to find out; that is, where the uncertainties are from the agent's point of view). The subjective level is essential for evaluating control algorithms, because while individual behavior and system performance are measured objectively, agents make decisions with only subjective information. Finally, the generative level describes the statistical characteristics required to generate the objective and subjective situations in a domain (how likely are particular task structures, and what variation is present?).

Objective Level

The objective level describes the essential structure of a particular problem-solving situation or instance over time. It focuses on how task interrelationships dynamically affect the quality and duration of each task. The basic model is that task groups appear in the environment at some frequency, and induce tasks T to be executed by the agents under study. Task groups are independent of one another, but tasks within a single task group have interrelationships. Task groups or tasks may have deadlines D(T). The quality of the execution or result of each task influences the quality of the task group result Q(T) in a precise way. These quantities can be used to evaluate the performance of a system.
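The objective-level vocabulary above (task groups, tasks, deadlines) can be sketched as plain data structures; a minimal sketch in which all class and field names are ours, not the paper's.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Task:
    """A node in the objective task structure: a task with optional
    direct subtasks and an optional deadline D(T)."""
    name: str
    subtasks: list = field(default_factory=list)
    deadline: Optional[float] = None

@dataclass
class TaskGroup(Task):
    """Task groups arrive in the environment independently of one
    another; interrelationships exist only among tasks inside a group."""
    arrival: float = 0.0

tg = TaskGroup("track-1",
               subtasks=[Task("VLM-1"), Task("VLM-2")],
               deadline=100.0)
print(tg.name, len(tg.subtasks))  # track-1 2
```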
An individual task that has no subtasks is called a method M and is the smallest schedulable chunk of work (though some scheduling algorithms will allow some methods to be preempted, and some schedulers will schedule at multiple levels of abstraction). There may be more than one method to accomplish a task, and each method will take some amount of time and produce a result of some quality. Quality of an agent's performance on an individual task is a function of the timing and choice of agent actions ('local effects'), and possibly previous task executions ('non-local effects').[2] The basic purpose of the objective model is to formally specify how the execution and timing of tasks affect this measure of quality.

Local Effects: The Subtask Relationship

Task or task group quality (Q(T)) is based on the subtask relationship. This quality function is constructed recursively (each task group consists of tasks, each of which consists of subtasks, etc.) until individual executable tasks (methods) are reached. Formally, the subtask relationship is defined as subtask(T, 𝒯, Q), where 𝒯 is the set of all direct subtasks of T and Q is a quality function Q(T, t) : [tasks × times] ↦ [quality] that returns the quality associated with T at time t. In a valid model, the directed graph induced by this relationship is acyclic (no task has itself for a direct or indirect subtask).

[2] When local or non-local effects exist between tasks that are known by more than one agent, we call them coordination relationships [Decker and Lesser, 1991].

The semantics of a particular environment are modeled by the appropriate choice of the quality function Q (e.g., minimum, maximum, summation, or the arithmetic mean). For example, if subtask(T1, 𝒯, Qmin), then Q(T1, t) = Qmin(𝒯, t) = min_{T∈𝒯} Q(T, t). In this case the quality that is associated with task T1 is the minimum quality associated with any of its subtasks.
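The recursive construction of Q(T, t) can be sketched directly; a minimal sketch using dictionaries for task nodes (the representation, not the recursion, is our own invention).

```python
def quality(task, t, leaf_quality):
    """Q(T, t): recursively combine direct-subtask qualities with the
    task's quality accrual function; leaf tasks (methods) are looked up
    directly. min yields the AND-like behavior described in the text."""
    if not task.get("subtasks"):
        return leaf_quality[task["name"]](t)
    accrual = task.get("accrual", min)
    return accrual(quality(s, t, leaf_quality) for s in task["subtasks"])

# Qmin: the parent's quality stays at the minimum of its subtasks
t1 = {"name": "T1", "accrual": min,
      "subtasks": [{"name": "M1"}, {"name": "M2"}]}
leaf = {"M1": lambda t: 5.0,
        "M2": lambda t: 0.0 if t < 3 else 2.0}
print(quality(t1, 2, leaf), quality(t1, 4, leaf))  # 0.0 2.0
```

Swapping `min` for `max`, `sum`, or an average changes the environment's semantics without touching the recursion, exactly as the choice of Q does in the model.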
This is sometimes referred to as an AND because the quality of the parent remains at a minimum until every subtask has been completed. Other functions are used for modeling particular environments. Functions like sum and average indicate the possibility that not all tasks in the environment need to be carried out. We have now described how quality is modeled at tasks that have subtasks, but what about methods? Each method M at a time t will potentially produce (if executed by an agent) some maximum quality q(M, t) in some amount of elapsed time d(M, t) (we will defer any further definition of the functions d and q until we discuss non-local effects). The execution of methods is interruptible, and if multiple methods for a single task are available, the agent may switch between them (typically alternative methods trade off time and quality).[3] Let P(M, t) be the current amount of progress on the execution of M. If M were not interruptible and S(M) and F(M) were the execution start time and finish time, respectively, of M, then:

P(M, t) = 0 for t ≤ S(M);  t − S(M) for S(M) < t < F(M);  F(M) − S(M) for t ≥ F(M)

We typically model the quality produced by a method Q(M, t) using a linear growth function Qlin:

Qlin(M, t) = q(M, t) · P(M, t) / d(M, t)

Other models (besides linear quality functions) have been proposed and are used, such as concave quality functions (must execute most of a task before quality begins to accumulate), convex quality functions (most quality is achieved early on in a method, and only small increases occur later), and 'mandatory and optional parts' quality functions [Liu et al., 1991]. The desired Q(M, t) can be easily defined for any of these. As an example of the power of this representation, we consider the two main schools of thought on quality accumulation: the anytime algorithm camp [Boddy and Dean, 1989] and the design-to-time (approximate processing) camp [Decker et al., 1990; Garvey and Lesser, 1993].
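The progress function and the two competing quality-accumulation models can be sketched side by side; a minimal sketch assuming the linear form Q = q · P / d for the anytime model and a step function for design-to-time.

```python
def progress(t, start, finish):
    """P(M, t) for an uninterrupted execution from S(M)=start to F(M)=finish."""
    if t <= start:
        return 0
    return min(t, finish) - start

def q_lin(t, start, finish, q_max):
    """Anytime-style linear quality growth: quality accrues in proportion
    to progress, reaching the method's maximum quality at completion."""
    return q_max * progress(t, start, finish) / (finish - start)

def q_dtt(t, start, finish, q_max):
    """Design-to-time: no quality accrues until the method completes."""
    return q_max if progress(t, start, finish) >= finish - start else 0.0

# halfway through a 10-unit method with maximum quality 8:
print(q_lin(5, 0, 10, 8.0), q_dtt(5, 0, 10, 8.0))  # 4.0 0.0
```

The two schools differ only in this one function, which is exactly the point of the representation: both fit the same Q(M, t) slot.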
We can represent their ideas succinctly; in the anytime algorithm model partial results are always available,[4] as in the definition of Qlin(M, t) above, while in the design-to-time model results are not available (quality does not accrue) until the task is complete, as in the definition of QDTT(M, t):[5]

QDTT(M, t) = 0 for P(M, t) < d(M, t);  q(M, t) for P(M, t) ≥ d(M, t)

[3] We model the effect of interruptions, if any, and the reuse of partial results as non-local effects.
[4] In Boddy's paper, the assumption is made that Q(M, t) has monotonically decreasing gain.

Any task T containing a method that starts executing before the execution of another method M finishes may potentially affect M's execution through a non-local effect e. We write this relation nle(T, M, e, p1, p2, ...), where the p's are parameters specific to a class of effects. There are precisely two possible outcomes of the application of a non-local effect on M under our model: duration effects, where d(M, t) (duration) is changed, and quality effects, where q(M, t) (maximum quality) is changed. An effect class e is thus a function e(T, M, t, d, q, p1, p2, ...) : [task × method × time × duration × quality × parameter 1 × parameter 2 × ...] ↦ [duration × quality]. The amount and direction of an effect is dependent on the relative timing of the method executions, the quality of the effect's antecedent task, and whether information was communicated between the agents executing the methods (in multi-agent models). Some effects are continuous, depending on the current quality of the effect's antecedent Q(T, t). Some effects are triggered by a rising edge of quality past a threshold; for these effects we define the helper function Θ(T, θ) that returns the earliest time when the quality surpasses the threshold: Θ(T, θ) = min t s.t. Q(T, t) > θ.

Communication. Some effects depend on the availability of information to an agent.
We indicate the communication of information at time t about task Ta to an agent A with a delay of δt by comm(Ta, A, t, δt). There are many models of communication channels that we could take for a communication submodel; since it is not our primary concern we use a simple model with one parameter, the time delay δt.[6] For defining effects that depend on the availability of information, we define the helper function Qavail(T, t, A) that represents the quality of a task T 'available' to agent A at time t. If T was executed at A, Qavail(T, t, A) = Q(T, t). If T was executed (or is being executed) by another agent, then the 'available' quality is calculated from the last communication about T received at agent A prior to time t.

Computing d(M, t) and q(M, t). Each method M has an initial maximum quality q0(M) and initial duration d0(M); we define q(M, 0) = q0(M) and d(M, 0) = d0(M). If there is only one non-local effect with M as a consequent, nle(T, M, e, p1, p2, ...), then [d(M, t), q(M, t)] ←

[5] Another difference between design-to-time (DTT) and other approaches will show up in our generative and subjective additions to this model: DTT does not assume that Q(M, t) is fixed and known, but rather that it is an estimator for the actual method response.
[6] Other parameters, such as channel reliability, can be used. The description of an agent's control and coordination algorithms will describe when and where communication actually occurs (see our discussion of communication actions and the concept of agency in the Subjective Level section).
Non-local Effect Examples Non-local effects are the most important part of the T&MS framework, since they sup- ply most of the characteristics that make one task environ- ment unique and different from another. Typically a model will define different classes of effects, such as causes, fa- cilitates, cancels, constrains, inhibits, and enables[Decker and Lesser, 19921. This section contains definitions for three common classes of effects that have been useful in modeling different environments. Enables. If task Ta enables method M, then the max- imum quality q(M, t) = 0 until Ta is completed and the result is available, when the maximum quality will change to the initial maximum quality q(M, t) = qo(A.4). enables(T=, M, t, d, q, 6) = I% 01 [do(M), qo(M)] t < O(Ta,B) t 1 @(Zz, 0) (1) Facilitates. Another relationship, used by the PGP algo- rithm [Durfee and Lesser, 19911, is the facilitates relation- not only through the power parameters, but also through the quality of the facilitating task available when work on the facilitated task starts. faCilitah!s(~a9 M, t, d, q9 4d, 49) = { [d(l - f(4d, Qavail(C, t>, q(Ta, t>>>, Q(I+ f(&t Qavail(G, t>t q(Ta, t>>>l [d(l - f(h, Qavail(G, S(M)), q(Ta, t>>>, t < VW (2) ~(l+ f(&, Qavail(Ta, S(W), q(Ta, t>>)l t 1 S(M) where f(9, CA s> = :t,b. So if Ta is completed with max- imal quality, and the result is received before M is started, then the duration d(M, t) will be decreased by a percentage equal to the duration power &t of thefacilitates relationship. The second clause of the definition-indicates that commu- nication after the start of processing has no effect. In other work [Decker and Lesser, 19911 we explored the effects on coordination of a facizitates relationship with varying duration power &J, and with & = 0. Hinders. The hinders relationship is the opposite of fa- cilitates, because it increases the duration and-decreases the maximum quality of the consequent. 
This can be used as a high-level model of distraction [Durfee et al., 1987].

Objective Modeling Example

Now that we have discussed the basic components of an objective model, let us turn to an example in which we build a model using the TÆMS framework. In our model of DSN problems, the computation to interpret each vehicle track that occurs in the sensed environment is modeled as a task group. The simplest objective model is that each task group Ti is associated with a track of length li and has the following objective structure, based on the DVMT: (li) vehicle location methods (VLM) that represent processing raw signal data at a single location resulting in a single vehicle location hypothesis; (li − 1) vehicle tracking methods (VTM) that represent short tracks connecting the results of the VLM at time t with the results of the VLM at time t + 1; (1) vehicle track completion method (VCM) that represents merging all the VTMs together into a complete vehicle track hypothesis. Non-local enablement effects exist: two VLMs enable each VTM, and all VTMs enable the lone VCM. A picture can be found in the section "A model of DSN environments" in our companion paper [Decker and Lesser, 1993b] (in this volume).

Expanding the Model

We will now add some complexity to the model. Let us assume a simple situation: there are two agents, A and B, and there is one vehicle track of length 3, sensed once by A alone (T1), once by both A and B (T2), and once by B alone (T3). We now proceed to model the standard features that have appeared in our DVMT work for the past several years. We will add the characteristic that each agent has two methods with which to deal with sensed data: a normal VLM and a 'level-hopping' (LH) VLM (the level-hopping VLM produces less quality than the full method but requires less time; see [Decker et al., 1990; Decker et al., 1993] for this and other approximate methods).
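The track task-group structure described above (li VLMs, li − 1 VTMs each enabled by two adjacent VLMs, and one VCM enabled by all VTMs) can be generated mechanically; a minimal sketch with our own naming scheme for the methods.

```python
def dsn_task_group(length):
    """Objective structure for one vehicle track of the given length:
    `length` VLMs, `length - 1` VTMs (each enabled by the two VLMs for
    adjacent time points), and one VCM enabled by every VTM."""
    vlms = [f"VLM-{i}" for i in range(length)]
    vtms = [f"VTM-{i}" for i in range(length - 1)]
    enables = [(vlms[i], vtms[i]) for i in range(length - 1)]
    enables += [(vlms[i + 1], vtms[i]) for i in range(length - 1)]
    enables += [(vtm, "VCM") for vtm in vtms]
    return {"methods": vlms + vtms + ["VCM"], "enables": enables}

tg = dsn_task_group(3)
print(len(tg["methods"]), len(tg["enables"]))  # 6 6: a length-3 track has
                                               # 3 VLMs, 2 VTMs, 1 VCM
```

Generating structures like this parametrically, rather than by hand, is what lets the generative level produce whole families of episodes instead of single-instance examples.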
Furthermore, only the agent that senses the data can execute the associated VLM, but any agent can execute VTMs and VCMs if the appropriate enablement conditions are met. Figure 1 displays this particular problem-solving episode. To the description above, we have added the fact that agent B has a faulty sensor (the durations of the grayed methods will be longer than normal); we will explore the implications of this after we have discussed the subjective level of the framework in the next section. An assumption made in [Durfee et al., 1987] is that redundant work is not generally useful; this is indicated by using max as the combination function for each agent's redundant methods. We could alter this assumption by simply changing this function (to mean, for example). Another characteristic that appeared often in the DVMT literature is the sharing of results between methods (at a single agent); we would indicate this by the presence of a sharing relationship (similar to facilitates) between each pair of normal and level-hopping VLMs. Sharing of results could be only one-way between methods. Now we will add two final features that make this model more like the DVMT. First, low-quality results tend to make things harder to process at higher levels. For example, the impact of using the level-hopping VLM is not just that its quality is lower, but also that it affects the quality and duration of the VTM it enables (because not enough possible solutions are eliminated). To model this, we will use the precedence relationship (a combination of enables and hinders) instead of just enables: not only do the VLM methods enable the VTM, but they can also hinder its execution if the enabling results are of low quality.
Secondly, the first VLM execution provides information that slightly shortens the executions of other VLMs in the same vehicle track (because the sensors have been properly configured with the correct signal processing algorithm parameters with which to sense that particular vehicle). A similar facilitation effect occurs at the tracking level. These effects occur both locally and when results are shared between agents; in fact, this effect is very important in motivating the agent behavior where one agent sends preliminary results to another agent with bad sensor data to help the receiving agent in disambiguating that data [Durfee and Lesser, 1991].

Figure 1: Objective task structure associated with two agents. (Legend: methods (executable tasks), faulty-sensor methods, quality accrual functions, subtask relationships, and enables relationships.)

Figure 2 repeats the objective task structure from the previous figure, but omits the methods for clarity. Two new tasks have been added to model facilitation at the vehicle location and vehicle track level.[7] TVL indicates the highest quality initial work that has been done at the vehicle level, and thus uses the quality accrual function maximum. TVT indicates the progress on the full track; it uses summation as its quality accrual function. The more tracking methods are executed, the easier the remaining ones become. The implication of this model in a multi-agent episode, then, is that the question becomes when to communicate partial results to another agent: the longer an agent delays communication, the more the potential impact on the other agent, but the more the other agent must delay. We examined this question somewhat in [Decker and Lesser, 1991]. At the end of the next section, we will return to this example and add to it subjective features: what information is available to agents, when, and at what cost.
Subjective Level

The purpose of a subjective level model of an environment is to describe what portions of the objective model of the situation are available to 'agents'. It answers questions such as "when is a piece of information available," "to whom is it available," and "what is the cost to the agent of that piece of information". This is a description of how agents might interact with their environment, and what options are available to them. To build such a description we must introduce the concept of agency into the model. Ours is one of the few comprehensive descriptions of computational task environments, but there are many formal and informal descriptions of the concept of agency (see [Gasser, 1991; Hewitt, 1991]). Rather than add our own description, we notice that these formulations define the notion of computation at one or more agents, not the environment that the agents are part of. Most formulations contain a notion of belief that can be applied to our concept of "what an agent believes about its environment". Our view is that an "agent" is a locus of belief and action (such as computation). The form of the rest of this section is as follows: how does the environment affect the beliefs of the agents; how do the beliefs of agents affect their actions; and how do the actions affect the environment.

[7] Note that these tasks were added to make the model more expressive; they are not associated with new methods.

Agent beliefs. We use the symbol Γ_A^t to denote the set of beliefs of agent A at time t. A subjective mapping of an objective problem solving situation O is a function φ : [A × O] ↦ Γ_A from an agent and objective assertions to the beliefs of an agent. For example, we could define a mapping φ where each agent has a probability p of believing that the maximum quality of a method is the objective value, and a probability 1 − p of believing the maximum quality is twice the objective value.
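The example subjective mapping just given can be sketched directly; a minimal sketch in which the function and variable names are ours.

```python
import random

def subjective_quality(objective_q, p, rng):
    """The example mapping from the text: with probability p the agent
    believes the objective maximum quality of a method; otherwise it
    believes twice the objective value."""
    return objective_q if rng.random() < p else 2 * objective_q

rng = random.Random(0)  # fixed seed so the sketch is reproducible
beliefs = [subjective_quality(4.0, 0.8, rng) for _ in range(1000)]
print(sorted(set(beliefs)))  # [4.0, 8.0]: the two possible belief values
```

Because agents act only on such subjective beliefs while performance is measured against the objective values, this mapping is exactly where a control algorithm's sensitivity to uncertainty enters the model.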
Any objective assertion has some subjective mapping, including q (maximum quality of a method), d (duration of a method), deadlines, and the relations subtask, de, and comm.

Control. The beliefs of an agent affect its actions through some control mechanism. Since this is the focus of most of our and others' research on local scheduling, coordination, and other control issues, we will not discuss it further. The agent's control mechanism uses the agent's current set of beliefs Γ_A to update three special subsets of these beliefs (alternatively, commitments [Shoham, 1991]) identified as the sets of information gathering, communication, and method execution actions to be computed.

[Figure 2: Non-local effects in the objective task structure.]

Computation. TÆMS can support parallel computation, but for brevity we will just describe single-processor computation as a sequence of agent states. Agent A's current state is uniquely specified by Γ_A. We provide a meta-structure for the agent's state-transition function that is divided into the following four parts: control, information gathering, communication, and method execution. First the control mechanism asserts (commits to) information-gathering, communication, and method execution actions, and then these actions are computed one at a time.

Method Execution. How do the actions of an agent affect the environment? Both the objective environment (i.e., the quality of the executing method) and the subjective mapping (i.e., what information is available via φ) can be affected. We use two execution models: simple method execution, and execution with monitoring, suspension, and preemption. These follow from the discussion of Q_DO and Q_lin earlier, and are simple state-based models.
Basically, for non-interruptible, single-processor method executions, the agent enters a method execution state for method M at time S(M) and remains in that state until the time t when t - S(M) = d(M, t). Method execution actions are similar to what Shoham terms 'private actions' like DO [Shoham, 1991].

Communication. How do the actions of an agent affect other agents? Communication actions allow agents to affect each other's beliefs to a limited extent. Many people have worked on formalizing aspects of communication using ideas such as speech acts. The semantics of communication actions can be freely defined for each environment; most work so far has used speech-act classes of communicative acts, such as Shoham's INFORM and REQUEST. What happens when a communication is 'received'? The reception of information may trigger a non-local effect as we described earlier, and may influence the behavior of an agent as specified by its control algorithm.

Information Gathering. An information gathering action trades off computational resources (time that could be spent executing methods) for information about the environment. For example, one useful information gathering action is one that queries the environment about the arrival of new tasks or task groups. Another information gathering action causes any communications that have arrived at an agent to be 'received' (added to the agent's set of beliefs). Both communication and information gathering actions take some period of time to execute, as specified in the model.

Subjective Modeling Example

Let's return to the example we began earlier to demonstrate how adding a subjective level to the model allows us to represent the effects of faulty sensors in the DVMT. We will define the default subjective mapping to simply return the objective value, i.e., agents will believe the true objective quality and duration of methods and their local and non-local effects.
We then alter this default for the case of faulty (i.e., noisy) sensors: an agent with a faulty sensor will not initially realize it[8] (d(faulty-VLM) = 2d(VLM), but φ(A, d(faulty-VLM)) = d(VLM)). Other subjective level artifacts that are seen in [Durfee et al., 1987] and other DVMT work can also be modeled easily in our framework. For example, 'noise' can be viewed as VLM methods that are subjectively believed to have a non-zero maximum quality (φ(A, q(noise-VLM)) > 0) but in fact have 0 objective maximum quality, which the agent does not discover until after the method is executed. The strength with which initial data is sensed can be modeled by lowering the subjectively perceived value of the maximum quality q for weakly sensed data. The infamous 'ghost track' is a subjectively complete task group appearing to an agent as an actual vehicle track, which subjectively accrues quality until the hapless agent executes the VCM method, at which point the true (zero) quality becomes known. If the track (subjectively) spans multiple agents' sensor regions, the agent can potentially identify the chimeric track through communication with the other agents, which may have no belief in such a track (but sometimes more than one agent suffers the same delusion).

[8] At this point, one should be imagining an agent controller for this environment that notices when a VLM method takes unusually long, realizes that the sensor is faulty, and replans accordingly.

Space precludes a detailed discussion of the generative level. By using the objective and subjective levels of TÆMS we can model any individual situation; adding a generative level to the model allows us to go beyond that and determine what the expected performance of an algorithm is over a long period of time and many individual problem solving episodes.
Our previous work has created generative models of task interarrival times (exponential distribution), amount of work in a task cluster (Poisson), task durations (exponential), and the likelihood of a particular non-local effect between two tasks [Decker and Lesser, 1993a; Decker and Lesser, 1991; Garvey and Lesser, 1993].

Using the Framework

We have used this model to develop expressions for the expected value of, and confidence intervals on, the time of termination of a set of agents in any arbitrary simple DSN environment that has a static organizational structure and coordination algorithm [Decker and Lesser, 1993a]. We have also done the same for a dynamic, one-shot reorganization algorithm (and have shown when the extra overhead is worthwhile versus the static algorithm) [Decker and Lesser, 1993b] (in this volume). In each case we can predict the effects of adding more agents, changing the relative cost of communication and computation, and changing how the agents are organized. These results were achieved by direct mathematical analysis of the model (a combination of algorithm analysis and derivation of the probability distributions for important parameters) and verified through simulation in TÆMS.

Simulation is also a useful tool for learning parameters to control algorithms, for quickly exploring the behavior space of a new control algorithm, and for conducting controlled, repeatable experiments when direct mathematical analysis is unwarranted or too complex. The simulation system we have built for the direct execution of models in the TÆMS framework supports, for example, the collection of paired response data, where different or ablated coordination or local scheduling algorithms can be compared on identical instances of a wide variety of situations (generated using the generative level of the model).
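A minimal sketch of how a seeded generative level supports such paired-response comparisons (the distribution families follow the text; all parameter values and names are our assumptions):

```python
import math
import random

# Hypothetical sketch: a seeded generative level yields identical problem
# instances, so two coordination algorithms can be compared on the same episode.
def generate_episode(seed, n_tasks=5, mean_interarrival=10.0, mean_cluster=3.0):
    rng = random.Random(seed)
    t, episode = 0.0, []
    for _ in range(n_tasks):
        t += rng.expovariate(1.0 / mean_interarrival)  # exponential interarrivals
        # Poisson-distributed amount of work in the task cluster (Knuth's method)
        k, prod, threshold = 0, 1.0, math.exp(-mean_cluster)
        while prod > threshold:
            k += 1
            prod *= rng.random()
        episode.append((round(t, 3), max(1, k - 1)))  # at least one task's worth of work
    return episode

# paired response: both algorithm runs can be handed exactly the same instance
assert generate_episode(42) == generate_episode(42)
assert generate_episode(42) != generate_episode(43)
```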
We have used simulation to explore the effect of exploiting the presence of facilitation between tasks in a multi-agent real-time environment (no quality is accrued after a task's deadline) [Decker and Lesser, 1991]. The environmental characteristics here included the mean interarrival time for tasks, the likelihood of one task facilitating another, and the strength of the facilitation. The TÆMS framework is not limited to experimentation in distributed problem solving. In [Garvey and Lesser, 1993], Garvey and Lesser used the framework to describe the effects of various task environment and agent design features on the performance of their real-time 'design-to-time' algorithm. They show that monitoring does provide a reduction in missed deadlines, but that this reduction may be significant only during 'medium' loads. Garvey is now using a more complex model of enabling and hindering task structures to design an optimal design-to-time algorithm for certain task environments [Garvey et al., 1993] (in this volume).

Related Work

This paper has presented TÆMS, a framework for modeling computationally intensive task environments. TÆMS exists as both a language for stating general hypotheses or theories and as a system for simulation. The important features of TÆMS include its layered description of environments (objective reality, subjective mapping to agent beliefs, generative description of the other levels across single instances); its acceptance of any performance criteria (based on temporal location and quality of task executions); and its non-agent-centered point of view that can be used by researchers working in either formal systems of mental-state-induced behavior or experimental methodologies.
TÆMS provides environmental and behavioral structures and features with which to state and test theories about the control of agents in complex computational domains, such as how decisions made in scheduling one task will affect the utility and performance characteristics of other tasks. The basic form of the computational task environment framework, the execution of interrelated computational tasks, is taken from several domain environment simulators [Carver and Lesser, 1991; Cohen et al., 1989; Durfee et al., 1987]. If this were the only impetus, the result might have been a simulator like Tileworld [Pollack and Ringuette, 1990]. However, formal research into multi-agent problem solving has been productive in specifying formal properties, and sometimes algorithms, for the control process by which the mental state of agents (termed variously: beliefs, desires, goals, intentions, etc.) causes the agents to perform particular actions [Cohen and Levesque, 1990; Shoham, 1991; Zlotkin and Rosenschein, 1991]. This research has helped to circumscribe the behaviors or actions that agents can produce based on their knowledge or beliefs. The final influence on TÆMS was the desire to avoid the individualistic agent-centered approaches that characterize most AI (which may be fine) and DAI (which may not be so fine). The concept of agency in TÆMS is based on simple notions of execution, communication, and information gathering. An agent is a locus of belief (state) and action. By separating the notion of agency from the model of task environments, we do not have to subscribe to particular agent architectures (which one would assume will be adapted to the task environment at hand), and we may ask questions about the inherent social nature of the task environment at hand (allowing that the concept of society may arise before the concept of individual agents).
The TÆMS simulator supports the graphical display of generated subjective and objective task structures, agent actions, and statistical data collection in CLOS on the TI Explorer. It is being used not only for research into the coordination of distributed problem solvers [Decker and Lesser, 1993a; Decker and Lesser, 1991; Decker and Lesser, 1992], but also for research into real-time scheduling of a single agent [Garvey and Lesser, 1993; Garvey et al., 1993], scheduling at an agent with parallel processing resources available, and soon, learning coordination algorithm parameters. TÆMS does not at this time automatically learn models or automatically verify them. While we have taken initial steps at designing a methodology for verification (see [Decker and Lesser, 1993a]), this is still an open area of research [Cohen, 1991]. Our future work will include building new models of different environments that may include physical resource constraints, such as airport resource scheduling. The existing framework may have to be extended somewhat to handle consumable resources. Other extensions we envision include specifying dynamic objective models that change structure as the result of agent actions. We also wish to expand our analyses beyond the questions of scheduling and coordination to questions about negotiation strategies, emergent agent/society behavior, and organizational self-design.

Acknowledgments

Thanks to Dan Neiman and Alan Garvey for their comments on earlier versions of this paper.

References

Boddy, M. and Dean, T. 1989. Solving time-dependent planning problems. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence.

Carver, N. and Lesser, V. R. 1991. A new framework for sensor interpretation: Planning to resolve sources of uncertainty. In Proceedings of the Ninth National Conference on Artificial Intelligence. 724-731.

Cohen, Philip R. and Levesque, H. J. 1990.
Intention is choice with commitment. Artificial Intelligence 42(3).

Cohen, Paul R.; Greenberg, M.; Hart, D.; and Howe, A. 1989. Trial by fire: Understanding the design requirements for agents in complex environments. AI Magazine 10(3):33-48. Also COINS-TR-89-61.

Cohen, Paul R. 1991. A survey of the eighth national conference on artificial intelligence: Pulling together or pulling apart? AI Magazine 12(1).

Decker, K. S. and Lesser, V. R. 1991. Analyzing a quantitative coordination relationship. COINS Technical Report 91-83, University of Massachusetts. To appear in the journal Group Decision and Negotiation, 1993.

Decker, K. S. and Lesser, V. R. 1992. Generalizing the partial global planning algorithm. International Journal of Intelligent and Cooperative Information Systems 1(2).

Decker, K. S. and Lesser, V. R. 1993a. An approach to analyzing the need for meta-level communication. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, Chambery.

Decker, K. S. and Lesser, V. R. 1993b. A one-shot dynamic coordination algorithm for distributed sensor networks. In Proceedings of the Eleventh National Conference on Artificial Intelligence, Washington.

Decker, K. S.; Lesser, V. R.; and Whitehair, R. C. 1990. Extending a blackboard architecture for approximate processing. The Journal of Real-Time Systems 2(1/2):47-79.

Decker, K. S.; Garvey, A. J.; Humphrey, M. A.; and Lesser, V. R. 1993. A real-time control architecture for an approximate processing blackboard system. International Journal of Pattern Recognition and Artificial Intelligence 7(2).

Durfee, E. H. and Lesser, V. R. 1991. Partial global planning: A coordination framework for distributed hypothesis formation. IEEE Transactions on Systems, Man, and Cybernetics 21(5):1167-1183.

Durfee, E. H.; Lesser, V. R.; and Corkill, D. D. 1987. Coherent cooperation among communicating problem solvers. IEEE Transactions on Computers 36(11):1275-1291.

Garvey, A.
and Lesser, V. R. 1993. Design-to-time real-time scheduling. IEEE Transactions on Systems, Man, and Cybernetics 23(6). Special Issue on Scheduling, Planning, and Control.

Garvey, A.; Humphrey, M.; and Lesser, V. R. 1993. Task interdependencies in design-to-time real-time scheduling. In Proceedings of the Eleventh National Conference on Artificial Intelligence, Washington.

Gasser, L. 1991. Social conceptions of knowledge and action. Artificial Intelligence 47(1):107-138.

Hewitt, C. 1991. Open information systems semantics for distributed artificial intelligence. Artificial Intelligence 47(1):79-106.

Liu, J. W. S.; Lin, K. J.; Shih, W. K.; Yu, A. C.; Chung, J. Y.; and Zhao, W. 1991. Algorithms for scheduling imprecise computations. IEEE Computer 24(5):58-68.

Pollack, M. E. and Ringuette, M. 1990. Introducing Tileworld: Experimentally evaluating agent architectures. In Proceedings of the Eighth National Conference on Artificial Intelligence. 183-189.

Russell, S. J. and Zilberstein, S. 1991. Composing real-time systems. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, Sydney, Australia. 212-217.

Shoham, Y. 1991. AGENT0: A simple agent language and its interpreter. In Proceedings of the Ninth National Conference on Artificial Intelligence, Anaheim. 704-709.

Zlotkin, G. and Rosenschein, J. S. 1991. Incomplete information and deception in multi-agent negotiation. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, Sydney, Australia. 225-231.
University of Michigan
Ann Arbor, MI 48109
durfee/jaeho@engin.umich.edu

Abstract

A rational agent in a multiagent world must decide on its actions based on the decisions it expects others to make, but it might believe that they in turn might be basing decisions on what they believe the initial agent will decide. Such reciprocal rationality leads to a nesting of models that can potentially become intractable. To solve such problems, game theory has developed techniques for discovering rational, equilibrium solutions, and AI has developed computational, recursive methods. These different approaches can involve different solution concepts. For example, the Recursive Modeling Method (RMM) finds different solutions than game-theoretic methods when solving problems that require mixed-strategy equilibrium solutions. In this paper, we show that a crucial difference between the approaches is that RMM employs a solution concept that is overeager. This eagerness can be reduced by introducing into RMM second-order knowledge about what it knows, in the form of a flexible function for mapping relative expected utility of an option into the probability that the agent will pursue that option. This modified solution concept can allow RMM to derive the same mixed equilibrium solutions as game theory, and thus helps us delineate the types of knowledge that lead to alternative solution concepts.

Introduction

Rational decisionmaking in a multiagent context is a difficult problem when an agent has knowledge that leads it to view other agents as being rational as well.
With such reciprocal rationality, an agent must decide on its action(s) given what it believes the rational agents around it will do, but inferring that requires it to infer what each of those agents will believe it (and the others) will do, which in turn requires it to assess what each of those agents will believe about what each of the agents will believe the others will do, and so on. Reciprocal rationality can thus lead to an indefinite (and theoretically infinite) number of levels of recursive modeling. In the Recursive Modeling Method (RMM), for example, we have developed a procedure whereby agents can build this recursive nesting of models and can use the resultant modeling hierarchy to infer rational courses of action for others and for themselves [Gmytrasiewicz et al., 1991]. RMM provides a rigorous foundation for such decisionmaking situations, based on RMM's concept of a solution. However, as we show in this paper, RMM's original formulation leads to decisions that differ from those that would be derived by traditional game-theoretic methods, because those methods employ a different solution concept.

In addition, the original solution concept in RMM could lead to cases where the algorithm that uses the recursive models could arrive at different results depending on the number of recursive levels examined. In our original RMM, we described how such behavior must eventually become cyclic given finite knowledge, and that when such a cycle occurs RMM can probabilistically mix the results to arrive at a single, overall decision.

Jerusalem, Israel
piotr@cs.huji.ac.il

*This research was sponsored in part by the NSF under grants IRI-9015423 and IRI-9158473, by DARPA under contract DAAE-07-92-C-R012, and by the Golda Meir Fellowship and the Alfassa Fund administered by the Hebrew University of Jerusalem.
While this is the best that original RMM can do, the question arises as to whether a different solution concept could avoid such cyclic behavior, and could converge more naturally on a single solution. Moreover, the assumption that RMM must run out of knowledge at some finite level might be overly restrictive for some applications, and a solution concept that clearly defines the behavior of RMM in the limit of infinity would be desirable.

In this paper, we suggest that these characteristics of RMM stem from the fact that its solution concept leads to what we call "overeager reciprocal rationality." In a nutshell, our argument is that rationality based on expected payoffs given probabilistic models of what others will do should be tempered by the degree of confidence in those models. We describe one way of introducing this additional knowledge into RMM by using a more flexible function for generating those probabilistic models. Not only can this avoid overeager rationality that leads to oscillations, but in fact it can make RMM's results converge in the limit of infinite recursion.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

More broadly, however, the contributions of this paper extend beyond RMM to examine the nature of game-theoretic rationality as employed in multiagent reasoning systems [Rosenschein and Breese, 1989; Rosenschein and Genesereth, 1985; Rosenschein et al., 1986; Zlotkin and Rosenschein, 1991]. In particular, the concepts of equilibria and mixed strategies play a central (although often debated) role in the game-theoretic literature [Shubik, 1982]. In this paper, we show how the recursive algorithm that RMM employs to model reciprocal rationality and the more traditional equilibria solutions can converge, given a particular choice of solution concept.
We begin by defining the game-theoretic notions of equilibria and mixed strategies, showing how rationality is embodied in these models. We then look at original RMM's solution to reciprocal rationality problems and its characteristics. Then we suggest a less eager rationality assumption, embodied in a function for computing probabilities over agents' moves, and describe how introducing it into the original RMM formulation avoids overeager rationality. We conclude by analyzing the performance of our approach and highlighting important open problems.

Game Theoretic Equilibria

Game theoreticians have developed a number of techniques for determining rational combinations of moves for games represented in normal (or strategic) form, where each combination of strategies chosen by the players leads to a particular payoff for each player. The most common solution method involves using (iterated) dominance techniques [Rasmusen, 1989; Shubik, 1982], where players remove from consideration any moves that are clearly inferior (dominated) no matter which of the non-dominated moves others take. By alternating between the players' viewpoints, the number of rational moves can be decreased; in the case where there is a single rational strategy for each player, it can be found. The combination of rational strategies for the players represents an equilibrium solution, since none of the players has an incentive to deviate unilaterally from its own strategy.

In cases where there are multiple equilibrium moves, converging on a single move (a pure strategy) for each player is more complicated. For example, consider the game summarized in Figure 1a, where each combination of moves leads to a matrix entry giving a payoff for P (lower left) and Q (upper right). In this game, there are two equally good moves for both P and Q. They could each maximize their payoff at 2 by either choosing options ad or bc. The trouble is, which of these will they choose?
[Figure 1: Matrices for example games. (a) A game with two pure equilibrium solutions: the payoffs (P, Q) are (0, 0) for ac, (2, 2) for ad, (2, 2) for bc, and (1, 1) for bd. (b) A game with no pure equilibrium solutions.]

One way would be to have P and Q communicate and agree on one of the two possible combinations. Or if they shared common knowledge about how to break such ties, they could employ that knowledge here. But more generally, P and Q cannot be assured of converging for certain on one joint course of action. Game theory says that a mixed strategy equilibrium might thus be a useful solution concept here, where agents adopt probabilistic combinations of the separate pure strategies. One possible derivation for such a mixed strategy, patterned after [Rasmusen, 1989], is the following: Assume that P and Q will adopt mixed strategies (p_a p_b) and (p_c p_d), respectively.[1] The expected payoff of P (recall that p_a + p_b = p_c + p_d = 1) is then:

E[Payoff_P] = 2 p_a p_d + 2 p_b p_c + p_b p_d = p_a + p_c - 3 p_a p_c + 1.

Differentiating the above with respect to p_a, and postulating that the result be zero (for the maximization problem), allows us to conclude that in the mixed strategy equilibrium Q must select move c with probability p_c = 1/3. That is, it is assumed that a mixed strategy for P is optimal, and P will only adopt a mixed strategy if p_c = 1/3 (if higher, P will always choose b, and lower leads to a). By the same arguments, the strategy for player P is p_a = 1/3. So, with p_a = p_c = 1/3, P and Q would expect payoffs of 4/3.

Mixed strategies also play a role in games with no pure equilibrium solutions, such as the game in Figure 1b. In this game, the mixed strategy equilibrium solution has player P choose a with probability 1/2 and b with probability 1/2, while player Q chooses c with probability 1/3 and d with probability 2/3 [Brandenburger, 1992].
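The derivation can be checked numerically; below is a minimal sketch in which the Figure 1a payoffs for P are inferred from the expected-payoff expression above:

```python
from fractions import Fraction as F

# Payoffs for P in the (assumed) Figure 1a game, inferred from the text's
# expected-payoff expression 2*pa*pd + 2*pb*pc + pb*pd.
P_PAYOFF = {('a', 'c'): 0, ('a', 'd'): 2, ('b', 'c'): 2, ('b', 'd'): 1}

def expected_payoff_P(pa, pc):
    pb, pd = 1 - pa, 1 - pc
    prob = {('a', 'c'): pa * pc, ('a', 'd'): pa * pd,
            ('b', 'c'): pb * pc, ('b', 'd'): pb * pd}
    return sum(prob[k] * P_PAYOFF[k] for k in prob)

# matches the simplified form pa + pc - 3*pa*pc + 1 on a grid of strategies
for i in range(5):
    for j in range(5):
        pa, pc = F(i, 4), F(j, 4)
        assert expected_payoff_P(pa, pc) == pa + pc - 3 * pa * pc + 1

# at pc = 1/3, P is indifferent between its pure strategies (as required for
# P to mix), and both yield the equilibrium payoff 4/3
assert expected_payoff_P(F(1), F(1, 3)) == F(4, 3)
assert expected_payoff_P(F(0), F(1, 3)) == F(4, 3)
```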
As Brandenburger points out, the concept of mixed strategy, where agents choose probabilistically among possibilities, is somewhat troubling in this kind of game because, for example, if Q believes that P is playing the expected mixed strategy of (1/2 1/2), then there is no incentive for Q to play his mixed strategy of (1/3 2/3) over any other mixed (or pure) strategy. Brandenburger cites a history of work involved in viewing mixed strategies not as probabilistic strategy selection by agents playing a game, but instead as probabilistic conjectures that agents have about the pure strategies of others. It is this viewpoint that RMM takes as well.

[1] In the notation in the rest of this paper, a mixture over strategies, represented as (p_a p_b), indicates that the option listed first in the matrix has probability p_a, and the option listed second has probability p_b.

Recursive Modeling Method

Analyses of normal form games, as described above, can employ various methods, including iterated dominance and adopting assumptions about the agents and solving for optimal mixed strategies. Different analyses might use different solution concepts, and thus (as seen above) different decisions can be rational in the context of different solution concepts. In the Recursive Modeling Method (RMM), our goal has been to develop a general, algorithmic solution to the problem of recursive reciprocal reasoning that generates the recursion explicitly.

In RMM, one agent represents how it thinks another sees the situation by hypothesizing the game matrix (or matrices) that the other agent is seeing. It can also hypothesize the matrices that the other agent might think the first agent sees, and so on, building a hierarchy of models representing, to increasing depths, the view of how the agent thinks that agents think that agents think ... that agents perceive the situation.
Beginning at the leaf node(s) of this hierarchy, probabilities over agents' choices from among their options can be propagated upward until, at the top, the agent trying to decide what to do has a more informed model of what others are likely to do. Note that RMM works by assigning an equiprobability distribution among the options at the leaf nodes, corresponding not to a belief that an agent will flip a coin to decide on its strategy, but rather to the fact that it does not know how the agent will decide on its strategy, leading to the equiprobable distribution that contains no information. RMM then chooses the option(s) with the maximum expected utility at each successive level above.

RMM helps agents converge on informed rational choices in many situations. For example, in situations with single equilibrium solutions, RMM easily converges on solutions computed with iterated dominance. However, in situations without such clear solutions, RMM often must probabilistically mix solutions. Recall the example with two pure equilibrium solutions (Figure 1a). In RMM, the tugging between two solutions leads to oscillating views from level to level. From P's perspective: If he does not consider what Q will prefer to do, then he will choose option b since it has the higher average payoff. If he considers Q but does not consider how Q considers P, then he will infer that Q will choose option d (highest average) and so P should choose a. If he considers Q and how Q considers P but no deeper, then he will infer that Q will think P will choose b, so that Q will choose c, so that P should in fact choose b.
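This level-by-level reasoning can be sketched directly (a minimal sketch; the Figure 1a payoff entries are inferred from the text, and the game is treated as symmetric):

```python
# Original-RMM best responses for the (assumed) Figure 1a payoffs. Keys are
# (P_move, Q_move); the example game is symmetric, so Q's payoffs mirror P's.
P_PAYOFF = {('a', 'c'): 0, ('a', 'd'): 2, ('b', 'c'): 2, ('b', 'd'): 1}
Q_PAYOFF = dict(P_PAYOFF)

def best_response(payoff, my_moves, other_dist, mine_first):
    def eu(m):
        if mine_first:
            return sum(q * payoff[(m, o)] for o, q in other_dist.items())
        return sum(q * payoff[(o, m)] for o, q in other_dist.items())
    best = max(my_moves, key=eu)
    return {m: 1.0 if m == best else 0.0 for m in my_moves}

def rmm_P(depth):
    """P's choice after elaborating the recursion `depth` times."""
    q = {'c': 0.5, 'd': 0.5} if depth == 0 else rmm_Q(depth - 1)
    return best_response(P_PAYOFF, ('a', 'b'), q, mine_first=True)

def rmm_Q(depth):
    p = {'a': 0.5, 'b': 0.5} if depth == 0 else rmm_P(depth - 1)
    return best_response(Q_PAYOFF, ('c', 'd'), p, mine_first=False)

# even numbers of elaborations prefer b, odd prefer a, exactly as described
assert rmm_P(0)['b'] == 1.0 and rmm_P(2)['b'] == 1.0
assert rmm_P(1)['a'] == 1.0 and rmm_P(3)['a'] == 1.0
```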
This oscillation between P deciding on b and then a continues no matter how deep in the hierarchy P searches: If P elaborates the recursion an even number of times, he will prefer b (and expect a payoff of 2 since he expects Q to take move c), while he will prefer a (and again expect a payoff of 2) if he elaborates the recursion an odd number of times.

How do we reconcile this oscillation? What RMM does is to probabilistically mix the derived expectations, and to work from there. So in the example above, when P runs RMM, it will recognize that half the time it expects Q to take action c, and half the time it expects Q to take d. It ascribes a strategy of (1/2 1/2) to Q, and determines that, if Q is equally likely to take either action, P should take action b (for an expected payoff of 1.5) rather than action a (which has an expected payoff of 1). Q, when it runs RMM, will go through the same reasoning to decide that it should take action d. Thus, each agent will have decided on a single action, and expect a payoff of 1.5; as external observers, however, we can clearly see that their true payoffs will really be 1. Had they instead each derived the mixed strategy of (1/3 2/3), however, we know that they could each expect a payoff of 1.33. Our derivation of this mixed strategy in the previous section assumed additional knowledge that allowed an agent to adopt a mixed strategy based on the understanding that the other agent would be adopting a mixed strategy as well. As we show below, we can incorporate additional knowledge within RMM to change its solution concept such that agents using RMM can derive this mixed strategy.

First, however, let us also revisit the case with no equilibrium solutions (Figure 1b). Here, the results of RMM for successive depths of the recursion will cycle among the four possible joint moves indefinitely. P would believe that Q is equally likely to take action c or d, and so P would choose action a with an expected payoff of 1.
Q, on the other hand, would see P as equally likely to take action a or b, and so Q would be indifferent among its choices, adopting a mixed strategy of (1/2 1/2). This differs from the mixed strategies of (1/2 1/2) for P and (1/3 2/3) for Q derived game-theoretically in the previous section.

In summary, RMM's decisions mirror the game-theoretic solutions when the choice is clear, but when several choices are equally rational depending on where one starts in the hierarchy, then RMM treats the possible strategies conjectured for other agents as equally likely. This sometimes leads RMM to adopt strategies that differ from game-theoretic solutions, as we have seen. As we explain next, the reason why RMM reaches different conclusions than game theory is that RMM's solution concept does not consider second-order knowledge, and thus RMM's rationality is overeager.

[Figure 2: Functions mapping relative expected payoff to probability: (a) the flat function, (b) the step function, (c) the more general function.]

Overeager Reciprocal Rationality

We argue here that RMM's solution concept commits too strongly and quickly to its conjectures, the probabilistic mixtures over options. At the leaves, RMM assigns an equiprobable distribution over the options of the other agent(s). Given this distribution, however, RMM immediately considers desirable only the options that will maximize expected payoffs based on this equiprobable distribution. Because the initial distribution was based on ignorance, it seems premature to place such weight in it, given the opportunity to bring more knowledge to bear at the higher levels of the hierarchy. In other words, RMM applies the utility maximization concept of rationality too eagerly.

Let us look at the probability calculations given relative expected payoffs, graphed for the simple case of two options in Figure 2.
At the leaf nodes, we have a flat probability function (Figure 2a), meaning that all options have equal probability because RMM has no knowledge about the relative expected payoffs below the leaf nodes. Above the leaf nodes, we have a step probability function (Figure 2b), which places certainty in the option with the higher expected payoff. Clearly, these are two extremes of a more general mapping function of relative expected payoff to probability.

Consider the more general function to compute the probability of option i, given its expected payoff relative to the payoffs of all of the options J (where all payoffs are assumed non-negative):

    p_i = Payoff(i)^k / Σ_{j∈J} Payoff(j)^k        (1)

In this function, when k is 0 we get the extreme of a flat probability function, while as k approaches ∞, the function approaches the step-function extreme. When k is 1, the function specifies a linear relationship between payoffs and probabilities; that is, if option a has twice the payoff of option b, for example, then it is twice as likely to be chosen. The function is graphed for the simple case with two options in Figure 2c.

This function can be incorporated simply into the RMM calculations, provided we can specify a choice for k. The choice of k represents knowledge, and, more importantly, how k changes at different depths of the hierarchy corresponds to second-order knowledge about probabilities at each level. In the original RMM, k was implicitly specified as 0 for computing the probability distribution feeding into the leaves of the hierarchy, and ∞ at all levels above. This abrupt change, however, is what we claim makes the original solution concept in RMM overeager. Instead, k should be 0 at the leaves and become successively larger as one moves up the hierarchy, because k represents knowledge about the certainty RMM should place in conjectures about strategies.
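The mapping of equation (1) can be sketched directly. The function name and the example payoffs below are our own illustration, not from the paper:

```python
def option_probabilities(payoffs, k):
    """Equation (1): p_i = Payoff(i)^k / sum_j Payoff(j)^k,
    for non-negative payoffs."""
    weights = [p ** k for p in payoffs]
    total = sum(weights)
    return [w / total for w in weights]

# k = 0 gives the flat (ignorance) extreme; k = 1 is linear in payoff;
# large k approaches the step function that commits to the best option.
flat = option_probabilities([1.0, 2.0], 0)      # [0.5, 0.5]
linear = option_probabilities([1.0, 2.0], 1)    # [1/3, 2/3]
step = option_probabilities([1.0, 2.0], 50)     # close to [0, 1]
```

At k = 1, an option with twice the payoff is twice as likely to be chosen, matching the linear case described in the text.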
Toward the leaves, RMM should only lean slightly toward favoring options that do well given the conjectures at those levels, because those conjectures are based on ignorance. As we work toward the root of the hierarchy, the probabilistic conjectures are based on more and more information (propagated from the levels below), and thus RMM should commit more heavily at these levels. We can think of this approach as a variant of simulated annealing, where early in the process (near the leaves) the algorithm biases the search for rational strategies slightly but still keeps its search options open. As it moves up the hierarchy, the algorithm becomes increasingly committed to better options based on the evidence accumulated from below.

Besides avoiding overeager rationality, this modification also provides a new approach to dealing with the possibility of infinite recursion in RMM. As RMM recurses increasingly deeper, k gets closer and closer to 0, and the influence of deeper levels of knowledge (about what I know about what you know about what I know...) diminishes.
In a practical sense, there will be a finite level at which a computing system will lack sufficient resolution to distinguish deeper levels of the hierarchy, while in a theoretical sense, RMM is well-behaved as it recurses toward infinite levels. Thus, rather than appealing to arguments of finite amounts of knowledge bounding any recursion, we can instead clearly define the behavior of RMM as it moves toward infinite recursive levels.

(defvar *power-reduction-rate* .8)

(defun simple-rmm (matrix1 matrix2 levels &optional (power 1))
  (let* ((column-probs
           (if (= levels 0)
               (make-equiprobability-vector (length (first matrix1)))
               (simple-rmm matrix2 matrix1 (1- levels)
                           (modify-power power levels))))
         (rows-exp-utils
           (mapcar #'(lambda (row)
                       (compute-expected-utility row column-probs))
                   matrix1)))
    (mapcar #'(lambda (utility-for-row)
                (compute-probability utility-for-row rows-exp-utils power))
            rows-exp-utils)))

(defun compute-expected-utility (payoffs probs)
  (cond ((null payoffs) 0)
        (t (+ (* (first payoffs) (first probs))
              (compute-expected-utility (rest payoffs) (rest probs))))))

(defun compute-probability (payoff all-payoffs power)
  (let ((prob
          (if power
              (/ (expt payoff power)
                 (float (let ((current-sum 0))
                          (dolist (a-payoff all-payoffs current-sum)
                            (setf current-sum
                                  (+ current-sum (expt a-payoff power)))))))
              ;; else, nil power means assume original RMM formulation
              (let ((max-payoff (apply #'max all-payoffs)))
                (if (= max-payoff payoff)
                    (float (/ 1 (count max-payoff all-payoffs)))
                    0.0)))))
    (if (<= prob 1.0e-6) 0.0 prob)))

(defun modify-power (power level)  ; this version ignores the level....
  (when power (* power *power-reduction-rate*)))

Figure 3: Code Fragment for Simple RMM Implementation

While it is clear that, in the modified RMM probability calculation, k should approach 0 as more levels are explored, it is less clear what value of k makes sense as we approach the root of the hierarchy.
Driving k toward higher values will cause RMM to "lean" increasingly heavily as it goes upward, until it leans hard enough to commit to a specific choice. This is desirable for problems with single equilibrium points, but, when mixed strategies are most rational, having values of k too high will lead to oscillations just like in the unmodified RMM. The remaining questions, therefore, are whether a proper selection of k can lead to appropriate mixed strategies, and if so, how that selection is done.

Mixed Strategies

The key functions in a much simplified version of RMM, which does not consider possible horizontal branching representing uncertainty about alternative payoff matrices other agents might subscribe to (see [Gmytrasiewicz et al., 1991]), are shown in Figure 3. Note that this example implementation uses a very simple method to change the value of k at successively lower levels of the hierarchy: it multiplies the value of k at the previous level by a constant (less than 1). This approach allows the algorithm to asymptotically approach 0 toward the leaves, assuming sufficient recursive levels and an initial value of k that is not too large.

To see how modifying RMM affects its performance, we begin with the example having two equilibria in Figure 1a. Our game-theoretic analysis determined the optimal mixed strategy would be for P and Q to each select its first option with probability 1/3, and its second option with probability 2/3. Recall, however, that this analysis was based on knowledge that P and Q both assume that the other was adopting a mixed strategy. The modified RMM algorithm outlined in Figure 3 does not assume this knowledge. In Figure 4 we show the probability that P (and Q, since the game is symmetric) assigns to its first option derived by modified RMM for runs involving 100 levels of recursion on incrementally larger values of k. The probability of the second option is simply 1 minus that of the first option.
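This behavior can be reproduced in miniature. Figure 1a itself is not included in this excerpt, so the payoff matrix below is a reconstruction consistent with the payoffs quoted earlier (two pure equilibria worth 2, an expected payoff of 1.5 for the second option against a (1/2 1/2) opponent, and 4/3 ≈ 1.33 under the (1/3 2/3) mixed strategy); a Python transcription of the Lisp in Figure 3 might look like:

```python
REDUCTION_RATE = 0.8  # multiplies k at each deeper level, as in Figure 3

def simple_rmm(my_matrix, other_matrix, levels, k=1.0):
    """Modified RMM: returns my probability distribution over options.
    my_matrix[i][j] = my payoff for my option i against the other's option j."""
    n_other = len(my_matrix[0])
    if levels == 0:
        other_probs = [1.0 / n_other] * n_other  # leaf: pure ignorance
    else:
        other_probs = simple_rmm(other_matrix, my_matrix,
                                 levels - 1, k * REDUCTION_RATE)
    utils = [sum(p * u for p, u in zip(other_probs, row)) for row in my_matrix]
    weights = [u ** k for u in utils]            # equation (1)
    total = sum(weights)
    return [w / total for w in weights]

# Reconstructed symmetric game (an assumption; Figure 1a is not shown here):
# payoff(a,c)=0, payoff(a,d)=2, payoff(b,c)=2, payoff(b,d)=1 for each player.
GAME = [[0.0, 2.0], [2.0, 1.0]]

uniform = simple_rmm(GAME, GAME, 100, k=0.0)  # k = 0 throughout: (1/2 1/2)
mixed = simple_rmm(GAME, GAME, 100, k=2.0)    # leans from 1/2 toward 1/3
```

With this reconstructed matrix, k = 0 reproduces the equiprobable strategy, and a moderate root value of k moves the first-option probability from 1/2 toward 1/3, the qualitative behavior reported in Figure 4; the exact trajectory depends on the matrix and the reduction rate, which are assumptions here.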
As expected, when k is 0 throughout, the equiprobable mixed strategy is returned. As k increases, however, note that a player will adopt a mixed strategy that approaches the solution computed game-theoretically as (1/3 2/3). Beyond a certain value of k, however, modified RMM diverges from this solution because its solution concept leans so heavily toward particular options, even if they are only marginally better, forcing the system into the oscillatory behavior seen in the original RMM formulation.

The other problematic case, in which there were no equilibrium solutions (Figure 1b), provides similar results. That is, as k increases, the mixed-strategy solutions derived by RMM approach those derived game-theoretically (P playing (1/2 1/2) and Q (1/3 2/3)), and then begin to diverge and oscillate. The value of k at which convergence ceases differs between this problem and the previous problem, and as yet we have no method to predict ahead of time the value of k that will lead to convergence. Our current formulation thus uses an iterative method: assuming that the new mapping function captures the correct knowledge when modified RMM converges, our formulation increases the value of k until divergence begins.

Figure 4: First Option Probability: 2-Equilibria Game. (Axes: k, from 0 to 25, vs. probability of the first option.)

Conclusions

Our results illustrate how adding additional knowledge about the uncertainty in probabilities over other agents' strategies can lead to a different solution concept, causing reciprocal rationality to be less eager. Under appropriate conditions, this new solution concept can converge on mixed equilibrium solutions that can lead to better actual performance than the solutions derived by the original RMM solution concept.
More importantly, the modified solution concept allows us to bring together the equilibrium solution models common in game-theoretic approaches with the recursive modeling approach embodied in RMM. By including the possibility of representing second-order knowledge about the probabilistic conjectures about strategies, we can implement solution concepts ranging from the sometimes overeager rationality of the original RMM, to the less eager solution concepts that approximate those used in game-theoretic methods, all the way to completely indecisive strategies (when k = 0). More generally, our results in this paper serve to underscore how the results of rational decision making are dependent upon the underlying solution concepts and their associated knowledge, even when following the basic concept of maximizing expected utility.

Much work along these lines remains to be done, however. With regard to the work presented in this paper, a clear next step is to examine more precisely the second-order knowledge captured in the new mapping function, using that understanding to analytically derive parameter settings (values of k) that will lead to the optimal converging mixed-strategy equilibria, and possibly embodying that knowledge more explicitly in the RMM algorithm by propagating probabilities about probabilities through the RMM hierarchy. Beyond this, however, is the ongoing challenge of characterizing the different rational solution concepts, so that developers of rational autonomous systems understand the strengths and limitations of the solution concepts that they implement.

References

Brandenburger, Adam 1992. Knowledge and equilibrium in games. The Journal of Economic Perspectives 6(4):83-101.

Gmytrasiewicz, Piotr J.; Durfee, Edmund H.; and Wehe, David K. 1991. A decision-theoretic approach to coordinating multiagent interactions. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence.

Rasmusen, Eric 1989.
Games and Information, an Introduction to Game Theory. Basil Blackwell.

Rosenschein, Jeffrey S. and Breese, John S. 1989. Communication-free interactions among rational agents: A probabilistic approach. In Gasser, Les and Huhns, Michael N., editors 1989, Distributed Artificial Intelligence, volume 2 of Research Notes in Artificial Intelligence. Pitman. 99-118.

Rosenschein, Jeffrey S. and Genesereth, Michael R. 1985. Deals among rational agents. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, California. 91-99. (Also published in Readings in Distributed Artificial Intelligence, Alan H. Bond and Les Gasser, editors, pages 227-234, Morgan Kaufmann, 1988.)

Rosenschein, Jeffrey S.; Ginsberg, Matthew L.; and Genesereth, Michael R. 1986. Cooperation without communication. In Proceedings of the Fifth National Conference on Artificial Intelligence, Philadelphia, Pennsylvania. 51-57.

Shubik, Martin 1982. Game Theory in the Social Sciences: Concepts and Solutions. MIT Press.

Zlotkin, Gilad and Rosenschein, Jeffrey S. 1991. Cooperation and conflict resolution via negotiation among autonomous agents in non-cooperative domains. IEEE Transactions on Systems, Man, and Cybernetics 21(6). (Special Issue on Distributed AI.)
Tad Hogg and Colin B. Williams
Xerox Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, CA 94304, U.S.A.
Hogg@parc.xerox.com, CWilliams@parc.xerox.com

Abstract

We present and experimentally evaluate the hypothesis that cooperative parallel search is well suited for hard graph coloring problems near a previously identified transition between under- and overconstrained instances. We find that simple cooperative methods can often solve such problems faster than the same number of independent agents.

Many A.I. programs involve search to solve NP-hard problems. While intractable in the worst case, of more relevance to many applications is their behavior in typical situations. In fact, for many classes of such problems, most instances can be solved much more readily than might be expected from a worst-case analysis. This has led to recent studies to identify characteristics of the relatively hard instances. In particular, observations [Cheeseman et al., 1991, Mitchell et al., 1992] and theory [Williams and Hogg, 1992b, Williams and Hogg, 1992a] indicate that constraint-satisfaction search problems with highest cost (fastest growing exponential scaling) occur near the transition from under- to overconstrained problems. These transitions, becoming increasingly sharp as problems are scaled up, are determined by values of easily measured "order" parameters of the problem, and are analogous to physical phase transitions, as in percolation. The transition regions are also characterized by high variance in the solution cost for different problem instances, and for a single instance with respect to different search methods, or for a single nondeterministic method with, e.g., different initial conditions or different tie-breaking choices made when the search heuristic ranks some choices equally. Structurally, these hard problems are characterized by many large partial solutions, which prevent early pruning by many types of heuristics.
This phenomenon is also conjectured to appear in other types of search problems.

Can these observations be exploited in practical search algorithms? One immediate application is to use the order parameters to estimate the difficulty of alternate problem formulations as an aid in deciding which approach to take. Another use is based on the observation of high variance in solution cost for problems near the transition region. Specifically, there have been many studies of the benefit of running several methods independently in parallel and stopping when any method first finds a solution [Fishburn, 1984, Helmbold and McDowell, 1989, Pramanick and Kuhl, 1991, Kornfeld, 1981, Imai et al., 1979, Rao and Kumar, 1992, Mehrotra and Gehringer, 1985]. Since the benefit of this approach relies on variation in the individual methods employed, the high variance seen in the transition region suggests it should be particularly applicable for hard problems [Cheeseman et al., 1991, Rao and Kumar, 1992].

Another possibility is to allow such programs to exchange and reuse information found during the search, rather than executing independently. If the search methods are sufficiently diverse but nevertheless occasionally able to utilize information found in other parts of the search space, greater performance improvements are possible. Such "cooperative" methods have been studied in the context of simple constraint-satisfaction searches [Clearwater et al., 1991, Clearwater et al., 1992]. In these cases, cooperative methods were observed to give the most benefit precisely for those problems with many large partial solutions that could not be pruned. This was the case even though the information exchanged was often misleading, in the sense of not being part of any solution. While this work used fairly simple search methods, it suggests that cooperative search may be useful for much harder problems employing sophisticated search heuristics.
These observations lead us to conjecture that a mixture of diverse search methods that share information will be particularly effective for problems in the transition region. In this paper we test this conjecture experimentally for graph coloring, a particular class of NP-complete problems for which an appropriate order parameter, average connectivity, and the location of the transition region have been empirically determined [Cheeseman et al., 1991]. We also address some practical issues of sharing information, or exchanging "hints", among sophisticated heuristic search methods, which should allow these results to be extended readily to other constraint satisfaction problems.

Graph Coloring

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

The graph coloring problem consists of a graph, a specified number of colors, and the requirement to find a color for each node in the graph such that no pair of adjacent nodes (i.e., nodes linked by an edge in the graph) have the same color. Graph coloring has received considerable attention, and a number of search methods have been developed [Minton et al., 1990, Johnson et al., 1991, Selman et al., 1992]. This is a well-known NP-complete problem whose solution cost grows exponentially in the worst case as the size of the problem (i.e., number of nodes in the graph) increases. For this problem, the average degree of the graph γ (i.e., the average number of edges coming from a node in the graph) is an order parameter that distinguishes relatively easy from harder problems, on average. In this paper, we focus on the case of 3-coloring (i.e., when 3 different colors are available), for which the transition between under- and overconstrained problems, and hence the region of hardest problems, occurs near γ = 5 [Cheeseman et al., 1991].
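Both quantities in this definition are easy to compute; a minimal checker (function names are our own) for a proper coloring and for the order parameter γ = 2|E|/n:

```python
def is_proper_coloring(edges, coloring):
    """True iff no edge joins two nodes of the same color."""
    return all(coloring[u] != coloring[v] for u, v in edges)

def average_degree(n_nodes, edges):
    """Order parameter gamma: each edge contributes to two node degrees."""
    return 2 * len(edges) / n_nodes

# A 4-cycle is 2-colorable (hence 3-colorable) and has gamma = 2:
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
ok = is_proper_coloring(edges, {0: 0, 1: 1, 2: 0, 3: 1})   # True
bad = is_proper_coloring(edges, {0: 0, 1: 0, 2: 1, 3: 1})  # False
gamma = average_degree(4, edges)                           # 2.0
```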
While there are likely to be additional order parameters, such as the variance in the degrees, this one was sufficient to allow us to find a set of graphs that are relatively hard to color with 3 colors.

In our experiments we used two very different search methods. The first was a complete, depth-first backtracking search based on the Brelaz heuristic [Johnson et al., 1991], which assigns the most constrained nodes first (i.e., those with the most distinctly colored neighbors), breaking ties by choosing nodes with the most uncolored neighbors (with any remaining ties broken randomly). For each node, the smallest color consistent with the previous assignments is chosen first, with successive choices made when the search is forced to backtrack. This complete search method is guaranteed to eventually terminate and produce correct results. Moreover, it operates by attempting to extend partial colorings to complete solutions.

Our second method used heuristic repair [Minton et al., 1990] from randomly selected initial configurations. This method, which always operates with complete assignments (i.e., each node is assigned some color), attempts to produce a solution by selecting a node and changing its color to reduce as much as possible, or at least leave unchanged, the number of violated constraints in the problem. If some progress toward actually reducing the number of violations is not made within a prespecified number of steps, the search restarts from a new initial condition. This method is incomplete, i.e., if the problem has no solution it will never terminate. In practice, an upper bound on the total number of tries is made: if no solution is found, the method may incorrectly report there are no solutions. For hard problems, both methods have a high variance in the number of steps required to find a solution.
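The second method can be sketched as a min-conflicts loop; this is our own minimal rendering of the procedure described above (the restart logic and the progress test of the actual system are omitted):

```python
import random

def heuristic_repair(nodes, adj, colors=3, max_flips=10000, rng=random):
    """Min-conflicts style repair search: start from a random complete
    assignment and repeatedly recolor a conflicted node to minimize the
    number of violated edges at that node. Incomplete: may return None
    even when a solution exists."""
    assign = {n: rng.randrange(colors) for n in nodes}
    for _ in range(max_flips):
        conflicted = [n for n in nodes
                      if any(assign[n] == assign[m] for m in adj[n])]
        if not conflicted:
            return assign  # proper coloring found
        n = rng.choice(conflicted)
        # recolor n to the color violating the fewest constraints at n
        assign[n] = min(range(colors),
                        key=lambda c: sum(assign[m] == c for m in adj[n]))
    return None

# tiny usage example: a triangle is 3-colorable
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
solution = heuristic_repair([0, 1, 2], triangle, rng=random.Random(0))
```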
Moreover, their very different nature suggests a combination of the two methods will give a collection of agents far more diverse than if all agents use the same method. In particular, heuristic repair is often very effective at finding solutions once it starts "near" enough to one.

To generate hard problems we examined many random graphs with connectivity near the transition region. To correspond with the cooperative methods used previously [Clearwater et al., 1991] and simplify the use of hints, we considered only graphs that did in fact have solutions. Specifically, to construct our sample of graphs, we first divided the nodes into three classes (as nearly equal as possible) and allowed only edges that connected nodes in different classes to appear in our graph. This guaranteed that the graphs had a solution. Then trivial cases of underconstrained nodes were avoided by making sure each node had degree at least three. Finally, additional edges required to reach the desired connectivity were then added randomly. Many of the resulting graphs were trivial to search (e.g., for 100 node graphs, the median search cost for the Brelaz heuristic was about 200 steps at the peak). To identify those that were in fact difficult, the resulting graphs were searched repeatedly with both search methods, and only those with high search cost for all these trials were retained. This selection generally produced hard graphs with search costs one to three orders of magnitude higher than typical cases. We should also note that these graphs were hard even when compared to other methods of generating graphs which are known to give harder cases on average. Specifically, the prespecification of a solution state in our method tends to favor graphs with many solutions and hence favors easier graphs than uniform random selection. For 100 node graphs, this latter method gives a peak median search cost of about 350 steps.
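The generation procedure described above can be sketched as follows; the function and parameter names are ours, and details such as how the desired connectivity is reached may differ from the authors' generator:

```python
import random

def random_3colorable_graph(n, target_avg_degree=5.0, rng=random):
    """Generate a solvable 3-coloring instance: partition nodes into three
    near-equal classes (so coloring by class is a solution), allow only
    cross-class edges, force minimum degree 3, then add random edges
    until the target connectivity is reached."""
    classes = [i % 3 for i in range(n)]
    edges = set()
    degree = [0] * n

    def add_random_cross_edge(u=None):
        a = u if u is not None else rng.randrange(n)
        b = rng.randrange(n)
        e = (min(a, b), max(a, b))
        if a != b and classes[a] != classes[b] and e not in edges:
            edges.add(e)
            degree[a] += 1
            degree[b] += 1
            return True
        return False

    # avoid trivially underconstrained nodes: minimum degree 3
    for u in range(n):
        while degree[u] < 3:
            add_random_cross_edge(u)
    # add edges until the desired average degree (gamma = 2|E|/n) is met
    target_edges = int(target_avg_degree * n / 2)
    while len(edges) < target_edges:
        add_random_cross_edge()
    return list(range(n)), sorted(edges)

# usage example with a fixed seed
nodes, edges = random_3colorable_graph(99, 5.0, random.Random(0))
degrees = {u: 0 for u in nodes}
for u, v in edges:
    degrees[u] += 1
    degrees[v] += 1
```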
Even more difficult cases are emphasized by restricting consideration to graphs with no trivial reductions, with typical costs of about 1000 steps [Cheeseman et al., 1991].

A Cooperative Search

There are two basic steps in implementing a cooperative search based on individual algorithms. First, the algorithms themselves must be modified to enable them to produce and incorporate information from other agents, i.e., read and write hints. Second, decisions as to exactly what information to use as hints, when to read them, etc. must be made. We should note that the first step may, in itself, change the performance of the initial algorithm or its characteristics (e.g., changing a complete search method into an incomplete one). Since this may change the absolute performance of the individual algorithm, a proper evaluation of the benefit of cooperation should compare the behavior of multiple agents, exchanging hints, to that of a single one running the same, modified, algorithm, but unable to communicate with other agents. In that way, the effect of cooperation, due to obtaining hints from other agents, will be highlighted.

For example, a single agent running the Brelaz algorithm can first be modified so that it may read and write hints (it itself produced) from a private blackboard. This alone leads to slightly improved performance. The effect of cooperation can then be assessed by comparing a society of N agents each running the modified algorithm in isolation with N agents running the same algorithm except that they read and write to a common blackboard. In this way we can subtract out the effects of the changed algorithm and the memory capacity on the performance of the agents, leaving just the effect of cooperation.
While there are many ways to use hints, we made fairly simple choices similar to those used previously [Clearwater et al., 1991], in which hints consisted of partial solutions (thus for graph coloring, these hints are consistent colorings for a subset of the nodes in the graph). A central blackboard, of limited size, was used to record hints produced by the agents. When the blackboard was full, the oldest (i.e., added to the blackboard before any others) of the smallest (i.e., involving colors for the fewest nodes) hints were overwritten with new hints. Each agent independently writes a hint, based on its current state, at each step with a fixed probability q. When an agent was at an appropriate decision point, described below, it read a hint with probability p. Otherwise, or if there were no available hints, it continued with its original search method. Thus, setting p to zero corresponds to independent search, since hints would never be used. We next describe how the two different search methods were modified to produce and incorporate hints.

At any point in a backtracking search, the current partial state is a consistent coloring of some subset of the graph's nodes. When writing a hint to the blackboard, the Brelaz agents simply wrote their current state. Each time the agent was about to expand a node in its backtrack search, it would instead, with probability p, attempt to read a compatible hint from the blackboard, i.e., a hint on the blackboard whose assignments were 1) consistent with those of the agent (up to a permutation of the colors¹) and 2) specified at least one node not already assigned in the agent's current state. Frequently, there was no such compatible hint (especially when the agent was deep in the tree and hence had already made assignments to many of the nodes), in which case the agent continued with its own search.
When a compatible hint was found, its overlap with the agent's current state was used to determine a permutation of the hint's colors that made it consistent with the state. This permutation was applied to the remaining colorings of the hint and then used to extend the agent's current state as far as possible (ordering the new nodes as determined by the Brelaz heuristic), and retaining backtrack points so that the overall search remained complete. In effect, this hint simply replaced decisions the heuristic would have made regarding the initial colors to try for a number of nodes.

¹We thus used the fact that, for graph coloring, any permutation of the color assignments for a consistent set of assignments is also consistent.

In heuristic repair, the agent's state always has a color assignment for each node, but it will not be fully consistent (until a solution is found). In order to produce a consistent partial assignment for use as a hint, we started with the full state and randomly removed assignments until a hint with no conflicts was obtained. The heuristic repair agents have a natural point at which to read hints, namely when they are about to start over from a new initial state. At these times, we had each agent read a random hint from the blackboard with probability p, and otherwise randomly generate a new state. This hint, consisting of an assignment to some of the nodes, overwrote the agent's current state.

Why Cooperation Helps

A simple explanation of the benefit of cooperative search is given by observing that the hints provide consistent colorings for large parts of the graph. Agents reading hints in effect then attempt to combine them with their current state. Although not always successful, those cases in which hints do combine well allow the agent to proceed to a solution by searching in a reduced space of possibilities. Even if many of the hints are not successful, this results in a larger variation of performance and hence can still improve the performance of the group when measured by the time for the first agent to finish.
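The hint-compatibility test used by the Brelaz agents, including the color-permutation freedom noted in the footnote, can be sketched as follows (our own illustration; states and hints are taken to be dicts mapping node to color):

```python
from itertools import permutations

def compatible_permutation(state, hint, colors=(0, 1, 2)):
    """Find a recoloring of `hint` (a permutation of the colors) that is
    consistent with the agent's partial `state` and that assigns at least
    one node not already in `state`. Returns the remapped assignments for
    the new nodes, or None if no permutation qualifies."""
    for perm in permutations(colors):
        remap = dict(zip(colors, perm))
        # consistent on the overlap: every hint node already in `state`
        # must receive the same color under the remapping
        if all(state.get(n, remap[c]) == remap[c] for n, c in hint.items()):
            new = {n: remap[c] for n, c in hint.items() if n not in state}
            if new:  # requirement 2: at least one new node
                return new
    return None

# usage: swapping colors 0 and 1 makes this hint consistent with the state,
# and it contributes node 2
new_nodes = compatible_permutation({0: 0, 1: 1}, {0: 1, 1: 0, 2: 2})
```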
As a more formal, but oversimplified, argument, suppose we view the agents as making a series of choices. Let p_ij be the probability that agent i makes choice j correctly (i.e., in the context of its previous choices, this one continues a path to a solution, e.g., by selecting a useful hint). The probability that the series of choices for agent i is correct is then just p_i = ∏_j p_ij. With sufficient diversity in hints and agents' choices to prevent the p_ij from being too correlated, and viewing them as random variables, this multiplicative process results in a lognormal distribution [Redner, 1990] for agent performance, i.e., a random variable whose logarithm is normally distributed. This distribution has an extended tail compared to, say, a normal distribution or the distribution of the performance of the Brelaz heuristic on many graphs. Hence there is an increased likelihood that at least one agent will have much higher than average performance, leading to an improvement in group performance. In practice, this simple argument must be modified to account for the possibility of backtracking and the fact that the quality of the hints changes during the search [Clearwater et al., 1992], but it nevertheless gives some insight into the reason for the improvement and highlights the importance of maintaining diversity. Because of the high intrinsic variance in performance near the transition point, this in turn suggests why these cooperative methods are likely to be most applicable for the hard problems in the transition region.

Experimental Results

In this section we compare the behavior of independent searches with the cooperative method described above for some hard to color graphs. A simple performance criterion is the number of search steps required for the first agent to find a solution. However, this could be misleading when agents use different search methods whose individual steps have very different computational cost.
It also ignores the additional overhead involved in selecting and incorporating hints. Here we present data based on actual execution time of an unoptimized serial implementation of the searches in which individual steps are multiplexed (i.e., each agent in the group takes a single step, with this procedure repeated until one agent finds a solution). The results are qualitatively similar to those based on counting the number of steps [Hogg and Williams, 1993], with the main differences being due to 1) a hint-exchange overhead which made individual cooperative steps about 5% slower than the corresponding independent ones, and 2) heuristic repair steps being about 2.4 times faster than Brelaz ones. This latter fact means that in a parallel implementation of a mixed group, the heuristic repair searches would actually complete relatively more search steps than when run serially. A parallel implementation would also face possible communication bottlenecks at the central blackboard, though this is unlikely to be a major problem with the small blackboards considered here, due to the relatively low reading rate and the possibility of caching multiple copies of the blackboard which are only slowly updated with new hints. Thus we can expect the cooperative agents to gain nearly the same speedup from parallel execution as the independent agents, i.e., a factor of 10 for our group size. While this must ultimately be addressed by comparing careful parallel implementations, the improvement in the execution time reported below, as well as the reduced number of search steps [Hogg and Williams, 1993], suggest the cooperative methods are likely to be beneficial for parallel solution of large, hard problems.

In Figs. 1 and 2, we compare the performance of groups of 10 cooperative agents with the same number of agents running independently. Note that in both cases, cooperation generally gives better performance than simply taking the best of 10 independent agents.
Moreover, cooperation appears to be more beneficial as problem hardness (measured by the performance of a group of independent agents) increases. We obtained a few graphs of significantly greater hardness than those shown in the figures which confirm this trend. We also observed that typically only a few percent of the hints on the blackboard were subsets of any solution, so it is not obvious a priori that using these hints should be helpful at all.

Fig. 1. Performance of groups of 10 cooperating agents vs. that of groups of 10 independent agents, using the Brelaz search method. Each point corresponds to a different graph and is the median, over 10 trials, of the execution time in seconds, on a SparcStation 2, required for the first agent in the group to find a solution. Each second of execution corresponds to about 90 search steps for each of the 10 agents. For comparison, the diagonal line shows the values at which cooperative and independent performance are equal. Cooperation is beneficial for points below this line. In these experiments, the blackboard was limited to hold 100 hints, and we used p = 0.5, q = 0.1 and graphs with 100 nodes.

Fig. 2. Cooperation with heuristic repair. Each second of execution corresponds to about 210 search steps for each of the 10 agents. For comparison, the diagonal line shows the values at which cooperative and independent performance are equal. Some of the independent agent searches did not finish within 50000 steps, at which point they were terminated. In these cases, the median performance shown in the plot for the independent agents is actually a lower bound: the dashed lines indicate the possible range of independent agent performances. Search parameters are as in Fig. 1.

Finally, Fig. 3 shows a combined society of agents.
In this case, half the agents use the Brelaz method, half use heuristic repair, and hints are exchanged among all agents. Again, we see the benefit gained from cooperative search. For the graphs we generated, there was little correlation between solution cost for the two search methods, so that even when the agents were independent, this mixed society generally performed better than all agents using a single method.

Fig. 3. Performance of groups of 10 cooperating agents, 5 using Brelaz and 5 using heuristic repair, vs. the performance of the same groups searching independently. Each second of execution corresponds to about 120 search steps for each of the 10 agents. For comparison, the diagonal line shows the values at which cooperative and independent performance are equal. Search parameters are as in Fig. 1.

While these results are encouraging, we should note that further work is needed to determine the best ways to exchange hints in societies using multiple methods, as well as the relative amount of resources devoted to different methods. Of particular interest is allowing the mix of agent types to change dynamically based on progress made during the search. More fundamental to this avenue of research is understanding precisely what aspects of an ensemble of problems (e.g., in this case, determined by the precise method we used to generate the graphs) are important for the benefit of cooperation and the design of effective hints. Possibilities include the variance in individual search performance, the relative hardness of the graph, and the proximity to the phase transition point.

Conclusion

In summary, we have tested our conjecture that cooperative methods are particularly well suited to hard graph coloring problems and seen that, even using simple hints, they can improve performance.
It is further encouraging that the basic concepts used here, from the existence of regions of hard search problems characterized by order parameters to the use of partial solutions as hints, are applicable to a wide range of search problems.

There are a number of questions that remain to be addressed. An important one is how the observed cooperative improvement scales, both with problem size and, for a fixed size, with changes in the order parameters determining problem difficulty. There is also the question of how much different a parallel implementation is.

Another issue concerns whether these ideas can be applied to problems with no solutions to more quickly determine that fact. This is particularly relevant to the hard problems since they appear to occur at or near the transition from under- to overconstrained problems, which have many and no solutions respectively. In cases with no solution, either one must compare complete search methods (e.g., by having at least one agent use a complete search method even when reading hints) or else evaluate both search speed and search accuracy to make a valid comparison. More generally, when applied to optimization problems, one would need to consider quality of solutions obtained as well as time required.

This study also raises a number of more general issues regarding the use of hints. As we have seen, diversity can arise from the intrinsic variation near the transition point and from random differences in the use of hints. Nevertheless, as more agents, using the same basic method, are added, diversity does not increase as rapidly [Clearwater et al., 1992]. This suggests more active approaches to maintaining diversity, such as explicitly partitioning the search space or, more interestingly, combining agents using different search methods such as genetic algorithms [Goldberg, 1989] or simulated annealing [Johnson et al., 1991].
From a more practical point of view, a key decision for designing cooperative methods is how hints are generated and used, i.e., the "hint engineering". This involves a number of issues. The first is the nature of the information to exchange. This could consist of any useful information concerning regions of the search space to avoid or likely to contain solutions. The next major question is when during its search an agent should produce a hint. With backtracking, the agent always has a current partial solution which it could make available by placing it on a central blackboard. Generally, agents should tend to write hints that are likely to be useful in other parts of the search space. Possible methods to use include only writing the largest partial solutions an agent finds (i.e., at the point it is forced to backtrack), or only writing a hint if it is at least comparable in size to those already on the blackboard.

Complementary questions are when an agent should decide to read a hint from the blackboard, which one it should choose, and how it should make use of the information for its subsequent search. Again there are a number of reasonable choices which have different benefits, in avoiding search, and costs for their evaluation, as well as more global consequences for the diversity of the agent population. For instance, agents could select hints whenever a sufficiently good hint is available, or whenever the agent is about to make a random choice in its search method (i.e., use the hint to break ties), or whenever the agent is in some sense stuck, e.g., needing to backtrack or at a local optimum. For deciding which available hint to use, methods range from random selection [Clearwater et al., 1991] to picking one that is a good match, in some sense, to the agent's current state. Final issues are the hint memory requirements and what to discard from a full blackboard. Given this range of choices, is there any guidance for making good use of hints?
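The write and read policies just listed can be made concrete with a small sketch. This is our own illustration, not the authors' implementation: the class name `HintBoard`, the capacity limit, the "only write hints comparable in size to those present" write policy, and the random read policy are assumptions drawn from the options discussed above.

```python
import random

class HintBoard:
    """A bounded central blackboard of hints (partial solutions).

    Illustrative sketch only: the policies below are two of the choices
    discussed in the text, and all names here are our own.
    """

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.hints = []  # each hint is a frozenset of (variable, value) pairs

    def write(self, partial_solution):
        """Write policy: keep only hints at least comparable in size to
        those already on the board; discard the oldest hint when full."""
        hint = frozenset(partial_solution)
        if self.hints and len(hint) < min(len(h) for h in self.hints):
            return
        self.hints.append(hint)
        if len(self.hints) > self.capacity:
            self.hints.pop(0)

    def read(self):
        """Read policy: random selection, as in [Clearwater et al., 1991]."""
        return random.choice(self.hints) if self.hints else None
```

An agent would call `write` with its current partial coloring at each backtrack point and `read` whenever its own policy says to consult the board; other policies (best match to the agent's current state, size thresholds) slot into the same two methods.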
Theoretical results [Clearwater et al., 1992] emphasize the use of diversity for good cooperative performance. As a note of caution in developing more sophisticated hint strategies, the choices should promote high diversity among the agents [Huberman, 1990; Hogg, 1990], giving many opportunities to try hints in different contexts. This means that choices that appear reasonable when viewed from the perspective of a single agent could result in lowered performance for the group as a whole, e.g., if all agents are designed to view the same hints as the best to use. As with other heuristic techniques, the detailed implementation of appropriate choices to maintain diversity of the group of agents while also maintaining reasonable individual performance remains an empirical issue. While we can expect further improvements from more sophisticated use of hints, the fact that our relatively simple mechanisms are able to give increased performance suggests that such methods may be quite easily applied.

Acknowledgments

We have benefited from discussions with S. Clearwater.

References

Cheeseman, P., Kanefsky, B., and Taylor, W. M. (1991). Where the really hard problems are. In Mylopoulos, J. and Reiter, R., editors, Proceedings of IJCAI-91, pages 331-337, San Mateo, CA. Morgan Kaufmann.

Clearwater, S. H., Huberman, B. A., and Hogg, T. (1991). Cooperative solution of constraint satisfaction problems. Science, 254:1181-1183.

Clearwater, S. H., Huberman, B. A., and Hogg, T. (1992). Cooperative problem solving. In Huberman, B., editor, Computation: The Micro and the Macro View, pages 33-70. World Scientific, Singapore.

Fishburn, J. P. (1984). Analysis of Speedup in Distributed Algorithms. UMI Research Press, Ann Arbor, Michigan.

Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, NY.

Helmbold, D. P. and McDowell, C. E. (1989). Modeling speedup greater than n. In Ris, F. and Kogge, P. M., editors, Proc. of 1989 Intl.
Conf. on Parallel Processing, volume 3, pages 219-225, University Park, PA. Penn State Press.

Hogg, T. (1990). The dynamics of complex computational systems. In Zurek, W., editor, Complexity, Entropy and the Physics of Information, volume VIII of Santa Fe Institute Studies in the Sciences of Complexity, pages 207-222. Addison-Wesley, Reading, MA.

Hogg, T. and Williams, C. P. (1993). Solving the really hard problems with cooperative search. In Hirsh, H. et al., editors, AAAI Spring Symposium on AI and NP-Hard Problems, pages 78-84. AAAI.

Huberman, B. A. (1990). The performance of cooperative processes. Physica D, 42:38-47.

Imai, M., Yoshida, Y., and Fukumura, T. (1979). A parallel searching scheme for multiprocessor systems and its application to combinatorial problems. In Proc. of IJCAI-79, pages 416-418.

Johnson, D. S., Aragon, C. R., McGeoch, L. A., and Schevon, C. (1991). Optimization by simulated annealing: An experimental evaluation; part II, graph coloring and number partitioning. Operations Research, 39(3):378-406.

Kornfeld, W. A. (1981). The use of parallelism to implement a heuristic search. In Proc. of IJCAI-81, pages 575-580.

Mehrotra, R. and Gehringer, E. F. (1985). Superlinear speedup through randomized algorithms. In Degroot, D., editor, Proc. of 1985 Intl. Conf. on Parallel Processing, pages 291-300, Washington, DC. IEEE.

Minton, S., Johnston, M. D., Philips, A. B., and Laird, P. (1990). Solving large-scale constraint satisfaction and scheduling problems using a heuristic repair method. In Proceedings of AAAI-90, pages 17-24, Menlo Park, CA. AAAI Press.

Mitchell, D., Selman, B., and Levesque, H. (1992). Hard and easy distributions of SAT problems. In Proc. of 10th Natl. Conf. on Artificial Intelligence (AAAI-92), pages 459-465, Menlo Park. AAAI Press.

Pramanick, I. and Kuhl, J. G. (1991). Study of an inherently parallel heuristic technique. In Proc. of 1991 Intl. Conf. on Parallel Processing, volume 3, pages 95-99.

Rao, V. N. and Kumar, V. (1992).
On the efficiency of parallel backtracking. IEEE Trans. on Parallel and Distributed Computing.

Redner, S. (1990). Random multiplicative processes: An elementary tutorial. Am. J. Phys., 58(3):267-273.

Selman, B., Levesque, H., and Mitchell, D. (1992). A new method for solving hard satisfiability problems. In Proc. of 10th Natl. Conf. on Artificial Intelligence (AAAI-92), pages 440-446, Menlo Park, CA. AAAI Press.

Williams, C. P. and Hogg, T. (1992a). Exploiting the deep structure of constraint problems. Technical Report SSL92-24, Xerox PARC, Palo Alto, CA.

Williams, C. P. and Hogg, T. (1992b). Using deep structure to locate hard problems. In Proc. of 10th Natl. Conf. on Artificial Intelligence (AAAI-92), pages 472-477, Menlo Park, CA. AAAI Press.

236 Hogg
A Fast First-Cut Protocol for Agent Coordination

Andrew P. Kosoresow*
Department of Computer Science, Stanford University, Stanford, CA 94305, U.S.A.
kos@theory.stanford.edu

Abstract

This paper presents a fast probabilistic method for coordination based on Markov processes, provided the agents' goals and preferences are sufficiently compatible. By using Markov chains as the agents' inference mechanism, we are able to analyze convergence properties of agent interactions and to determine bounds on the expected times of convergence. Should the agents' goals or preferences not be compatible, they can detect this situation since coordination has not been achieved within a probabilistic time bound, and the agents can then resort to a higher-level protocol. The application, used for motivating the discussion, is the scheduling of tasks, though the methodology may be applied to other domains. Using this domain, we develop a model for coordinating the agents and demonstrate its use in two examples.

Introduction

In distributed artificial intelligence (DAI), coordination, cooperation, and negotiation are important in many domains. Agents need to form plans, allocate resources, and schedule actions, considering not only their own preferences, but also those of other agents with whom they have to interact. Making central decisions or deferring to another agent may not be possible or practical, because of design constraints or political considerations. In these situations, agents will need some mechanism for coming to an agreement without reference to an outside authority. There are other advantages to having a distributed negotiator. Communication patterns may become more balanced when a central node or set of nodes do not have to participate in every interaction. Information is localized.
Each person’s information is only contained by the *This research was supported jointly by the National Aeronautics and Space Administration under grant NCC2- 494-Sll, the National Science Foundation under grant IRI- 9116399, and the Rockwell International Science Center- Palo Alto Facility. local agent and can be more closely controlled. Fur- ther, as each person’s (or set of persons’) schedule is maintained by a separate agent, the system would de- grade gracefully if some of the agents were to go off line. Given that there exists at least one satisfactory agreement that satisfies all the agents’ constraints, we want to find such an agreement within a reasonable amount of time. If such an agreement does not ex- ist, we would like to find an approximate agreement satisfying as many constraints as possible. In this paper, we propose a probabilistic method us- ing Markov processes for the coordination of agents, using the domain of scheduling tasks. We make two as- sumptions about the capabilities of the agents: Agents have a planning system capable of generating sets of possible plans and they have a high-level negotiation system capable of exchanging messages about goals, tasks, and preferences. Each agent has a set of tasks that it has to accomplish. Some of these tasks re- quire the participation of other agents. The agents may have some preference for who does which tasks. While it is possible for the agents to enter into full- scale negotiations immediately, we propose to have the agents first go through a brief phase of trading offers and counteroffers. Should the agents’ goals and prefer- ences be sufficiently compatible, the agents will come to an agreement with high probability without the need for full-scale negotiation. Otherwise, the agents would realize this and resort to the higher-level negotiation protocol.’ We propose that the agents would simultaneously post their proposed schedules. 
Based on these postings, the agents would compute their next proposal and repost until they had synchronized on a schedule. In order to calculate each posting, each agent has a Markov process generating its next schedule based on the current postings of all the agents; this process would involve a lookup operation and possibly a random coin flip. By combining the Markov processes of the individual agents, we obtain a Markov chain for the entire system of agents. We can then analyze the properties of the system by analyzing the Markov chain and determine if the agents will converge to an agreeable schedule and the expected time of convergence.

If the agents have a model of the other agents' tasks and preferences, they can then conjecture a Markov chain corresponding to the entire system of agents. Given this information, the agent can estimate the number of iterations needed to achieve coordination for a given probability. If coordination has not been achieved by this time, the agent then knows that its model of the other agents was incorrect (or that it has been very unlucky) and can then use the higher-level protocol.

In the rest of the paper, we will develop a methodology for using Markov processes and give several illustrative examples of how task scheduling would work using Markov processes. In the next section, we give some basic definitions and lay the groundwork for the use of Markov processes as a method of negotiation. Next, we define convergence properties and expected times. In the following section, we give two examples in a task scheduling domain demonstrating the use of Markov processes. Finally, we discuss related work and summarize our current results.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Basic Architecture

In this paper, we consider a scenario where a group of agents needs to come to an agreement. We assume that the agents communicate by posting messages simultaneously at discrete times called clock ticks and that the posted messages are visible to all the agents concerned.² At a clock tick, each agent is required to post some message. If it fails to do so for some reason, a default will be specified whose definition depends on the application. First let us consider the process of coming to an agreement from the perspective of an individual agent and then examine the process from the perspective of the entire system of agents.

After each clock tick, an individual agent in the system can examine the current postings of all the agents for that clock tick. Based on this information, the agent needs to generate its proposed schedule by the next clock tick. One method for generating a schedule under this constraint is to use Markov procedures. Informally, a Markov procedure is a function that generates an action taking only the agent's current world state as the input.³ Since many rounds of negotiation may be necessary to come to a satisfactory agreement, Markov procedures are promising candidates for use in such a system; for each iteration, all that is potentially needed is a single lookup or a simple computation.

Each agent is controlled by a Markov process, a variant of a Markov procedure. A Markov process is a function that takes the system state and a randomly generated number and yields an action. In our model, the system state is the set of offers made by the agents during the last clock tick. The action is the agent's offer for the next clock tick. Thus, given the set of all offers, O, and a system state, S, we have an evaluation function, E_S, such that given an o ∈ O, E_S(o) is the probability that the agent should offer o in system state S, and Σ_{o ∈ O} E_S(o) = 1. Let A^i_t be the state of the ith agent at time t. So, given a random number generator, we can define a Markov process, M_i, to generate each agent's next offer based on E_S. Given that there are n agents and ρ is a random number such that 0 ≤ ρ ≤ 1, A^i_{t+1} = M_i(A^1_t, A^2_t, ..., A^n_t, ρ) for the ith agent. A starting state is specified by each of the agents either by choosing an action randomly or by choosing a preferred action based on some criteria.

Practically speaking, the entire Markov process for an agent will probably never be explicitly specified, as it could contain an exponential number of states. For example, the number of possible ways to allocate a set of tasks among a group of agents is exponential in the number of tasks. More likely, the system states will be divided into equivalence classes, for which only a single set of actions will be considered. Furthermore, the agents' constraints and preferences may preclude them from even considering many possible ways of allocating the tasks. For an effective system, the number of these classes should be sub-exponential. We shall see an example of how system states collapse into a smaller number of equivalence classes.

¹If the high-level protocol takes time T, the low-level protocol takes time t, and the low-level protocol succeeds some fraction p of the time for a set of k tasks, then preprocessing is worthwhile if kT ≥ p(kt) + (1 − p)(kT).

²These assumptions provide a model that is simpler to analyze and may be later extended to cover agent-to-agent messages. They also allow us to postpone considering agents that are Byzantine and otherwise malicious. For example, consider playing a game of Stone, Scissors, Paper. If one player delays slightly, it can win with knowledge of the opponent's move. Similarly, if the players are communicating through an intermediary, the intermediary can distort the game by passing false information or delaying it.
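The agent-level update, where a Markov process maps the joint system state and a random number to the agent's next offer, can be sketched in a few lines. This is our own illustration, not the paper's code: `make_agent_process` wraps an evaluation function (a distribution over offers, the E_S of the text) into a process, and `system_step` advances all agents simultaneously by one clock tick.

```python
import random

def make_agent_process(policy):
    """Wrap an evaluation function into a Markov process.

    `policy(state)` returns a dict {offer: probability} over the agent's
    possible offers.  The returned M maps (system state, rho) to the next
    offer by inverse-transform sampling.  Names here are our own sketch.
    """
    def M(state, rho):
        total = 0.0
        for offer, p in sorted(policy(state).items()):
            total += p
            if rho <= total:
                return offer
        return offer  # guard against floating-point rounding

    return M

def system_step(state, processes):
    """One clock tick of the system chain: all agents move simultaneously."""
    rhos = [random.random() for _ in processes]
    return tuple(M(state, r) for M, r in zip(processes, rhos))
```

For instance, two agents whose policies repeat a matched pair of offers and otherwise flip fair coins form a converging system chain: once they match, they stay matched, and every unmatched step has a fixed chance of producing a match.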
In order to learn something of the properties of this system, we can consider all the agents and their Markov processes as a single Markov chain.⁴ We call this the System Markov Chain or simply the system chain. Using the above notation, the system process, SM, can be represented as S_{t+1} = SM(S_t, P), where S_t = (A^1_t, A^2_t, ..., A^n_t) and P is the n-tuple of ρs, as defined above for Markov processes. As it is a Markov chain, certain properties including convergence and the expected times of convergence may be computed in many cases.

³While ignoring most of the history of an agent may seem to be restrictive, it has been shown to be effective, for example, in cases such as the Tit-for-Tat algorithm for the Prisoners' Dilemma [Axelrod, 1984].

⁴See [Freedman, 1983] for a more complete discussion of Markov chains and related topics.

Convergence and Expected Times

We now define whether the system chain converges or, in other words, whether the agents come to an agreement. First we need to define absorbing states for a system; a state S is defined to be an absorbing state if and only if, given the system is in state S at time t, it will be in state S at time t + 1 with probability 1. If a Markov chain has at least one absorbing state, it is called an absorbing Markov chain. If an absorbing Markov chain has a non-zero probability of reaching some absorbing state from every state, let us call it a converging Markov chain, since eventually the Markov chain will end up in one of these states. If the system chain is a converging Markov chain and all the absorbing states are possible valid agreements, the agents' Markov processes will lead to one of these agreements. Let us call this chain a converging system chain. In this case, a valid agreement is one where either all the agents' constraints are satisfied or, if that is not possible, a satisfactory approximate solution is reached.
Thus, if we can have the agents generate Markov processes which lead to a converging system chain, we know that they will eventually come to an agreement. In our case, the convergent states consist of those where all of the agents issue the same offer during one time-step and the subsequent state for all the agents is that same state. We can take advantage of partially convergent states when agents agree on the assignment of subsets of their tasks. If agreeing on these tasks does not preclude reaching an absorbing state, then the agents can reduce their search spaces by fixing the agreed-upon task assignments.

We can also try to figure out the expected convergence time. Given a Markov chain M and a distribution D of initial system states, the expected time of convergence, T(M, D), is defined to be:

T(M, D) = Σ_{i=0}^{∞} i · P_c(M, D, i)

where P_c(M, D, i) is the probability that the Markov chain M will reach an absorbing state at time i. This latter quantity can be computed by either solving the recurrence relations derived from the specification of the Markov chain or by framing the Markov chain as a network-flow problem and simulating it. Thus, proving convergence and calculating the time of convergence for the system chain will let us know whether an agreement is guaranteed and, if so, approximately when.

The Scheduling of Multiple Tasks

In this section, we show how to apply Markov processes to scheduling tasks. We will give two examples and an analysis of their expected times of convergence. Each of the agents in the system has some set of tasks that it has to schedule, possibly the empty set.

1. Exchange task lists with other agents.
2. Using task lists, form a Markov process to generate schedule offers either randomly or weighted according to the agent's preferences or constraints.
3. Form Markov chain using the agent's Markov process and the agent's estimates of the others' Markov processes.
4. Determine estimated time of convergence using the Markov chain.
5. Run Markov process until the time bound, established in Step 3, is reached or a suitable agreement is found. If an agreement is found, return the agreement. Otherwise, resort to higher-level protocol.

Figure 1: Outline of the first-cut coordination protocol.

Assume that there are n agents and they are concerned with a set of m available time slots. For the rest of the paper, we assume that the tasks take unit time and are independent. Each of the tasks requires some number of agents to complete it. Initially, the agents trade lists of tasks that need to be assigned and the available times for the tasks. By not communicating constraints and preferences in this protocol, agents may avoid having to calculate, communicate, and reason with these types of facts.

Each of the agents then generates a Markov process M, as defined above, based on the set of task schedules that it considers valid. Each of the schedules consists of sets of the agents' tasks that should be done during a particular time slot. Thus, the tuple ({Aa, Bb}, {Bc}, {Aa, Bd}) indicates that the agent suggests that the agents A and B do tasks a and b in time 1 respectively, B does task c in time 2, and A and B do tasks a and d in time 3 respectively. Since the agent does not know the other agents' constraints, some or all of the schedules generated by one agent may not be valid for other agents. If the agent has some information about the other agents' constraints (or it is willing to make some assumptions in the absence of this information), it can now form a system chain. By calculating the expected time of convergence for the system chain, it now has an estimate on how long the protocol will take, and it can use that information to decide when to go to the higher-level protocol. The agents will then trade offers until an agreement is reached or a deadline is passed. This process is outlined in Figure 1.
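Estimating the time of convergence from a conjectured system chain can be done in closed form when the chain is small enough to write down as a transition matrix. This helper is our sketch (the function and variable names are ours, not the paper's); it uses the standard fundamental-matrix identity, where the expected numbers of steps to absorption from the transient states satisfy t = (I − Q)⁻¹1, and then averages over the initial distribution D.

```python
import numpy as np

def expected_convergence_time(P, absorbing, d0):
    """Closed-form expected time to absorption for an absorbing chain.

    P: n x n row-stochastic transition matrix (nested lists);
    absorbing: indices of absorbing states; d0: initial distribution.
    Returns sum_s d0[s] * E[steps to absorption from s].
    (Illustrative helper, not the paper's code.)
    """
    absorbed = set(absorbing)
    transient = [i for i in range(len(P)) if i not in absorbed]
    if not transient:
        return 0.0
    # Q is P restricted to the transient states.
    Q = np.array([[P[i][j] for j in transient] for i in transient])
    t = np.linalg.solve(np.eye(len(transient)) - Q, np.ones(len(transient)))
    steps = dict(zip(transient, t))
    return float(sum(d0[i] * steps.get(i, 0.0) for i in range(len(P))))
```

For a toy chain that absorbs with probability 1/2 per step and otherwise stays put, this returns the familiar expected time of 2 steps; an agent could compare such an estimate against elapsed clock ticks to decide when to fall back to the higher-level protocol.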
For this system, there is an exponential number of possible schedules for tasks, and thus both the agents' Markov processes and the system chain could have exponential size if they were represented explicitly. However, often it is possible to store the agents' Markov processes implicitly. Further, we can reduce the number of system chain states by defining equivalence classes over the system states. For example, the set of states where everyone has agreed on a schedule can be dealt with as a single class. In the following example, we show how this fact can be used and give an overview of the procedure. In the second example, we sketch a case where the agents' constraints are partially incompatible.

System state names:                 a b c d e f g h i j k l m n o p
Agent A's action equivalence class: e b d e d e a c c a e d e d b e
System state equivalence class:     1 2 2 1 2 1 3 2 2 3 1 2 1 2 2 1

Agent states at time t:
  Agent A: 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2
  Agent B: 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2
  Agent C: 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2
  Agent D: 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2

Corresponding agent actions at time t + 1:
  Agent A: R 2 1 R 1 R 1 1 2 2 R 2 R 2 1 R
  Agent B: R 1 2 R 2 R 2 2 1 1 R 1 R 1 2 R
  Agent C: R 1 2 R 2 R 2 2 1 1 R 1 R 1 2 R
  Agent D: R 2 1 R 1 R 1 1 2 2 R 2 R 2 1 R

(Read system states down vertically.)

Figure 2: Markov processes for two teams of two agents coordinating usage of a resource. ('R' in the table above corresponds to an equiprobable random choice between '1' and '2'.)

Example 1: A Constrained Resource

Suppose there are 4 agents: A, B, C, and D. There are two tasks that need to be done: t1 and t2. Let A and D be the team that has to do t1 and let B and C be the team that has to do t2. Finally, assume that both tasks utilize some resource that can only be used by one set of agents at a time and there are two time slots available: 1 and 2.⁵ This example could represent two teams of agents having to use a particular tool in a job-shop or two sets of siblings having to share a television set.
Assuming that the agents find the resource equally desirable during both time slots, and that all the agents have equal input into the decision, we need to decide an order in which the agent teams get to use the resource. While workers in a job-shop or family might have other means for coming to a decision, we can abstract the situation and use Markov processes to come to a decision. The agents can specify their choice by a '1' or a '2'. '1' will indicate that the agent offers to do its task first, while a '2' indicates that it offers to go second. Thus, we design the Markov process shown in Figure 2 for each of the agents.

⁵A simpler example consists of two agents trying to coordinate on a coin, where both of them decide on heads or tails. This is an example of a consensus problem as in [Aspnes and Waarts, 1992]. The solution for the given example can be easily modified to give a solution to multi-agent consensus.

Figure 3: The resulting system chain for Example 1. (State labels correspond to system state equivalence classes from Figure 2 and arc labels are probabilities of transitioning between the two given states.)

In this Markov process, each of the agents has to react to one of sixteen possible system states. As shown in Figure 2 for Agent A, each of the states falls into one of five equivalence classes for the agents, each of which corresponds to an action: (a) the agents are coordinated; (b) the agents are almost coordinated and to achieve coordination, I need to flip state; (c) the agents are almost coordinated and to achieve coordination, the other person in my pair needs to flip state; (d) the agents are almost coordinated and to achieve coordination, one of the people in the other pair needs
Similarly, the system states fall into three equivalence classes: (1) the agents are unco- ordinated, (2) the agents are almost coordinated (and will get coordinated in the next time step), and (3) the agents are coordinated and need to flip randomly. These three equivalence classes are used to form the system chain shown in Figure 3. This Markov chain converges and the expected time to reach a convergent state is 2 .6 In this example, using Markov processes leads to a satisfactory solution in a relatively short expected time. The solution is fair since both outcomes are equally probable and no agent had more influence than any other agent. While there is no deterministic time bound for the agents coming to an agreement, there is no chance of deadlock which may result from a de- terministic protocol. Further, we do not depend on ordering the agents or on an external device for insur- ing honesty. Of course, there are situations where the agents can sabotage the efficiency or the convergence of such a method. Suppose that Agent D has decided that it prefers the first time slot. It can raise the prob- ability of choosing that slot to greater than $ or even go to the extreme of choosing it no matter what, thus increasing the expected time of convergence. Further, if Agent A decides that it must have the second time slot, we are left in a situation that has no satisfactory schedule. While such situations appear to lead to inef- ficiency or inability to come to a satisfactory solution, they may be used to take preferences into account and discover situations where there are irreconcilable con- flicts among the agents. An agent can be probabilisti- ‘The details of the expected time calculations is left to the full-length version of this paper. 240 Kosoresow tally certain that something unexpected is happening when the system has not converged to an agreement within a time bound. 
If the agents have a model of each other, then they can construct the system chain for their particular case. Given the chain, the agents can either analytically determine the time of convergence for simple chains or derive it empirically for more complex ones. Similarly, they can use the cutoff for negotiation as an indication that they are not making progress. For example, if they assume the model described above, they would converge to a solution over 90% of the time within seven iterations. Thus, they can use seven as a cutoff to indicate that there might be incompatible preferences such as the ones described above for Agents A and D.

Example 2: Non-compatible Goals

In the second example, we have two agents whose goals are not sufficiently compatible. Even though coordination failures may occur in this system, there is still a significant probability that the agents will coordinate on a plan and thus will not have to resort to a higher-level protocol. It further demonstrates how convergence on a partial solution can lead to fast convergence during the remainder of the protocol.

Suppose there are two agents, A and B. Their goal is to move a set of objects to one of three locations: 1, 2, or 3. Agent A would like to have objects a, b, and c at the same location, and Agent B would like to have objects b, c, and d at the same location. There are twelve possible tasks consisting of moving the four objects to the three locations, such as task a1, which consists of moving object a to location 1.

The agents each have two time steps available during which they can schedule their actions. The initial communications would contain the tasks each agent needs done and the time frame. For example, Agent A would send ((a1, b1, c1, a2, b2, c2, a3, b3, c3), (T1, T2)), indicating the nine tasks that it is interested in and the two time steps. Note that only the tasks were communicated without any reference to constraints or preferences.
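The joint location proposals of this example can be enumerated directly: Agent A offers a location x for {a, b, c} and y for d, while Agent B offers u for {b, c, d} and v for a, giving 9 × 9 = 81 joint outcomes. The classification below is our own reading of how the partial-agreement rule plays out; in particular, treating "a and d agreed but fixed at different locations" as the unreachable case is our assumption, not a statement from the paper.

```python
from itertools import product

LOCS = (1, 2, 3)

# x, y: Agent A's locations for {a,b,c} and for d.
# u, v: Agent B's locations for {b,c,d} and for a.
counts = {"full": 0, "deadlock": 0, "none": 0, "partial": 0}
for x, y, u, v in product(LOCS, repeat=4):
    agreed_a = (x == v)    # both place a at x
    agreed_bc = (x == u)   # both place b and c at x
    agreed_d = (y == u)    # both place d at u
    if agreed_a and agreed_bc and agreed_d:
        counts["full"] += 1       # identical complete assignments
    elif agreed_a and agreed_d and not agreed_bc:
        counts["deadlock"] += 1   # a fixed at x, d fixed at u != x:
                                  # all four objects can no longer share
                                  # one location
    elif not (agreed_a or agreed_bc or agreed_d):
        counts["none"] += 1       # no overlap: back to the initial state
    else:
        counts["partial"] += 1    # some objects fixed, plan still reachable
```

Under these assumptions the 81 outcomes split as 3 full agreements, 6 deadlocks, 24 with no progress, and 48 useful partial agreements.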
The computation and communication of this information is left to the higher-level protocol, should it be necessary. Since we assume that tasks are of unit time and do not depend on each other, each agent can just enter the new tasks into its list of things to do and expand its Markov process to include them as a possibility. If this operation is not straightforward, as it is in this case, the agents might pop out of this low-level protocol and proceed with a higher-level one at this point.

The agents' offers would consist of the possible ways to do their own tasks, combined with the possible ways they believe the other agent's tasks could be incorporated. For example, Agent A's possible offers at this point consist of 216 possibilities: each of the tasks needs to be assigned an agent, a time, and a location, with the constraint that a, b, and c are assigned the same location. These can be represented implicitly by each of the agents. At each time step, the agents would suggest one of these offers and then compare offers to see whether they have made any progress.

At this point, the agents can take advantage of partial coordination. If an assignment of tasks to agents, tasks to time slots, or tasks to locations has been agreed on, the agents then fix those parameters and accordingly reduce the number of offers that they would consider for the next round. Let us first consider the assignment of the objects to locations. There are 81 possible outcomes in this case. With probability 1/27, the agents will coordinate on a possible assignment of locations; with probability 2/27, the agents will end up in a state from which an acceptable plan cannot be reached; with probability 8/27, the agents will still be in the initial state; and with probability 16/27, the agents will be partially coordinated, where the probability is better than 1/2 that they will eventually agree on an acceptable plan. Thus, the termination of this part of the schedule has an expected time of at most nine steps.
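The case analysis over the 81 joint location proposals can be checked by enumeration. The sketch below encodes one reading of the example: Agent A proposes a location x for a, b, c and y for d; Agent B proposes v for b, c, d and u for a; and a partial agreement that pins a and d to different locations is a deadlock, since an acceptable plan needs all four objects together:

```python
from itertools import product
from fractions import Fraction
from collections import Counter

counts = Counter()
for x, y, u, v in product(range(3), repeat=4):
    # Which components of the two proposals coincide?
    agree_a, agree_bc, agree_d = (u == x), (v == x), (y == v)
    if agree_a and agree_bc and agree_d:
        counts["coordinated"] += 1       # identical full assignments
    elif agree_a and agree_d and not agree_bc:
        counts["deadlock"] += 1          # a fixed at x, d fixed at v != x
    elif not (agree_a or agree_bc or agree_d):
        counts["initial"] += 1           # nothing agreed; start over
    else:
        counts["good partial"] += 1      # agreement consistent with a plan

total = 3 ** 4                            # the 81 joint outcomes
probs = {k: Fraction(n, total) for k, n in counts.items()}
print(probs)
```

The enumeration yields 1/27, 2/27, 8/27 and 16/27 for the four classes, matching the figures quoted in the text.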
In the other portion of the schedule, where we assign tasks to agents, there is no chance of a partial solution leading to a deadlock state. The agents simply agree on how to put four tasks in four slots. Any partial solution in this part of the schedule will constrain subsequent actions, leading to an agreement.

Thus we see several important features of this methodology. The protocol is able to restrict the space of possible schedules by utilizing partial agreements. Even in an unfavorable situation, it will lead to an acceptable schedule in more than half the cases. Further, we were able to use this coordination protocol even though the two agents were unaware of the constraints of the other agent. By using a quick offer-counteroffer protocol, there is no need to trade this information and to do the computations associated with incorporating the information into the planner and/or the higher-level coordination protocol's data structures. Even if our protocol is unsuccessful, it may provide us with some useful information for the higher-level protocol.

Summary of Current and Future Work
In this paper, we draw on a variety of sources. Previous work in coordination and cooperation includes the game-theoretic approach taken by Genesereth, Rosenschein, and Ginsberg as in [Ginsberg, 1985], for example. One advantage of our method is that knowledge of explicit payoff matrices is not necessary. Our method is also applicable to the more current work on the Postman Problem [Zlotkin and Rosenschein, 1989]. Our method is more similar to contract nets [Davis and Smith, 1983], though our approach is more like haggling than bidding for a contract. In [Ephrati and Rosenschein, 1991], [Ephrati and Rosenschein, 1992], [Jennings and Mamdani, 1992], and [Wellman, 1992], higher-level protocols are proposed for agent coordination and negotiation. These may be suitable for use with our protocol in part or in their entirety.

Distributed Problem Solving 241
Other work is related to providing components for a system for coordination and negotiation. For example, [Gmytrasiewicz et al., 1991] provides a framework whereby agents can build up a data structure describing other agents with whom they interact, and [Kraus and Wilkenfeld, 1991] provides a framework for incorporating time as part of a negotiation. Our protocol might also be adapted for use in multi-agent systems such as those described in [Shoham and Tennenholtz, 1992].

Taking these considerations into account, we have decided to use a probabilistic method with Markov processes and chains to try to guarantee that the agents will come to some agreement; the randomness provides us with a tool for avoiding potential deadlock among the processes. Further, using Markov processes provides us with methods for determining whether a system of agents will converge to an agreement and, if so, in what expected time. We have demonstrated how Markov processes and Markov chains can be used for coordinating tasks. These techniques may also be useful for other domains involving coordination or negotiation.

We are currently looking at applying Markov processes to the scheduling of elevators, job shops, and other resource allocation problems. We are also working on implementing these procedures to test them empirically. There are also several general problems that need to be examined. In order to make this technique more convenient, we need to look at how to generate Markov chains from a formal specification of a problem and how to recognize equivalence classes. To make the method less restrictive, it would be useful to try to relax the requirement that messages be posted simultaneously. Further, it might be useful to employ direct messages between agents instead of having them post their messages. Finally, we would like to incorporate profit/cost metrics and more complex interdependent tasks with temporal or ordering constraints.
As it stands, we see Markov processes as a useful technique for exploring agent coordination and, subsequently, negotiation.

Acknowledgements
I would like to thank Nils Nilsson and Narinder Singh for their many valuable comments and discussions regarding this paper. In addition, I would like to thank Anil Gangolli, Matt Ginsberg, Michael Genesereth, Andrew Goldberg, Andrew Golding, Ramsey Haddad, Jane Hsu, Joseph Jacobs, George John, Illah Nourbakhsh, Oren Patashnik, Greg Plaxton, Devika Subramanian, Vishal Sikka, Eric Torng, Rich Washington, James Wilson, and the members of the Bots, MUGS, and Principia Research Groups at Stanford for many productive conversations.

References
Aspnes, James and Waarts, Orli 1992. Randomized consensus in expected O(n log^2 n) operations per processor. In 33rd Annual Symposium on Foundations of Computer Science. IEEE Computer Society Press. 137-146.
Axelrod, Robert 1984. The Evolution of Cooperation. Basic Books, Inc., New York.
Davis, Randall and Smith, Reid G. 1983. Negotiation as a metaphor for distributed problem solving. Artificial Intelligence 20:63-109.
Ephrati, Eithan and Rosenschein, Jeffrey S. 1991. The Clarke tax as a consensus mechanism among automated agents. In Proceedings of the Ninth National Conference on Artificial Intelligence. Morgan Kaufmann. 173-178.
Ephrati, Eithan and Rosenschein, Jeffrey S. 1992. Constrained intelligent action: Planning under the influence of a master agent. In Proceedings of the Tenth National Conference on Artificial Intelligence. Morgan Kaufmann. 263-268.
Freedman, David 1983. Markov Chains. Springer-Verlag.
Ginsberg, Matthew L. 1985. Decision procedures. In Huhns, Michael N., editor, Distributed Artificial Intelligence. Morgan Kaufmann. Chapter 1, 3-28.
Gmytrasiewicz, Piotr J.; Durfee, Edmund H.; and Wehe, David K. 1991. The utility of communication in coordinating intelligent agents. In Proceedings of the Ninth National Conference on Artificial Intelligence.
Morgan Kaufmann. 166-172.
Jennings, N. R. and Mamdani, E. H. 1992. Using joint responsibility to coordinate collaborative problem solving in dynamic environments. In Proceedings of the Tenth National Conference on Artificial Intelligence. Morgan Kaufmann. 269-275.
Kraus, Sarit and Wilkenfeld, Jonathan 1991. The function of time in cooperative negotiations. In Proceedings of the Ninth National Conference on Artificial Intelligence. Morgan Kaufmann. 179-184.
Shoham, Yoav and Tennenholtz, Moshe 1992. Emergent conventions in multi-agent systems: initial experimental results and observations. In Principles of Knowledge Representation and Reasoning: Proceedings of the Third International Conference. Morgan Kaufmann. 225-231.
Wellman, Michael P. 1992. A general-equilibrium approach to distributed transportation planning. In Proceedings of the Tenth National Conference on Artificial Intelligence. Morgan Kaufmann. 282-289.
Zlotkin, Gilad and Rosenschein, Jeffrey S. 1989. Negotiation and task sharing among autonomous agents in cooperative domains. In Eleventh International Joint Conference on Artificial Intelligence. Morgan Kaufmann. 912-917.
Agents Contracting Tasks in Non-Collaborative Environments*
Sarit Kraus
Department of Mathematics and Computer Science
Bar Ilan University
Ramat Gan, 52900 Israel
sarit@bimacs.cs.biu.ac.il

Abstract
Agents may sub-contract some of their tasks to other agent(s) even when they don't share a common goal. An agent tries to contract out tasks that it can't perform by itself, or tasks that may be performed more efficiently or better by other agents. A "selfish" agent may convince another "selfish" agent to help it with its task, even if the agents are not assumed to be benevolent, by promises of rewards. We propose techniques that provide efficient ways to reach subcontracting in varied situations: when the agents have full information about the environment and each other, and when the agents don't know the exact state of the world. We consider situations of repeated encounters, cases of asymmetric information, situations where the agents lack information about each other, and cases where an agent subcontracts a task to a group of agents. We also consider situations where there is competition either among contracted agents or among contracting agents. In all situations we would like the contracted agent to carry out the task efficiently without the need of close supervision by the contracting agent. The contracts that are reached are simple, Pareto-optimal and stable.

Introduction
Research in Distributed Problem Solvers assumes that it is in the agents' interest to help one another. This help can be in the form of the sharing of tasks, results, or information [Durfee, 1992]. In task sharing, an agent with a task it cannot achieve on its own will attempt to pass the task, in whole or in part, to other agents, usually on a contractual basis [Davis and Smith, 1983]. This approach assumes that agents not otherwise occupied will readily take on the task.
Similarly, in information or result sharing, information is shared among agents with no expectation of a return [Lesser, 1991; Conry et al., 1990]. This benevolence is based on the assumption common to many approaches to coordination: that the goal is for the system to solve the problem as best it can, and therefore the agents have a shared, often implicit, global goal that they are all unselfishly committed to achieving.

*This material is based upon work supported by the National Science Foundation under Grant No. IRI-9123460. I would like to thank Jonathan Wilkenfeld for his comments.

It was observed in [Grosz and Kraus, 1993] that agents may sub-contract some of their tasks to other agents also in environments where the agents do not have a common goal and there is no globally consistent knowledge.1 That is, a selfish agent that tries to carry out its own individual plan in order to fulfill its own tasks may sub-contract some of its tasks to another selfish agent(s). An agent tries to contract out tasks that it can't perform by itself, or tasks that may be performed more efficiently or better by other agents. The main question is how an agent may convince another agent to do something for it when the agents don't share a global task and the agents are not assumed to be benevolent. Furthermore, we would like the contracted agent to carry out the task efficiently without the need of close supervision by the contracting agent. This will enable the contracting agent to carry out other tasks simultaneously.

There are two main ways to convince another selfish agent to perform a task that is not among its own tasks: threats to interfere with the agent carrying out its own tasks, or promises of rewards. In this paper we concentrate on subcontracting by rewards. Rewards may take two forms. In the first approach, one agent may promise to help the other with its tasks in the future in return for current help.
As was long ago observed in economics, barter is not an efficient basis for cooperation. In a multi-agent environment, an agent that wants to subcontract a task to another agent may not be able to help it in the future, or an agent that is able to help with another agent's task may not need help in carrying out its own tasks. In the second approach, a monetary system is developed that is used for rewards. The rewards can be used later for other purposes. We will show that a monetary system for the multi-agent environment that allows for side payments and rewards between the agents yields an efficient contracting mechanism. The monetary profits may be given to the owners of the automated agents. The agents will be built to maximize expected utilities that increase with the monetary values, as will be explained below.

1 Systems of agents acting in environments where there is no global common goal (e.g., [Sycara, 1990; Zlotkin and Rosenschein, 1991; Kraus et al., 1991; Ephrati and Rosenschein, 1991]) are called Multi-Agent Systems [Bond and Gasser, 1988; Gasser, 1991].

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

The issue of contracts has been investigated in economics and game theory in the last two decades (e.g., [Arrow, 1985; Ross, 1973; Rasmusen, 1989; Grossman and Hart, 1983; Hirshleifer and Riley, 1992]). They have considered situations in which a person or a company contracts a task to another person or company. In this paper we adjust the models that were developed in economics and game theory to fit distributed artificial intelligence situations.

We will consider varied situations. In the Section Contracts Under Certainty, one agent subcontracts a task to another when the agents have full information about the environment and each other. In the Section Contracts Under Uncertainty, we consider contracting when the agents don't know the exact state of the world.
The situation in which an agent may subcontract its tasks several times to the same agent is considered in the Section Repeated Encounters, and situations of asymmetric information, or where the agents lack information about each other, are dealt with in the Section Asymmetric and Incomplete Information. We conclude with the case of an agent subcontracting a task to a group of agents. In all these cases, we consider situations where the contracting agent doesn't supervise the contracted agent's performance and situations where there is competition among possible contracted agents or possible contracting agents.

Preliminaries
We will refer to the agent that subcontracts one of its tasks to another agent as the contracting agent and to the agent that agrees to carry out the task as the contracted agent. The effort level is the time and work intensity which the contracted agent puts into fulfilling the task. We denote the set of all possible effort levels by E. In all cases, the contracted agent chooses how much effort to expend, but its decision may be influenced by the contract offered by the contracting agent. We assume that there is a monetary value q(e) of performing a task, which increases with the effort involved. That is, the more time and effort put in by the contracted agent, the better the outcome. The contracting agent will pay the contracted agent a wage w (which can be a function of q). There are several properties we require from our mechanism for subcontracting:
Simplicity: The contract should be simple and there should be an algorithm to compute it.
Pareto-Optimality: There is no other contract arrangement that is preferred by both sides over the one they have reached.
Stability: We would like the results to be in equilibrium, and the contracts to be reached and executed without delay.
Concerning the simplicity and stability issues, there are two approaches to finding equilibria in the type of situations under consideration here [Rasmusen, 1989]. One is the straight game-theory approach: a search for Nash strategies or for perfect equilibrium strategies. The other is the economist's standard approach: set up a maximization problem and solve it using calculus. The drawback of the game-theory approach is that it is not mechanical. Therefore, in our previous work on negotiation under time constraints, we identified perfect-equilibrium strategies and proposed to develop a library of meta strategies to be used when appropriate [Kraus and Wilkenfeld, 1991a; Kraus and Wilkenfeld, 1991b]. The maximization approach is much easier to implement. The problem with the maximization approach in our context is that the players must solve their optimization problems jointly: the contracted agent's strategy affects the contracting agent's maximization problem and vice versa. In this paper we will use the maximization approach, with some care, by embedding the contracted agent's maximization problem into the contracting agent's problem as a constraint. This maximization problem can be solved automatically by the agent.

The agents' utility functions play an important role in finding an efficient contract. As explained above, we propose to include a monetary system in the multi-agent environment. This system will provide a way of providing rewards. However, it is not always the case that the effort of an agent can be assigned the same monetary values. Each designer of an automated agent needs to provide its agent with a decision mechanism based on some given set of preferences. Numeric representations of these preferences offer distinct advantages in compactness and analytic manipulation [Wellman and Doyle, 1992].
Therefore, we propose that each designer of autonomous agents develop a numerical utility function that it would like its agent to maximize. In our case the utility function will depend on monetary gain and effort. This is especially important in situations where there is uncertainty and the agents need to make decisions under risk. Decision theory offers a formalism for capturing risk attitudes. If an agent's utility function is concave, it is risk averse. If the function is convex, it is risk prone, and a linear utility function yields risk-neutral behavior [Hirshleifer and Riley, 1992].

We denote the contracted agent's utility function by U, which is a decreasing function of effort and an increasing function of the wage w. We assume that if the contracted agent won't accept the contract from the contracting agent, its utility (i.e., its reservation price), which is known to both agents, is ū. This outcome can result either from not doing anything or from performing some other tasks at the same time. We denote the contracting agent's utility function by V; it is an increasing function of the value of performing the task (q) and a decreasing function of the wage w paid to the contracted agent.

244 Kraus

In our system we assume that the contracting agent rewards the contracted agent after the task is carried out. In such situations there should be a technique for enforcing this reward. In the case of multiple encounters, reputational considerations may yield appropriate behavior. In a single encounter, some external intervention may be required to enforce commitments.

Contracts Under Certainty
In this case we assume that all the relevant information about the environment and the situation is known to both agents.
In the simplest case the contracting agent can observe and supervise the contracted agent's effort and actions and force it to make the effort level preferred by the contracting agent, by paying it only in case it makes the required effort. The amount of effort required from the contracted agent will be the one that maximizes the contracting agent's outcome, taking into account the task fulfillment and the payments it needs to make to the contracted agent.

However, in most situations it is either not possible or too costly for the contracting agent to supervise the contracted agent's actions and observe its level of effort. In some cases, it may be trying to carry out another task at the same time, or it can't reach the site of the action (and that is indeed the reason for subcontracting). If the outcome is a function of the contracted agent's effort, and if this function is known to both agents, the contracting agent can offer the contracted agent a forcing contract [Harris and Raviv, 1978; Rasmusen, 1989]. In this contract, the contracting agent will pay the contracted agent only if it provides the outcome required by the contracting agent. If the contracted agent accepts the contract, it will perform the task with the effort that the contracting agent finds to be most profitable to itself, even without supervision. Note that the outcome won't necessarily involve the highest effort on the part of the contracted agent, but rather the effort which provides the contracting agent with the highest outcome. That is, the contracting agent should pick an effort level e* that generates the efficient output level q*. Since we assume that there are several possible agents available for contracting, in equilibrium the contract must provide the contracted agent with the utility ū.2 The contracting agent needs to choose a wage function such that w* = w(q*) satisfies U(e*, w*) = ū and U(e, w(q)) < ū for e ≠ e*. We demonstrate this case in the following example.

2 We assume that when the contracted agent is indifferent, it chooses the effort level preferred by the contracting agent.
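A forcing contract under certainty can be computed mechanically when effort costs are additively separable. The numbers below are hypothetical (they are not the Mars example): assuming U(e, w) = w − c(e), the cheapest wage implementing effort e is ū + c(e), and the contracting agent picks the effort that maximizes its residual:

```python
# Hypothetical inputs: effort levels, outcome values q(e), effort costs c(e),
# a separable utility U(e, w) = w - c(e), and reservation utility u_bar.
q = {1: 10, 2: 16, 3: 17}
c = {1: 2, 2: 4, 3: 6}
u_bar = 3

def forcing_contract(q, c, u_bar):
    """Pick the effort level the contracting agent prefers, paying the
    smallest wage that keeps the contracted agent at its reservation utility.
    The contract pays wage(e*) only if the outcome q(e*) is delivered."""
    wage = {e: u_bar + c[e] for e in q}            # U(e, wage(e)) = u_bar
    e_star = max(q, key=lambda e: q[e] - wage[e])  # contracting agent's profit
    return e_star, wage[e_star], q[e_star]

e_star, w_star, q_star = forcing_contract(q, c, u_bar)
print(e_star, w_star, q_star)  # -> 2 7 16
```

Note that e* = 2 rather than the maximal effort 3: the extra outcome from level 3 (17 − 16 = 1) is worth less than the extra wage it requires (9 − 7 = 2), exactly the point made in the text.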
Example I: Contracting Under Certainty The US and Germany have sent several mobile robots independently to Mars to collect minerals and ground is indifferent preferred by samples and to conduct experiments. One of the US robots has to dig some minerals on Mars far from the other US robots. There are several German robots in that area and the US robot would like to subcon- tract some of its digging. The US robot approaches one of the German robots that can dig in three lev- els of e$ort (e): Low, Medium and High denoted by 1,2 and 3 respectively. The US agent can’t supervise the German robot’s eflort since it wants to carry out another task simultaneously. The value of digging is q(e) = da. The US robot’s utility function, if a contract is reached, is V(q, w) = q - w and the Ger- man robot’s utility function in case it accepts the con- tract is U(e, 20) = 17 - z - 2e, where w is the payment to the German robot. If the German robot rejects the contract, it will busy itself with maintenance tasks and its utility will be 10. It is easy to calculate that the best eJjrort level from the US robot’s point of view is 2, in which there will be an outcome of m. The contract that the US robot oglers to the German robot is 35 if the outcome is &66 and 0 otherwise. This contract will be accepted by the German robot and its effort level will be Medium. There are two additional issues of concern. The first one is how the contracting agent chooses which agent to approach. In the situation of complete infor- mation (we consider the incomplete information case in Section Asymmetric and Incomplete Information) it should compute the expected utility for itself from each contract with each agent and chooses the one with the maximal expected utility. Our model is also appropriate in the case in which there are several contracting agents, but only one pos- sible contracted agent. 
In such a case, there should be information about the utilities of the contracting agents in the event that they don't sign a contract. The contracted agent should compute the level of effort that maximizes its expected utility (similar to the computation of the contracting agent in the reverse case) and make an offer to the contracting agent that will maximize its outcome.

Contracts Under Uncertainty
In most subcontracting situations, there is uncertainty concerning the outcome of an action. If the contracted agent chooses some effort level, there are several possibilities for an outcome. For example, suppose an agent on Mars subcontracts digging for samples of a given mineral, and suppose that there is uncertainty about the depth of the given mineral at the site. If the contracted agent chooses a high effort level but the mineral lies deep underground, the outcome may be similar to the case where the contracted agent chooses a low level of effort but the mineral is located near the surface. But if it chooses a high effort level when the mineral is located near the surface, the outcome may be much better. In such situations the outcome of performing a task doesn't reveal the exact effort level of the contracted agent, and choosing a stable and maximal contract is much more difficult.

We will assume that the world may be in one of several states. Neither the contracting agent nor the contracted agent knows the exact state of the world when agreeing on the contract, or when the contracted agent chooses the level of effort to take after agreeing on the contract. The contracted agent may observe the state of the world after choosing the effort level (during or after completing the task), but the contracting agent can't observe it. For simplicity, we also assume that there is a set of possible outcomes to the contracted agent carrying out the task, Q = {q1, ..., qn}, such that q1 < q2 < ...
< qn, that depend on the state of the world and the effort level of the contracted agent. Furthermore, we assume that given a level of effort, there is a probability distribution over the outcomes that is known to both agents.3 Formally, we assume that there is a probability function p : E × Q → R, such that for any e ∈ E, Σ_{i=1..n} p(e, qi) = 1, and for all qi ∈ Q, p(e, qi) > 0.4

3 A practical question is how the agents find the probability distribution. It may be that they have preliminary information about the world, e.g., the probability that a given mineral will be found in that area of Mars. In the worst case, they may assume a uniform distribution. The model can be easily extended to the case in which each agent has different beliefs about the state of the world [Page, 1987].
4 The formal model in which the outcome is a function of the state of the world and the contracted agent's effort level, and in which the probabilistic function gives the probability of the state of the world, independent of the contracted agent's effort level, is a special case of the model described here [Page, 1987; Ross, 1973; Harris and Raviv, 1978].

The contracting agent's problem is to find a contract that will maximize its expected utility, knowing that the contracted agent may reject the contract or, if it accepts the contract, that the effort level is chosen later [Rasmusen, 1989]. The contracting agent's payment to the contracted agent can be based only on the outcome. Let us assume that in the contract that will be offered by the contracting agent, for any qi, i = 1, ..., n, the contracting agent will pay the contracted agent wi. The maximization problem can be constructed as follows (see also [Rasmusen, 1989]).

Maximize over w1, ..., wn:  Σ_{i=1..n} p(ê, qi) V(qi, wi)   (1)

with the constraints:

ê = argmax_{e ∈ E} Σ_{i=1..n} p(e, qi) U(e, wi)   (2)

Σ_{i=1..n} p(ê, qi) U(ê, wi) ≥ ū   (3)

Equation (1) states that the contracting agent tries to choose the payment to the contracted agent so as to
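On a small discrete instance, the program above can be solved by brute force over a wage grid. The instance below is hypothetical (two effort levels, two outcomes, risk-neutral linear utilities); the inner argmax plays the role of the incentive constraint and the participation check that of the reservation-utility constraint:

```python
from itertools import product

# Hypothetical discrete instance: two efforts, two outcomes, risk-neutral agents.
E = ["low", "high"]
Q = [8, 10]
p = {"low": [0.75, 0.25], "high": [0.25, 0.75]}   # p(e, q_i)
cost = {"low": 1, "high": 2}
u_bar = 1.0

def EU(e, w):   # contracted agent's expected utility, U(e, w) = w - cost(e)
    return sum(p[e][i] * (w[i] - cost[e]) for i in range(len(Q)))

def EV(e, w):   # contracting agent's expected utility, V(q, w) = q - w
    return sum(p[e][i] * (Q[i] - w[i]) for i in range(len(Q)))

best = None
grid = [x / 4 for x in range(0, 41)]              # candidate wages 0.0 .. 10.0
for w in product(grid, repeat=len(Q)):
    e_hat = max(E, key=lambda e: EU(e, w))        # effort the agent would pick
    if EU(e_hat, w) < u_bar:                      # participation check
        continue
    if best is None or EV(e_hat, w) > best[0]:
        best = (EV(e_hat, w), w, e_hat)
print(best)  # best expected profit is 6.5 on this instance
```

The grid search is exponential in the number of outcomes, which is why the text turns to linear programming for the risk-neutral case and to the Grossman-Hart procedure otherwise.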
maximize its expected utility subject to the constraint that the contracted agent will prefer the contract over rejecting it (3), and that the contracted agent prefers the effort level that the contracting agent prefers, given the contract it is offered (2).

The main question is whether there is an algorithm to solve this maximization problem and whether such a contract exists. This depends primarily on the utility functions of the agents. If the contracting agent and the contracted agent are risk neutral, then solving the maximization problem can be done using any linear programming technique (e.g., simplex; see for example [Pfaffenberger and Walker, 1976]). Furthermore, in most situations, the solution will be very simple: the contracting agent will receive a fixed amount of the outcome and the rest will go to the contracted agent. That is, wi = qi − C for 1 ≤ i ≤ n, where the constant C is determined by constraint (3) [Shavell, 1979].

Example 2: Risk Neutral Agents Under Uncertainty
Suppose the utility function of the German robot from Example 1 is U(e, w) = w − e, and that it can choose between two effort levels, Low (e=1) and High (e=2), and its reservation price is ū = 1. There are two possible monetary outcomes to the digging: q1 = 8 and q2 = 10, and the US robot's utility function is as in the previous example, i.e., V(q, w) = q − w. If the German robot chooses the Low effort level, then the outcome will be q1 with probability 3/4 and q2 with probability 1/4; if it takes the High effort level, the probability of q1 is 1/4 and of q2 it is 3/4. In such situations, the US robot should reserve to itself a profit of 6.5. That is, w1 = 1.5 and w2 = 3.5. The German robot will choose the High effort level.

If the agents are not neutral toward risk, the problem is much more difficult. However, if the utility functions of the agents are carefully chosen, an algorithm does exist.
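The closed form wi = qi − C for risk-neutral agents can be checked directly; the sketch below assumes, as in Example 2, that Low yields q1 with probability 3/4 and High yields q2 with probability 3/4, with U(e, w) = w − e and ū = 1:

```python
from fractions import Fraction

Q = [8, 10]
p = {"low": [Fraction(3, 4), Fraction(1, 4)],
     "high": [Fraction(1, 4), Fraction(3, 4)]}
cost = {"low": 1, "high": 2}
u_bar = 1

def exp_q(e):
    return sum(pi * qi for pi, qi in zip(p[e], Q))

# "Residual claimant" contract: w_i = q_i - C, so the contracted agent
# internalizes the full outcome and the contracting agent keeps C.
C = max(exp_q(e) - cost[e] for e in p) - u_bar
w = [qi - C for qi in Q]
for e in p:
    eu = sum(pi * (wi - cost[e]) for pi, wi in zip(p[e], w))
    print(e, eu)   # both efforts leave the agent exactly at u_bar = 1
print(C, w)        # C = 13/2, w = [3/2, 7/2]
```

With these numbers the agent is exactly indifferent between Low and High, and by the indifference convention it takes the effort the contracting agent prefers, which is why the example concludes that High is chosen.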
Suppose the contracted agent is risk averse and the contracting agent is risk neutral (the methods are also applicable when both are risk averse). Grossman and Hart [Grossman and Hart, 1983] presented a three-step procedure to find appropriate contracts. The first step of the procedure is to find, for each possible effort level, the set of wage contracts that induce the contracted agent to choose that effort level. The second step is to find the contract which supports that effort level at the lowest cost to the contracting agent. The third step is to choose the effort level that maximizes profits, given the necessity to support that effort with a costly wage contract. For space reasons, we won't present the formal details of the algorithm here, nor in the rest of the paper.

Repeated Encounters
Suppose the contracting agent wants to subcontract its tasks several (finitely many) times. Repetition of the encounters between the contracting and the contracted agents enables the agents to reach efficient contracts if the number of encounters is large enough. The contracting agent could form an accurate estimate of the contracted agent's effort, based on the average outcome over time. That is, if the contracting agent wants the contracted agent to take a certain effort level ê in all the encounters, it can compute the expected outcome over time if the contracted agent actually performs the task with that effort level. The contracting agent can keep track of the cumulative sum of the actual outcomes and compare it with the expected outcome. If there is some time T at which the outcome is below a given function of the expected outcome, the contracting agent should impose a big "punishment" on the contracted agent.
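This cumulative-outcome review can be sketched as a simulation; the Gaussian outcome model, the per-round means, and the √t-shaped slack below are all assumptions for illustration, in the spirit of the Radner-style review strategies cited next:

```python
import random

def monitor(rng, true_mean, claimed_mean, rounds, slack):
    """Compare the cumulative outcome against the expected cumulative outcome
    minus a sublinear slack; flag ('punish') if the sum ever falls below it."""
    total = 0.0
    for t in range(1, rounds + 1):
        total += rng.gauss(true_mean, 1.0)           # noisy per-round outcome
        if total < claimed_mean * t - slack * t ** 0.5:
            return True                              # evidence of shirking
    return False

rng = random.Random(1)
honest = sum(monitor(rng, 5.0, 5.0, 200, 4.0) for _ in range(500))
shirk = sum(monitor(rng, 4.0, 5.0, 200, 4.0) for _ in range(500))
print(honest / 500, shirk / 500)
```

An honest agent's random walk rarely dips below the √t boundary, so false punishments are rare, while a shirking agent's deficit grows linearly and crosses the boundary almost surely, matching the "low false-alarm probability, certain eventual detection" property described below.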
If the function of the expected outcome is chosen carefully [Radner, 1981], the probability of imposing a "punishment" when the contracted agent is in fact carrying out the desired effort level can be made very low, while the probability of eventually imposing the "punishment" if the agent doesn't take ê is one.

Asymmetric and Incomplete Information
In some situations the contracting agent does not know the utility function of the contracted agent. The contracted agent may be one of several types that reflect the contracted agent's ability to carry out the task, its efficiency, or the cost of its effort. However, we assume that given the contracted agent's type, its utility function is known to its opponent. For example, suppose Germany builds robots of two types. The specifications of the robots are known to the German robots and to the US robots; however, the US robots don't know the specific type of the German robots that they meet. As in previous sections, the output is a function of the contracted agent's effort level, and the probability function p indicates the probability of each outcome, given the effort level and the agent's type.

The question remains which contract the contracting agent should offer when it doesn't know the contracted agent's type. A useful technique in such situations is for the contracting agent to search for an optimal mechanism [Demougin, 1989], as follows: the contracting agent offers the contracted agent a menu of contracts that are functions of its type and the outcome. The agent chooses a contract (if at all) and announces it to the contracting agent. Given this contract, the contracted agent chooses an effort level which maximizes its own expected utility. In each of the menu's contracts, the contracted agent's expected utility should be at least its expected utility if it doesn't sign the contract.
We also concentrate only on contracts in which it will always be in the interest of the contracted agent to honestly report its type. It was proven that this requirement is without loss of generality [Myerson, 1982]. It was also shown that in some situations, an efficient contract can be reached without communication [Demougin, 1989], but we omit the discussion here for space reasons. If there are several agents whose types are unknown to the contracting agent and it must choose among them, the following mechanism is appropriate: the contracting agent announces a set of contracts based on the agent's type and asks the potential contracted agents to report their types. On the basis of these reports the contracting agent chooses one agent [McAfee and McMillan, 1987]. In other situations, the contracting agent knows the utility function of the contracted agent, but the contracted agent is able to find more information about the environment than the contracting agent. For example, when the German robot reaches the area where it needs to dig, it determines the structure of this area. This information is known only to the German robot and not to the US robot. The mechanism that should be used in this context is the following: the contracting agent offers a payment arrangement which is based on the outcome and on the message the contracted agent will send to the contracting agent about the additional information it possesses. If the contracted agent accepts the offer, it will observe the information (by going to the area, using any of its sensors, etc.). Then it will send a message to the contracting agent and will choose its effort level. Eventually, after the task is finished and the outcome is observed, the contracting agent will pay the rewards. Also in this case [Christensen, 1981], the agents can concentrate on the class of contracts that induce the contracted agent to send a truthful message.
This is because for any untruthful contract, a truthful one can be found in which the agents' expected utilities are the same. A solvable maximization problem can be constructed here as well, but we omit it for space reasons.

Subcontracting to a Group

Suppose that the task the contracting agent wants to contract for can be performed by a group of agents. Each of the contracted agents is independent in the sense that it tries to maximize its own utility. The contracting agent offers a contract to each of the possible contracted agents. If one of them rejects the offer, then the contracting agent cannot subcontract the task. Otherwise, the contracted agents simultaneously choose effort levels. In other situations, the contracting agent cannot observe the individual outcomes (or such outcomes do not exist) but rather observes only the overall outcome from the effort of all agents [Holmstrom, 1982]. Here, even in the case of certainty, i.e., when the state of the world is known, there is a problem in making the contracted agents take the preferred level of action, since there is no way for the contracting agent to find out each individual agent's effort level given the overall output. For example, suppose two robots agreed to dig minerals, but they both put the minerals in the same truck, so it is not possible to figure out who dug what. If the contracting agent wants the contracted agents to take the vector of effort levels e*, it can search for a contract such that, if the outcome is q >= q(e*), then w_i(q) = b_i, and otherwise 0, where U(e_i*, b_i) >= ū_i (the contracted agent's reservation utility). That is, if all agents choose the appropriate effort level, each of them gets b_i, and if any of them does not, all get nothing.

Conclusions

In this paper we presented techniques that can be used in different cases where subcontracting of a task by an agent to another agent or a set of agents in non-collaborative environments is beneficial.
In all these situations, simple Pareto-optimal contracts can be found by using techniques of maximization with constraints. In the case where the agents have complete information about each other, there is no need for negotiations and a contract is reached without delay, even when the contracting agent does not supervise the contracted agent's actions. If there is asymmetric information, or the agents are not sure about their opponents' utility functions, a stage of message exchange is needed to reach a contract.

References

Arrow, K. J. 1985. The economics of agency. In Pratt, J. and Zeckhauser, R., editors, Principals and Agents: The Structure of Business. Harvard Business School Press. 37-51.
Bond, A. H. and Gasser, L. 1988. An analysis of problems and research in DAI. In Bond, A. H. and Gasser, L., editors, Readings in DAI. Morgan Kaufmann Pub., Inc., CA. 3-35.
Christensen, J. 1981. Communication in agencies. Bell Journal of Economics 12:661-674.
Conry, S. E.; Macintosh, D. J.; and Meyer, R. A. 1990. DARES: A distributed automated REasoning system. In Proc. of AAAI-90, MA. 78-85.
Davis, R. and Smith, R. G. 1983. Negotiation as a metaphor for distributed problem solving. Artificial Intelligence 20:63-109.
Demougin, D. 1989. A renegotiation-proof mechanism for a principal-agent model with moral hazard and adverse selection. The Rand Journal of Economics 20:256-267.
Durfee, E. 1992. What your computer really needs to know, you learned in kindergarten. In Proc. of AAAI-92, California. 858-864.
Ephrati, E. and Rosenschein, J. 1991. The Clarke tax as a consensus mechanism among automated agents. In Proc. of AAAI-91, California. 173-178.
Gasser, L. 1991. Social concepts of knowledge and action: DAI foundations and open systems semantics. Artificial Intelligence 47(1-3):107-138.
Grossman, S. and Hart, O. 1983. An analysis of the principal-agent problem. Econometrica 51(1):7-45.
Grosz, B. and Kraus, S. 1993. Collaborative plans for group activities.
In Proc. of IJCAI-93, France.
Harris, M. and Raviv, A. 1978. Some results on incentive contracts with applications to education and employment, health insurance, and law enforcement. The American Economic Review 68(1):20-30.
Hirshleifer, J. and Riley, J. 1992. The Analytics of Uncertainty and Information. Cambridge University Press, Cambridge.
Holmstrom, B. 1982. Moral hazard in teams. Bell Journal of Economics 13(2):324-340.
Kraus, S. and Wilkenfeld, J. 1991a. The function of time in cooperative negotiations. In Proc. of AAAI-91, California. 179-184.
Kraus, S. and Wilkenfeld, J. 1991b. Negotiations over time in a multi-agent environment: Preliminary report. In Proc. of IJCAI-91, Australia. 56-61.
Kraus, S.; Ephrati, E.; and Lehmann, D. 1991. Negotiation in a non-cooperative environment. J. of Experimental and Theoretical AI 3(4):255-282.
Lesser, V. R. 1991. A retrospective view of FA/C distributed problem solving. IEEE Transactions on Systems, Man, and Cybernetics 21(6):1347-1362.
McAfee, R. P. and McMillan, J. 1987. Competition for agency contracts. The Rand Journal of Economics 18(2):296-307.
Myerson, R. 1982. Optimal coordination mechanisms in generalized principal-agent problems. Journal of Mathematical Economics 10:67-81.
Page, F. 1987. The existence of optimal contracts in the principal-agent model. Journal of Mathematical Economics 16(2):157-167.
Pfaffenberger, R. and Walker, D. 1976. Mathematical Programming for Economics and Business. The Iowa State University Press, Ames, Iowa.
Radner, R. 1981. Monitoring cooperative agreements in a repeated principal-agent relationship. Econometrica 49(5):1127-1148.
Rasmusen, E. 1989. Games and Information. Basil Blackwell Ltd., Cambridge, MA.
Ross, S. 1973. The economic theory of agency: The principal's problem. The American Economic Review 63(2):134-139.
Shavell, S. 1979. Risk sharing and incentives in the principal and agent relationship. Bell Journal of Economics 10:55-79.
Sycara, K. P. 1990.
Persuasive argumentation in negotiation. Theory and Decision 28:203-242.
Wellman, M. and Doyle, J. 1992. Modular utility representation for decision-theoretic planning. In Proc. of AI Planning Systems, Maryland. 236-242.
Zlotkin, G. and Rosenschein, J. 1991. Incomplete information and deception in multi-agent negotiation. In Proc. of IJCAI-91, Australia. 225-231.
Victor Lesser, Hamid Nawab†
Computer Science Department, University of Massachusetts, Amherst, MA 01003
{lesser izaskun klassner}@cs.umass.edu

Abstract

This paper presents the IPUS (Integrated Processing and Understanding of Signals) architecture to address the traditional perceptual paradigm's shortcomings in complex environments. It has two premises: (1) the search for correct interpretations of signal processing algorithms' (SPAs') outputs requires concurrent search for SPAs and control parameters appropriate for the environment, and (2) interaction between these search processes must be structured by a formal theory of how inappropriate SPA usage can distort SPA output. We describe IPUS's key components (discrepancy detection, diagnosis, reprocessing, and differential diagnosis) and their instantiation in an acoustic interpretation system. This application, along with another in the radar domain, supports our claim that the IPUS paradigm is feasible and generic.

Introduction

In traditional knowledge-based perceptual systems [8, 17], numeric signal processing is fixed, and interpretation processes are limited to analyzing the single view afforded by this processing. This paradigm assumes that a small set of front-end signal processing algorithms (SPAs) with fixed parameter settings can produce adequate evidence for deriving plausible interpretations under all scenarios. The complex environments that next-generation systems will monitor, however, have variable signal-to-noise ratios, unpredictable source behaviors, and many sources whose signatures can mask or otherwise distort each other.
Under the traditional paradigm, such environments often require combinatorially explosive SPA sets with multiple parameter settings to capture the variety of signals adequately [7] and to handle the variety of processing goals the current environment may dictate.

*This work was supported by the Rome Air Development Center of the Air Force Systems Command under contract F30602-91-C-0038, and by the Office of Naval Research under contract N00014-92-J-1450. The content does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.

Izaskun Gallastegi, Frank Klassner
†ECS Department, Boston University, Boston, MA 02125
hamid@buengc.bu.edu

To avoid this problem, we argue that knowledge-based perceptual research needs to consider a paradigm incorporating dynamic SPA reconfiguration. This term refers not only to reconfiguration for tracking changes in signal behavior, but also to (repeated) reconfiguration for analyzing cached data to reduce uncertainty in signal interpretations. Research in active vision and robotics has recognized the importance of tracking-oriented reconfiguration [19], and tends to use a control-theoretic approach for making reconfiguration decisions. It is indeed sometimes possible to reduce the reconfiguration of small sets of front-end SPAs to problems in linear control theory. In general, however, the problem of deciding when an SPA (e.g., a shape-from-X algorithm or an acoustic filter) with particular parameter settings is appropriate to a given environment may involve nonlinear control or be unsolvable with current control theory techniques. Recent systems in other fields [4, 5, 6, 9, 11] have used symbolically-oriented architectures that permit interpretation processes to reconfigure front-end signal processing. However, as the Related Work section will show, their architectures have not been general enough.
We have developed an architecture to permit more general interaction between signal processing and signal interpretation by explicitly representing the theory underlying front-end SPAs. The Integrated Processing and Understanding of Signals (IPUS) architecture has two premises for complex environments. The first is that the search for correct interpretations of numeric SPAs' outputs requires a concurrent search for SPAs and control parameters appropriate for the environment. The second premise is that the interaction between these search processes must be bidirectional and structured by a formal theory of how inappropriate parameter settings or applications of SPAs lead to specific discrepancies in SPA output. This paper presents (1) the generic architecture, (2) the IPUS components' generic design and interaction, (3) IPUS instantiated in a sound understanding testbed, (4) related work, and (5) conclusions.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

The Generic IPUS Architecture

Before describing IPUS we must first discuss SPAs, the basic means for analyzing environmental signals. When applied to a signal, an SPA instance produces correlates, which serve as evidence for hypothesizing features of objects (e.g., sounds or physical objects). An SPA instance is specified by values for a generic SPA's parameters, and these values induce capabilities or limitations with respect to the scenario being monitored. We use "SPA" to refer to SPA instances. Consider the Short-Time Fourier Transform (STFT) [15] in the acoustic domain. An STFT instance has particular values for its parameters, such as analysis window length, frequency-sampling rate, and decimation factor (consecutive analysis windows' separation).
Depending on a scenario's spectral features and their time-variant nature, these parameter values increase or decrease the instance's usefulness in monitoring the scenario. Instances with large window lengths provide fine frequency resolution for scenarios containing sounds with time-invariant components, but at the cost of poor time resolution for scenarios containing sounds with time-varying components.

Figure 1a shows the generic IPUS architecture. Two types of signal interpretation hypotheses are stored on the hierarchical blackboard: current signal data's interpretations and expectations about future data's interpretations. The design of IPUS assumes that signal data is analyzed in blocks. IPUS uses an iterative process to converge to appropriate SPAs and interpretations. The following is a summary (see Architecture Components and [12, 14] for more detail). For each data block, the loop starts by processing the signal with an initial SPA configuration. These SPAs are selected not only to identify and track the objects most likely to appear, but also to provide indications of when less likely or unknown objects have appeared. In the next loop step, a discrepancy detection process tests for discrepancies between the correlates of each SPA in the current configuration and expectations based on (1) object models, (2) the correlates of other SPAs in the configuration, and (3) application-domain signal characteristics. These comparisons may occur both after SPA output is generated and after interpretations are generated. If discrepancies are detected, a diagnosis process then attempts to explain them in terms of a set of distortion hypotheses. This diagnosis uses the formal theory underlying the signal processing. The loop ends with a signal reprocessing stage that proposes and executes a search plan to find a new front-end (i.e., a set of SPAs) to eliminate or reduce the hypothesized distortions.
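The per-block loop just summarized can be rendered schematically. The knowledge sources are passed in as callables, since their internals are the subject of later sections; the structure of the iteration, not the stubs, is the point, and all names here are illustrative.

```python
def ipus_block_loop(block, config, spas, interpret, detect, diagnose, replan,
                    max_iters=5):
    """One data block's iteration: apply the SPA configuration, detect
    discrepancies, diagnose them as distortions, and replan the front-end,
    until the correlates carry no discrepancies or the iteration cap hits."""
    correlates = spas(config, block)
    hyps = interpret(correlates)
    for _ in range(max_iters):
        discrepancies = detect(correlates, hyps)
        if not discrepancies:
            break
        distortions = diagnose(discrepancies)   # explain via distortion hypotheses
        config = replan(config, distortions)    # propose a new front-end
        correlates = spas(config, block)
        hyps = interpret(correlates)
    return hyps, config
```

A toy instantiation, where the "block" simply demands a minimum FFT size before its tracks resolve, converges by repeatedly doubling the configuration parameter.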
After the loop's completion, if there are any similarly-rated competing top-level interpretations, a differential diagnosis process selects and executes a reprocessing plan to detect features that will discriminate among the alternatives.

IPUS is intended to integrate the search for interpretations of SPA correlates with the search for SPA parameter values appropriate to the scenario. In complex environments we argue that these searches must interact bidirectionally under the guidance of a domain's formal signal processing theory. The dual search in the framework becomes apparent with the following observations. Each time data is reprocessed, whether for disambiguation or distortion elimination, a new state in the SPA search space is tested for how well it eliminates distortions. The measurement of distortion elimination or disambiguation assumes that the system's current state in the interpretation space matches the scenario being observed. Failure to remove a hypothesized distortion after a bounded search in the SPA space will lead to a new search in the interpretation space. This occurs because the diagnosis and reprocessing results represent attempts at justifying the assumption that the current interpretation is correct. If either diagnosis or reprocessing fails, there is a strong likelihood that the current interpretation is not correct and a new search is required in the interpretation space. Furthermore, the results of failed reprocessing can constrain the new interpretation search by eliminating from consideration objects with features that should have been found during the reprocessing. We designed IPUS to serve as the basis of perceptual systems that can manage their interpretations' uncertainty levels. Therefore, we had to provide the architecture's control framework with a way to represent factors that affect interpretations' certainties.
The control framework also had to support context-sensitive focusing on particular uncertainties in order to control engagement and interruption of the architecture's reprocessing loop. For these reasons, IPUS uses the RESUN [3] framework to control knowledge source (KS) execution. This framework supports the view of interpretation as a process of gathering evidence to resolve hypotheses' sources of uncertainty (SOUs). It incorporates a language for representing SOUs as structures which trigger the selection of appropriate interpretation strategies. Problem-solving is driven by information in the problem solving model, which is a summary of the current interpretations and the SOUs associated with each one's supporting hypotheses. An incremental, reactive planner maintains control using control plans and focusing heuristics. Control plans are schemas that define the strategies and SPAs available to the system for processing and interpreting data, and for resolving interpretation uncertainties. Focusing heuristics are context-sensitive tests to select SOUs to resolve and processing strategies to pursue.

Figure 1: 1a shows the generic IPUS architecture; 1b shows the architecture instantiated for the sound understanding testbed. Solid arrows indicate dataflow relations. Dotted arrows indicate plans that the planner can pursue when trying to reduce SOUs (discrepancies) in the problem solving model that were selected by the focusing heuristics. Knowledge to instantiate the architecture for an application is shown in parentheses in 1b. Reprocessing plans can produce SPA output at any abstraction level, not just the lowest.

Architecture Components

This section provides detailed, yet generic, descriptions of the key architectural components. Our focus is on the three roles a domain's formal signal processing theory can play in guiding interpretation and processing in
a complex environment: (1) providing methods to determine discrepancies between an SPA's expected correlate set and its computed correlate set, (2) defining distortion processes that explain how discrepancies between expectations and an SPA's computed correlates result when the SPA has inappropriate values for specific parameters, and (3) specifying strategies to reprocess signals so that distortions are removed or ambiguous data is disambiguated.

We relate a signal processing theory to SPAs and their interaction with the environment using SPA processing models. An SPA processing model describes how the output of the SPA changes when one of its control parameters is varied while all the others are held fixed. SPA processing models serve as the basis for defining how the parameter settings of an SPA can introduce distortions into the SPA's computed correlates. These distortions cause SPA output discrepancies. Consider an SPA processing model corresponding to the STFT's WINDOW-LENGTH parameter and how this model can be used to define distortions. Assume that an STFT with an analysis window of length W is applied to a signal sampled at rate R. If the signal came from a scenario containing two or more frequency tracks closer than R/W, Fourier theory predicts that the tracks will appear as one track in the STFT's correlates.

Discrepancy Detection

Discrepancy detection is crucial to IPUS's iterative approach. Its inclusion in the IPUS loop relies on several observations. An SPA's correlates can be compared with expectations based on object models or on a priori environment constraints such as maximum bounds on sounds' rate of temporal change in frequency. Most importantly, a domain's signal processing theory can specify how one SPA's correlates for a context-independent feature can serve as the basis of expectations for another SPA's output correlates. This specification can serve to check an SPA's appropriateness to the environment.
It can also serve to decide where to selectively apply another SPA in the signal data stream to obtain correlates for context-dependent features. For example, in the acoustic domain a time-domain energy tracking algorithm can detect impulsive sources whose short-duration frequency components might be smoothed to undetectability in the output of an STFT with a wide analysis window. Thus, the energy algorithm can serve as a standard against which STFT output can be compared.

We categorize discrepancies for diagnostic consideration by the focusing heuristics in the following order:

faults: A discrepancy detected between an SPA's correlates and correlates from other SPAs applied to the same data. In [2] we discuss several fault discrepancy detection algorithms used in the sound understanding testbed. Faults are considered for diagnosis first since inconsistency among the outputs of two or more SPAs within a front-end almost always indicates a front-end's inappropriate application.

violations: A discrepancy detected between an SPA's correlates and environment constraints. Violations are ranked second for diagnosis since they reflect a comparison between only one SPA's output and domain characteristics that may be incompletely specified.

conflicts: A discrepancy between an SPA's correlates and the output expected based on previous high-level interpretations. Conflicts are ranked third for diagnosis since they reflect a comparison between SPA output and interpretations which may not be accurate even if they are based on appropriately-processed data.

In IPUS, conflict discrepancy detection is distributed among all KSs that interpret lower-level data as higher-level concepts. Each such KS checks if any data can support a sought-after expectation.
If no such data or only partially supportive data is found, the KS records this fact as an SOU in the problem solving model, to be resolved at the discretion of the focusing heuristics. Once a data block's front-end processing is completed, a discrepancy detection KS checks if SPA correlates are consistent with each other, testing for violations and faults defined by the system designer. An important consideration in discrepancy detection is that expectation hypotheses are sometimes only expressible qualitatively, as in the example, "During the next 400 to 800 msec, a sinusoidal component currently at 1200 Hz will shift to a frequency between 1700 and 2000 Hz." Thus, our testbed discrepancy detection components use a range calculus similar to Allen's [1] to specify discrepancies.

Discrepancy Diagnosis

The discrepancy diagnosis KS is included to take advantage of the fact that a signal domain's SPA processing models can predict the form of an SPA's correlates when the SPA's parameter values are appropriate or inappropriate to the current scenario. The KS models this knowledge in a database of distortion operators. When an operator is applied to a description of undistorted SPA output, it returns the output with the operator's distortion introduced. The KS uses these operators in a means-ends analysis framework [16] to "explain" discrepancies between the expected form of an SPA's correlates and the actual form of the SPA's computed correlates. There are two inputs for this KS: an initial state representing the expected correlates' form and a goal state representing the computed correlates' form. The formal task of diagnosis is to generate an operator sequence mapping the initial state onto the goal state. Note that there is a difference between discrepancies and distortions. Distortions are used to explain discrepancies. It is also possible for several distortions to explain the same kinds of discrepancies.
In the IPUS Instantiation section we will see how a "low frequency resolution" distortion explains 'missing' track discrepancies. The KS's search for a distortion operator sequence is iteratively carried out using progressively more complex abstractions of the initial and goal states, until a level is reached where a sequence can be generated using no more signal information than is available at that level. Thus, the KS mimics expert diagnostic reasoning in that it offers simplest explanations first [18]. Once a sequence is found, the KS enters its verify phase, "drops" to the lowest abstraction level, and checks that each operator's pre- and post-conditions are met when all available state information is considered. If verification succeeds, the operator sequence and a diagnosis region indicating the hypotheses involved in the discrepancy are returned. If it fails, the KS attempts to "patch" the sequence by finding operator subsequences that eliminate the unmet conditions and inserting them in the original sequence. If no patch is possible, and no alternative explanations can be generated, the hypotheses in the initial state are annotated with an SOU with a very negative rating. An issue not addressed in earlier work [16] that arose in the development of IPUS is the problem of inapplicable explanations. Sometimes the first explanation offered by the KS will not enable the reprocessing mechanism to eliminate a discrepancy. In these cases, the architecture permits reactivation of the diagnostic KS with the previous explanation supplied as one that must not be returned again. To avoid repeating the search performed for the previous explanation, the KS stores with its explanations the search-tree context it was in when the explanation was produced. The KS's search for a new explanation begins from that point.
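The operator-sequence search at the heart of the diagnosis KS can be illustrated with a toy breadth-first version over hashable state descriptions. The real KS searches over abstraction levels with verification and patching, which this sketch omits; states, operator names, and the depth bound are all illustrative.

```python
from collections import deque

def explain(expected, observed, operators, max_depth=4):
    """Breadth-first search for a sequence of distortion operators mapping
    the expected correlate description onto the computed (observed) one.
    operators: list of (name, fn) pairs, fn: state -> state.
    Returns the list of operator names, or None if no explanation exists."""
    frontier = deque([(expected, [])])
    seen = {expected}
    while frontier:
        state, seq = frontier.popleft()
        if state == observed:
            return seq  # shortest explanation found first (BFS)
        if len(seq) >= max_depth:
            continue
        for name, op in operators:
            nxt = op(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, seq + [name]))
    return None
```

Because the search is breadth-first, shorter operator sequences are returned before longer ones, mirroring the "simplest explanations first" behavior described above.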
Signal Reprocessing

Once distortions have been explained, it falls to the reprocessing KS to search for appropriate SPAs and parameter values that can reduce or remove them. This component incorporates the following phases: assessment, selection, and execution. The reprocessing KS input includes a description of the input and output states, the distortion operator sequence hypothesized by the diagnosis KS, and a description of the discrepancies present between the input and output states. The assessment phase uses case-based reasoning constrained by signal processing theory to generate reprocessing plans that have the potential of eliminating the hypothesized distortions present in the current situation. For example, Fourier theory indicates that frequency resolution distortions, if actually present in STFT output, can be eliminated in a reapplication of the SPA with its FFT-SIZE parameter double or quadruple that of the original setting. In the selection stage, a plan is selected from the retrieved set based on computation costs or other criteria supplied by focusing heuristics. The execution phase consists of incrementally adjusting the SPA's parameters, applying the SPAs to the portion of the signal data that is hypothesized to contain distortions, and testing for discrepancy removal. The execution phase is necessarily incremental because the situation description is at least partially qualitative, and therefore it is generally impossible to predict a priori exact parameter values to be used in the reprocessing. Execution continues until the distortion causing the discrepancy is removed or plan failure occurs. Plan failure is indicated when either the plan's iterations exceed a fixed threshold or a plan iteration requires an SPA parameter to have a value outside fixed bounds. When failure occurs, the diagnosis KS can be re-invoked to find an alternative explanation for the original distortions.
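The execute phase's incremental adjust-reprocess-test cycle, with its two plan-failure criteria, can be sketched as follows; all names and the callable decomposition are illustrative, not the testbed's actual interfaces.

```python
def execute_plan(param, adjust, reprocess, discrepancy_gone,
                 max_iters, bounds):
    """Incrementally adjust one SPA parameter, reprocess, and test.
    Returns the successful parameter value, or None on plan failure
    (iteration cap exceeded or parameter driven out of bounds)."""
    lo, hi = bounds
    for _ in range(max_iters):
        param = adjust(param)
        if not (lo <= param <= hi):
            return None            # plan failure: parameter out of bounds
        if discrepancy_gone(reprocess(param)):
            return param           # distortion removed
    return None                    # plan failure: iteration cap exceeded
```

For instance, a frequency-resolution plan might double an FFT size per iteration until the merged tracks separate or the size bound is hit, at which point the failure would trigger re-diagnosis as described above.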
If no alternative explanation can be found, the hypotheses involved in the discrepancy are annotated with SOUs indicating low confidence due to irresolvable discrepancies.

Differential Diagnosis

We include the differential diagnosis KS to produce reprocessing plans that prune the interpretation search space when ambiguous data is encountered. Its input is the ambiguous data's set of alternative interpretations, and it returns the time period in the signal to be reprocessed, the evidence each interpretation requires, and the set of proposed reprocessing plans. The KS first labels any observed evidence in the interpretation hypotheses' overlapping features as "ambiguous." It then determines the hypotheses' discriminating features (e.g., in the acoustic domain, those frequency tracks of the competing source hypotheses' models which do not overlap any other models' tracks). For each discriminating feature with no observed evidence, the KS posits an explanation for how the evidence could have gone undetected, assuming the source was present. These explanations index into a plan database, and select reprocessing plans to cause the missing evidence to appear. The KS then checks each ambiguous data region for resolution problems based on source models (e.g., a frequency region's peaks could support one source Y component or two source Z components), and selects reprocessing plans to provide finer component resolution in those regions. The reprocessing plan set returned is the first non-empty set in the sequence: the intersection of the missing-evidence and ambiguous-evidence plan sets, the missing-evidence plan set, the ambiguous-evidence plan set. This hierarchy returns the plans most likely to prune many interpretations from further consideration. The alternative hypotheses' temporal overlap region defines the reprocessing region, and the ambiguous and missing evidence handled by the reprocessing plan set defines the support evidence.
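The plan-set preference order just described reduces to a small helper, sketched here with plan sets represented as Python sets (the representation, like the function name, is illustrative):

```python
def select_plan_set(missing, ambiguous):
    """Return the first non-empty candidate in the preference order:
    intersection of the two plan sets, then missing-evidence plans,
    then ambiguous-evidence plans."""
    for candidate in (missing & ambiguous, missing, ambiguous):
        if candidate:
            return candidate
    return set()
```

Preferring the intersection first selects plans expected to address both kinds of uncertainty at once, which is why they prune the most interpretations.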
A plan from the returned set is then iteratively executed as in the reprocessing KS until either a plan-failure criterion is met or at least one support evidence element is found. This KS's explanatory reasoning for missing evidence is primitive compared to the discrepancy diagnosis KS's. Only simple, single distortions like loss of low-energy components due to energy thresholding are considered; no multiple-distortion explanations are constructed. This design is justified because the KS's role is to quickly prune large areas of interpretation spaces, without preference for any particular interpretation. When a particular interpretation is preferred (rated) over alternatives and a detailed explanation for its missing support is required, IPUS control plans would instead use the discrepancy diagnosis KS, encoding the preferred interpretation in the initial state.

IPUS Instantiation

We have implemented a sound understanding testbed to test the IPUS architecture's realizability and generality (see Figure 1b). In this section we discuss one of the testbed experiment scenarios and how the architecture structured the application of acoustic signal processing knowledge to the scenario's interpretation. The discussion is not intended to illustrate specific control plans' execution or specific SOUs' generation. The testbed version described here is called configuration C.1. We are currently developing a second version, C.2, that still relies on the basic IPUS framework but that uses approximate-knowledge KSs to constrain the number of sound models retrieved when large sound libraries are used. The testbed uses 1500Kb of Common Lisp code and runs on a TI Explorer II+. All SPAs are implemented in software. The testbed SPA database has 3 classes: STFT, energy tracking, and spectral peak-picking. For this experiment the source database contains 5 synthetic and noise-free real-world acoustic source models; the signal is sampled at 10 kHz.
The scenario and pertinent source models appear in Figure 2. The testbed was initially configured to track a hairdryer sound with two frequency components at 1000 and 1050 Hz. The configuration had a high peak-picking energy threshold to minimize the number of low-energy noise peaks produced by the hairdryer, and STFT parameter settings to provide enough resolution to separate the hairdryer's frequency components. The telephone ring and the door slam represent unexpected source events for which the testbed must temporarily switch SPA configurations if it is to identify them with sufficient certainty. Because the testbed's SPA settings were originally set for tracking the hairdryer, the testbed must detect several discrepancies and perform reprocessing to reasonably analyze the scenario.

Figure 2: Scenario and pertinent source definitions (panels: Block 1, Block 2; model confused with phone). Darker shading indicates higher signal energies.

In block 1's data, the [1200,1220] Hz region has insufficient resolution to display the phone ring's components due to the frequency sampling provided by the STFT SPA's FFT-SIZE parameter value. This causes a narrow-band set of peaks with no clear energy trends to appear in the region, thus violating the noise distribution model and raising a discrepancy. The output could support the phone ring, the doorbell, or even both. Had only one candidate interpretation been identified, the testbed would have handled the violation discrepancy via the reprocessing loop. Because more than one interpretation exists, however, the testbed's focusing heuristics select differential diagnosis to resolve the interpretation uncertainty. The diagnosis finds two reasons for the confusion: the peak-picking SPA's high energy threshold, designed for the hairdryer, would prevent the doorbell's low-energy 2200 Hz component from appearing even if it were present; and the [1200,1220] region's frequency resolution is too low.
The uncertainty is resolved in favor of the phone ring interpretation through reprocessing that doubles the FFT-SIZE and decreases the energy threshold. The phone ring's definition and block 1's interpretation generate the expectation for block 2 that it should contain the phone ring's frequencies. Because the testbed's primary goal is to track the hairdryer, the parameter settings are reset to their original values. In block 2, the testbed detects a fault discrepancy between its time-domain energy-estimator SPA output and its STFT SPA output. The energy-estimator detects the door slam's substantial energy increase followed about 0.1 seconds later by a precipitous decrease. The STFT SPA, however, produces no significant set of peaks to account for the signal energy flux. This is because the SPA's time decimation parameter is too small. The testbed also detects a conflict discrepancy between expectations established from block 1 for the [1200,1220] frequency region and the STFT SPA's output. The STFT SPA produces a peak set with no energy trends that can support the phone ring's expected continuation because of inadequate frequency sampling in the region. Both discrepancies are resolved by reprocessing based on discrepancy diagnosis explanations. The first discrepancy is resolved through reprocessing with a larger decimation value and smaller STFT windows, while the second is resolved through reprocessing with the finer frequency sampling provided by a 2048 FFT-SIZE.

Related Work

IPUS represents the formalization and extension of concepts explored in our work on a diagnosis system that used formal signal processing theory to debug signal processing systems [16] and in work on meta-level control [10] that used a process of fault-detection, diagnosis, and replanning to choose appropriate parameters for controlling a problem-solving system.
Recent systems have begun to explore interaction between interpretation activity and signal processing. The GUARDIAN system's [9] data management component controls signal sampling rates with respect to real-time constraints. It is designed for monitoring simple signals such as heart rate and does not seem adequate for monitoring signals with complex structures that must be modeled over time. The framework is typical of systems whose input data points already represent useful information and require no formal front-end processing. Many perceptual frameworks [4, 5, 6] implement the reprocessing concept only as reconfiguration guided by differential diagnosis. Often, they continuously gather data from every available SPA whether required for interpretation improvement or not. Only when ambiguous data is observed are certain SPAs' outputs actually examined to distinguish between competing interpretations. This approach's uncertainty representations attribute deviations between signal behavior and event models solely to chance source variations, never to a signal's interaction with unsuitable SPAs. In the GOLDIE system [11] interpretation goals guide the choice of image segmentation algorithms, their parameter settings, and their application regions within an image array. The system generates symbolic explanations for an algorithm's (un)suitability to a particular region. In these features the framework approaches the capabilities of IPUS, but notably it does not incorporate diagnosis. If an algorithm's segmentation is unexpectedly poor, the system cannot diagnose the result and use this information to reformulate algorithm search, but simply re-segments with the original search's next rated algorithm.

Conclusion

IPUS provides structured, bidirectional interaction between the search for SPAs appropriate to the environment and the search for interpretations to explain the SPAs' output.
The availability of a formal signal processing theory is an important criterion for determining the architecture's applicability to a domain. IPUS allows system developers to organize signal processing knowledge into formal concepts of discrepancy tests, SPA processing models, distortion operators, and reprocessing-SPA application strategies. A major architectural contribution is to unify SPA reconfiguration performed for symbolic-based interpretation processes with that performed for numeric-based processes as a single reprocessing concept. With respect to scaling, one might argue that the time required by multiple reprocessings under IPUS would be unacceptably high in noisy environments. This view ignores IPUS's advantage over other paradigms in that it selectively samples several front-end processings' outputs, avoiding the traditional approach of continuously sampling several front-end processings' results. IPUS also encourages the development of fast, highly specialized, theoretically sound SPAs for reprocessing in appropriate contexts [13]. In this respect the IPUS paradigm decreases the expected processing time for scenarios requiring several processing views for plausible interpretations. Our acoustic testbed experiments indicate that the basic functionality and interrelationships of the architecture's components are realizable. An indication of the architecture's generality can be seen in its use not only in the acoustic interpretation testbed discussed in this paper but also in a radar interpretation system being developed at Boston University. Our current work in the architecture is concerned with predicting bounds on the amount of reprocessing an environment can induce in IPUS-based systems.

Acknowledgments

We would like to acknowledge Norman Carver for his role in developing IPUS's control framework and evidential reasoning capabilities.
Malini Bhandaru and Zarko Cvetanović were important contributors to the testbed's early implementation stages, and Erkan Dorken was an important contributor to the design of testbed SPAs.

References

[1] Allen, J. F.; Hayes, P. J. "A Common-Sense Theory of Time," Proc. 1985 Int'l Joint Conf. on AI.
[2] Bitar, N.; Dorken, E.; Paneras, D.; and Nawab, H. "Integration of STFT and Wigner Analysis in a Knowledge-Based Sound Understanding System," Proc. 1992 IEEE Int'l Conf. on Acoustics, Speech and Signal Processing.
[3] Carver, N. "A New Framework for Sensor Interpretation: Planning to Resolve Sources of Uncertainty," Proc. 1991 AAAI.
[4] Dawant, B.; Jansen, B. "Coupling Numerical and Symbolic Methods for Signal Interpretation," IEEE Trans. Systems, Man and Cybernetics, Jan/Feb 1991.
[5] De Mori, R.; Lam, L.; Gilloux, M. "Learning and Plan Refinement in a Knowledge-Based System for Automatic Speech Recognition," IEEE Trans. Pattern Analysis and Machine Intelligence, Feb 1987.
[6] Dove, W. Knowledge-Based Pitch Detection, PhD Thesis, Computer Science Dept., MIT, 1986.
[7] Dorken, E.; Nawab, H.; Lesser, V. "Extended Model Variety Analysis for Integrated Processing and Understanding of Signals," Proc. 1992 IEEE Int'l Conf. on Acoustics, Speech and Signal Processing.
[8] Erman, L.; Hayes-Roth, R.; Lesser, V.; Reddy, D. "The Hearsay II Speech Understanding System: Integrating Knowledge to Resolve Uncertainty," Computing Surveys, v. 12, June 1980.
[9] Hayes-Roth, B.; Washington, R.; Hewett, R.; Hewett, M.; Seiver, A. "Intelligent Monitoring and Control," Proc. 1989 Int'l Joint Conf. on AI.
[10] Hudlicka, E.; Lesser, V. "Meta-Level Control Through Fault Detection and Diagnosis," Proc. 1984 AAAI.
[11] Kohl, C.; Hanson, A.; Reisman, E. "A Goal-Directed Intermediate Level Executive for Image Interpretation," Proc. 1987 Int'l Joint Conf. on AI.
[12] Lesser, V.; Nawab, H.; et al.
"Integrated Signal Processing and Signal Understanding," TR 91-34, Computer Science Dept., University of Massachusetts, Amherst, MA, 1991.
[13] Nawab, H.; Dorken, E. "Efficient STFT Approximation using a Quantization and Difference Method," Proc. 1993 IEEE Int'l Conf. on Acoustics, Speech and Signal Processing.
[14] Nawab, H.; Lesser, V. "Integrated Processing and Understanding of Signals," ch. 6, Knowledge-Based Signal Processing, A. Oppenheim and H. Nawab, eds., Prentice Hall, New Jersey, 1991.
[15] Nawab, H.; Quatieri, T. "Short-Time Fourier Transform," Advanced Topics in Signal Processing, Prentice Hall, New Jersey, 1988.
[16] Nawab, H.; Lesser, V.; Milios, E. "Diagnosis Using the Underlying Theory of a Signal Processing System," IEEE Trans. Systems, Man and Cybernetics, Special Issue on Diagnostic Reasoning, May/June 1987.
[17] Nii, P.; Feigenbaum, E.; Anton, J.; Rockmore, A. "Signal-to-Symbol Transformation: HASP/SIAP Case Study," AI Magazine, vol. 3, Spring 1982.
[18] Peng, Y.; Reggia, J. "Plausibility of Diagnostic Hypotheses: The Nature of Simplicity," Proc. 1986 AAAI.
[19] Swain, M.; Stricker, M., eds. Promising Directions in Active Vision, NSF Active Vision Workshop, TR CS 91-27, Computer Science Dept., University of Chicago, 1991.
lementation

Computer Science Department, University of Massachusetts, Amherst, Massachusetts 01003
sandholm@cs.umass.edu

Abstract

This paper presents a formalization of the bidding and awarding decision process that was left undefined in the original contract net task allocation protocol. This formalization is based on marginal cost calculations grounded in local agent criteria. In this way, agents having very different local criteria (based on their self-interest) can interact to distribute tasks so that the network as a whole functions more effectively. In this model, both competitive and cooperative agents can interact. In addition, the contract net protocol is extended to allow for clustering of tasks, to deal with the possibility of a large number of announcement and bid messages, and to effectively handle situations in which new bidding and awarding is being done during the period when the results of previous bids are unknown. The protocol is verified by the TRACONET (TRAnsportation Cooperation NET) system, where dispatch centers of different companies cooperate automatically in vehicle routing. The implementation is asynchronous and truly distributed, and it provides the agents extensive autonomy. The protocol is discussed in detail and test results with real data are presented.

1 Introduction

The contract net protocol (CNP) (Smith 1980; Smith & Davis 1981; Davis & Smith 1988) for decentralized task allocation is one of the important paradigms developed in distributed artificial intelligence (DAI). Its significance lies in that it was the first work to use a negotiation process involving a mutual selection by both managers and contractors. It was initially applied to a simulated distributed acoustic sensor network.
Footnote 1: Primary support for this work came from the Technology Development Centre of Finland, during the period which the author was working at the Technical Research Centre of Finland, Laboratory for Information Processing, Lehtisaarentie 2A, SF-00340 Helsinki, Finland. Additional support comes from DARPA contract N00014-92-J-1698. The content of the information does not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred.

In this interpretation application, the agents were totally cooperative, and selection of a contractor was based on suitability, for example adjacency, processing capability, and current agent load. However, there was no formal model discussed in this work for making task announcing, bidding and awarding decisions. This paper presents such a formal model, where agents locally calculate their marginal costs for performing sets of tasks. The choice of a contractor is based solely on these costs. The pricing mechanism generalizes the CNP to work for both cooperative and competitive agents. Another important issue not covered in previous work on the CNP is the risk attitude of an agent toward being committed to activities it may not be able to honor, or the honoring of which may turn out to be unbeneficial. Additionally, in previous CNP implementations, tasks have been negotiated one at a time. This is not sufficient, if the effort of carrying out a task depends on the carrying out of other tasks. The framework is extended to handle task interactions by clustering tasks into sets to be negotiated over as atomic bargaining items. Finally, the practical problem of announcement message congestion is solved. Our case problem, vehicle routing, is structured in terms of a number of geographically dispersed dispatch centers of different companies.
Each center is responsible for the deliveries initiated by certain factories and has a certain number of vehicles to take care of the deliveries. The geographical main operation areas of the centers overlap considerably. This provides the potential for multiple centers to be able to handle a delivery. Every delivery has to be included in the route of some vehicle. The local problem of each agent is a heterogeneous fleet multi-depot routing problem, where the vehicle attributes include cost per kilometer, maximum route duration, maximum route length, maximum load weight and maximum load volume (Sandholm 1992a). The objective is to minimize the transportation costs. In solving the problem, each dispatch center - represented by one intelligent agent (Footnote 2) - first solves its local routing problem. After that, an agent can potentially negotiate with other dispatch agents to take on some of their deliveries or to let them take on some of its deliveries for a dynamically constructed charge. In the negotiations the agents exchange sets of deliveries whenever this is profitable, i.e., whenever a contractor is able to carry out the task set with less costs than the manager agent. The negotiations can be viewed as an iterative way of making the routing solution better by going through only feasible solutions (Footnote 3). Here 'feasible' means that each center can take care of all of its deliveries. This is how a solution closer to the global optimum is reached although no global optimization run is performed.

The use of contract nets as opposed to centralized problem solving is most fruitful in operative decision making in volatile domains such as ours and the factory domain of (Parunak 1987). The negotiation is real-time since after each contract is made the exchange of deliveries is made immediately. Thus, between individual negotiations some delivery orders may have been dispatched, new orders may have arrived, and the available vehicles may have changed. There is no iteration among the agents until an equilibrium is reached, unlike the approach of (Wellman 1992), where the bids include a number of the similar items an agent wants to buy and it is assumed that the purchase of one type of items is independent of the purchase of other types of items. In our system, each item (task set) is different and task sets of different announcements are highly interdependent. In the equilibrium approach of (Kuwabara and Ishida 1992), at each iteration, the seller sets the price based on demand and the buyers state the quantity they want to buy. Section 2 presents the architecture of our implementation. Section 3 discusses the local control strategy of an agent. In sections 4 to 7, the negotiation phases of announcing, bidding, awarding and award taking are detailed respectively. Section 8 presents test results with real data and section 9 concludes.

The vehicle routing application is implemented in a system called TRACONET (TRAnsportation Cooperation NET) (Footnote 4). The asynchronous automatic negotiations in TRACONET resemble a directed government contracting scheme, where each involved party is allowed to make one bid for each announcement it receives, and the bids of the other parties are not revealed to it. The negotiations are directed in the sense that an announcement is not sent to all other agents (Parunak 1987), fig. 1. The agents have no fixed hierarchy among themselves. An agent can act both as a manager and a contractor of delivery sets, but it does not have to take both roles, nor is it required to negotiate with all other agents. Further, each agent can reallocate deliveries received from other agents. When announcing, an agent tries to buy some other agent's transportation services at a price, the maximum of which it specifies in the announcement. When bidding, an agent tries to sell its own services at a price, the minimum of which it specifies in the bid. Awarding means actually buying the services of some other center and award taking means actually selling one's services. Unlike the original CNP, in the awarding phase explicit loser messages are sent, fig. 1. These messages free the bidder agents from the commitment of their bids, which affects the pricing of new bids and the evaluation of other agents' bids as will be described. Another option would be to consider a bid a loser if it has not received an award within a time limit, but this does not fit our asynchronous approach, because it forces the manager to award within a strict time limit. The time to analyze bids varies depending on the state of the agent and the number of messages received by it. At this point, we do not know how to realistically set an appropriate upper bound for this time.

Footnote 2: Another choice would be that each agent represented one vehicle. This small grain size approach would probably not be as efficient, because such a large number of agents would congest the negotiation network and the method would be too opportunistic. When the number of vehicles is small, this approach does work, though. An example is given in (McElroy et al. 1989), where automatically guided vehicles transport items inside a factory.
Footnote 3: Centralized versions of iterative routing are discussed in (Waters 1987) and (Wang & Beasley 1984).
Footnote 4: The system is implemented in an object-oriented fashion using the C++ language and the X11 Window System on a network of HP 9000 workstations. Each agent is implemented as one HP-UX (UNIX) process. The agents negotiate over the file system and share no memory.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.
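The four message types exchanged in the protocol (announcement, bid, award, loser) can be sketched as simple records; the field names are illustrative, not TRACONET's actual encoding.

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Announcement:
    """Manager offers a task set at a price of at most c_max."""
    manager: str
    tasks: FrozenSet[str]
    c_max: float

@dataclass(frozen=True)
class Bid:
    """Contractor offers to perform the tasks for at least c_bid."""
    bidder: str
    announcement: Announcement
    c_bid: float

@dataclass(frozen=True)
class Award:
    bid: Bid  # accepted bid: the tasks change hands

@dataclass(frozen=True)
class Loser:
    bid: Bid  # explicit rejection frees the bidder from its commitment

def beneficial(bid: Bid) -> bool:
    # A manager only gains from awarding if the bid undercuts its maximum price.
    return bid.c_bid < bid.announcement.c_max
```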
In our approach, we introduce additional message traffic, which hopefully results in more accurate announcing, bidding and awarding, since the agent will know early on which of its bids it still may have to honor.

Figure 1. Message passing, when agent 1 gives a set of deliveries to agent 2 to be done. (The phases shown are announcing, bidding and awarding.)

Each agent has two main parts: the bargaining system and the local optimizer. The bargaining system is divided into four major components: the announcer, the bidder, the awarder and the award taker. The bargaining system is not restricted to any specific local optimization algorithm (Footnote 5), but the local optimizer has to provide five services. These relate to the counting of marginal costs of a set of deliveries (to remove or to add), to optimizing all deliveries of an agent and to removing and adding sets of deliveries to the agent's routing solution. Agents in the same negotiation network can use different local optimization algorithms tuned to the requirements of each center separately. The local optimizer services could also be given manually by a transportation coordinator in dispatch centers that do not use automatic optimization. Interactive routing is discussed in (Waters 1984) and (Powell & Sheffi 1989).

Footnote 5: A good overview of centralized routing algorithms is given in (Bodin et al. 1983).

3 Local control

In TRACONET, an agent first calls its own local optimizer to make the routing decisions concerning the deliveries and vehicles that belong to the associated dispatch center. Based on these initial solutions, the agents start the negotiations. During the negotiations, the local control loop of an agent repeatedly goes through a sequence of invoking the bidder, awarder, award taker and announcer. The bidder, awarder and award taker handle all the messages that have been received by the time of their calls. In contrast, the announcer sends at most one announcement to agents during one local control loop cycle.
It is preferable to first handle all received messages before sending a new announcement, so that the agents do not get congested by announcements, and announcements are constructed according to the most up-to-date view of the agent's local routing decisions. The messages received during the operation of the bidder, awarder or award taker are handled on the next cycle of the local control loop. This prevents the system from getting stuck at any single phase even if large amounts of messages are coming in. An agent can enter and exit the negotiation network dynamically. When joining the network the agent first deletes all announcements and loser messages that may have accumulated in the incoming message media. Then the agent is ready for the negotiations. However, exiting the negotiation process is not as simple, for two reasons. First, some other agent might be awarding a delivery set to the agent, and if the agent has exited the negotiations, it will not receive the award. Secondly, some other agent might be making a bid to the agent, and if the agent exits the negotiation, the other agent does not receive even a loser message for the bid and will not be freed from the commitment of its bid. The second problem is solved by sending a loser message to the other agents for all unhandled announcements sent to them previously. The first problem is solved by going through a listening phase before logging out of the network. During this phase no announcements and no bids are made. The phase can be ended when replies (awards or loser messages) have been received for all unhandled bids that have been sent out. If an agent wants to reoptimize its local solution, it must first exit the negotiations, reoptimize and then possibly rejoin the negotiations. If the agent did not exit temporarily, the marginal costs calculated before reoptimization would not be valid after it.
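The local control loop just described can be sketched as follows; the component interfaces are illustrative placeholders, not TRACONET's actual code.

```python
def control_cycle(inbox, bidder, awarder, award_taker, announcer):
    """One cycle of the agent's local control loop: the bidder, awarder
    and award taker each handle all messages received so far; the
    announcer then sends at most one new announcement."""
    for handle in (bidder, awarder, award_taker):
        handle(inbox)          # process every pending message
    return announcer()[:1]     # at most one announcement per cycle
```

Handling all pending messages before announcing keeps each new announcement consistent with the agent's latest routing decisions.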
4 Announcing

An agent's announcer chooses a set of deliveries from the deliveries of the center and announces them to other centers in order to get bids from them. In the implementation the announcements focus on deliveries ending in the geographical main operation areas of the potential contractors, because these deliveries are most likely to lead to contracts. The announcing methods differ from each other in the number of tasks (deliveries) to be clustered into each announcement, and in whether a delivery set that has already been announced can be reannounced (Sandholm 1992b). Reannouncing leads to better results, but the negotiations are considerably longer. This, however, is not a serious problem, if we assume that actual deliveries are being done during the negotiations and reannouncing is not done immediately. In algorithm 1, a set of deliveries consists of only one (randomly chosen) delivery, and reannouncing is allowed. The c'_rem(T) service provided by the local optimizer gives a heuristic approximation of the marginal cost c_rem(T) saved if the delivery set T were removed from the routing solution of the agent. The implemented calculation of c'_rem(T) will be described in section 6. If the estimate c'_rem(T) is too low, the other centers will not bid even though that might be beneficial. On the other hand, if the estimate is too high, the agent will also receive unbeneficial bids. The actual value of c'_rem(T) is not as crucial here as it is in the awarding phase, because announcements are not binding. Therefore, even an incorrect calculation of c'_rem(T) will not lead to unbeneficial contracting.

    Randomly choose one of the deliveries ending in another center's main operation area.
    T = {the chosen delivery}.
    Maximum price of the announcement c_max = c'_rem(T).
    For all centers except this center itself:
        If the end stop of the delivery is in the center's main operation area,
        then send an announcement to the center.

Algorithm 1. A simple announcer algorithm.
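Algorithm 1 can be rendered in executable form as follows; the Delivery and Center interfaces and the cost-estimate callable are illustrative stand-ins for the local optimizer's services.

```python
import random

def announce(own_deliveries, other_centers, c_rem_estimate, send):
    """Algorithm 1 (sketch): announce one randomly chosen delivery that
    ends in another center's main operation area."""
    candidates = [d for d in own_deliveries
                  if any(c.covers(d.end_stop) for c in other_centers)]
    if not candidates:
        return None
    delivery = random.choice(candidates)
    task_set = {delivery}
    c_max = c_rem_estimate(task_set)   # heuristic removal-cost estimate c'_rem(T)
    for center in other_centers:
        if center.covers(delivery.end_stop):
            send(center, task_set, c_max)
    return delivery
```

Since announcements are not binding, the estimate c_rem_estimate may be rough: a bad value only makes bids more or less likely, not the contracts unbeneficial.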
Announcing one delivery at a time is not sufficient in general. This is due to the fact that the deliveries are dependent, i.e., for two disjoint delivery sets T_1 and T_2, for the manager, c_rem(T_1 ∪ T_2) ≠ c_rem(T_1) + c_rem(T_2). For example, if the removal cost of either of two deliveries alone is small, but the removal cost of both of them together is large, announcing one delivery at a time would probably not lead to a contract, but announcing two at a time probably would. For the tasks to be truly independent, the following would also have to hold for each potential contractor: c_add(T_1 ∪ T_2) = c_add(T_1) + c_add(T_2), where c_add(T) gives the marginal cost of adding task set T to the agent's routing solution, as will be explained in section 5. The clustering of tasks into (not necessarily disjoint) sets to be bargained over as atomic bargaining items is a complex problem. To solve it, TRACONET's more refined announcer algorithms use domain dependent heuristics. These algorithms, and experiments with them in a domain where all deliveries originate at a common factory, have been discussed in (Sandholm 1992b). For example, in one of them, a delivery d1 was clustered with another delivery d2, the end stop of which was next to the end stop of d1 in a route, if c'_rem({d1, d2}) > a · c'_rem({d1}), where a was a constant. If no more beneficial contracts of any k tasks at a time can be made between any two agents, the solution is called k-optimal, which is a necessary, but not a sufficient, condition for optimality. Neither does m-optimality guarantee n-optimality, if n ≠ m.

5 Bidding

An agent's bidder reads the announcements sent by other agents. If the maximum price mentioned in the announcement is higher than the price that the deliveries would cost if done by this center, a bid is sent with the latter price. Otherwise, no bid is sent for the specified announcement. Denote an arbitrary bid by b and the set of tasks of that bid by T_b.
Let B_uns be the set of unsettled bids sent by an agent previously. Define B_pos to be the set of possible bids that can be awarded to the agent when b is also awarded to the agent, i.e., B_pos = {x | x ∈ B_uns, T_x ∩ T_b = ∅}. Let T_cur be the current set of tasks of the agent. Let function f(T) compute the total cost of the local optimal solution with task set T. Let c_add(T) be the marginal cost of adding task set T into the local solution. For any bid b, the cost c_add(T_b) is bounded below by

c^-_add(T_b) = min over B ⊆ B_pos of [ f(T_b ∪ T_cur ∪ (∪_{z∈B} T_z)) − f(T_cur ∪ (∪_{z∈B} T_z)) ]

and above by

c^+_add(T_b) = max over B ⊆ B_pos of [ f(T_b ∪ T_cur ∪ (∪_{z∈B} T_z)) − f(T_cur ∪ (∪_{z∈B} T_z)) ].

Setting the bid price to be c^-_add(T_b) is an opportunistic approach, and setting it to be c^+_add(T_b) is a safe approach. Assuming that all of the unsettled bids sent by the agent will be awarded to the agent, the bid price can be calculated by

c^all_add(T_b) = f(T_b ∪ T_cur ∪ (∪_{z∈B_pos} T_z)) − f(T_cur ∪ (∪_{z∈B_pos} T_z)),

and assuming that none of the unsettled bids sent by the agent will be awarded to it, the bid price is as follows:

c^non_add(T_b) = f(T_b ∪ T_cur) − f(T_cur).

Clearly, c^-_add(T_b) ≤ c^all_add(T_b) ≤ c^+_add(T_b) and c^-_add(T_b) ≤ c^non_add(T_b) ≤ c^+_add(T_b), but the partial order of c^all_add(T_b) and c^non_add(T_b) varies. This is because in this domain, both economies of scale (implying c^all_add(T_b) < c^non_add(T_b)) and diseconomies of scale (implying c^non_add(T_b) < c^all_add(T_b)) are present. In (Wellman 1992), only diseconomies of scale are present. The cost c^non_add(T_b) is faster to compute than c^all_add(T_b), and it gives a better approximation of c_add(T_b) when bids are seldom awarded to the agent. This is usually the case, if the network has many agents. In the original CNP, an agent could have multiple bids concerning different contracts pending concurrently in order to speed up the operation of the system (Smith 1980). We have followed this approach for the same reason, although negotiations over only one contract at a time allow a more precise bid price.
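The two practical bid-price estimates above can be sketched as follows; f_heur stands in for the heuristic routing-cost approximation, and the set encodings are illustrative.

```python
def bid_prices(f_heur, T_b, T_cur, B_pos_tasks):
    """Two bid-price estimates for a task set T_b (sketch):
    c^all_add assumes every unsettled compatible bid is awarded to us;
    c^non_add assumes none of them is."""
    union_pos = set().union(*B_pos_tasks) if B_pos_tasks else set()
    c_all = f_heur(T_b | T_cur | union_pos) - f_heur(T_cur | union_pos)
    c_non = f_heur(T_b | T_cur) - f_heur(T_cur)
    return c_all, c_non
```

With a purely additive cost function the two estimates coincide; only economies or diseconomies of scale in f_heur drive them apart.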
If only one bid is allowed to be pending from one agent at a time, B_pos = ∅ and c^-_add(T_b) = c^+_add(T_b) = c^all_add(T_b) = c^non_add(T_b). Fig. 2 compares results of allowing multiple bids and awards simultaneously to those of allowing only one announcement (implying only one award) and one bid at a time. Calculation of the local utility function takes time. This has not been taken into account in the CNP or in work in game theory. In our domain, calculating the marginal costs (and therefore the announcing, bidding and awarding) takes computational time. Because the calculation of the truly optimizing function f takes exponential time in our domain, we use a heuristic approximation f', for which f(T) ≤ f'(T) for any task set T. In our domain, the calculation of f'(T ∪ T_cur) would be very fast if we knew f'(T_cur), because it could be calculated incrementally by just adding the new tasks T to the solution without altering the original solution. The problem is that we do not know the optimal f(T_cur), but only a heuristic approximation f'(T_cur) of it. In the tests presented in this paper, the bid price c'_add(T_b) was calculated incrementally like this with respect to the current heuristic solution, assuming that none of the agent's unsettled bids are awarded to it. This assumption makes the calculation semi-opportunistic. Therefore an agent using this strategy may make unbeneficial contracts now and then. A safe approach would be to use a heuristic upper bound for c^+_add(T_b) as the bid price, but its calculation is slower than that of c'_add(T_b).

    Read in all received announcements and call this set A.
    For each announcement a ∈ A:
        Call the set of deliveries in a T_a and the maximum price c_max.
        If f'(T_cur ∪ T_a ∪ T_pos) < ∞ (feasibility check; T_pos defined w.r.t. a potential bid b with the deliveries of a):
            Set c_bid = c'_add(T_a).
            If c_bid < c_max:
                Send a bid with the identifier of the announcement, the name of this center and cost c_bid.

Algorithm 2. The bidding algorithm.
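Algorithm 2 can be sketched in executable form; the feasibility and marginal-cost callables are illustrative stand-ins for the local optimizer's services.

```python
import math

def bid_on_announcements(announcements, T_cur, T_pos, f_heur,
                         c_add_estimate, send_bid):
    """Algorithm 2 (sketch): bid on each announcement whose tasks are
    feasible to add and cheaper to perform than the announced maximum."""
    for ann_id, T_a, c_max in announcements:
        # Feasibility check: the solution must stay feasible even if all
        # unsettled possible bids and this one are awarded to us.
        if f_heur(T_cur | T_a | T_pos) == math.inf:
            continue
        c_bid = c_add_estimate(T_a)   # heuristic marginal cost c'_add(T_a)
        if c_bid < c_max:
            send_bid(ann_id, c_bid)
```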
Because of binding bids, a feasibility check in algorithm 2 checks that the agent's transportation solution will be feasible even if all of the previous unsettled possible bids and this bid are awarded to the agent. In domains (unlike ours) where the feasibility check often restricts the bidding, the bidder should choose the most profitable combination among the possible combinations of beneficial bids to send. Using the previously discussed bidding methods, the negotiation network got congested with announcements, i.e., some of the agents were receiving announcements at a faster pace than they could process. The problem occurred only with announcements, because in our domain the number of them far exceeds the number of other messages. The reason the congested agents could not keep pace was that the time to handle an announcement increased with the number of previously sent unsettled bids, mainly because of the feasibility check. The more announcements an agent had received, the more bids it was able to make, which slowed it down, and during the bidding process even more announcements kept coming in. The congestion problem was solved by making the bidder consider only announcements newer than a certain time limit. This is sensible also because bids made on older announcements would probably not get to the managers before the negotiations concerning these announcements would be over.

6 Awarding

An agent's awarder reads the bids of other agents. Before handling the bids concerning a certain announcement, it checks that a fixed time has passed since the sending of the announcement, so that many potential contractors have had time to bid. An award or loser message is sent to every agent to whom an announcement concerning the same contract was sent earlier. The award is sent to the agent with the most inexpensive bid (Footnote 6). After an award is sent, the awarder removes the set of deliveries from the agent's current deliveries T_cur and from its transportation solution.
If no bids for an announcement have been received by the time of the mentioned time limit, the awarding is postponed until the first bid for this announcement is received. If this takes longer than a second time limit, the agent simply forgets that it has made such an announcement and sends loser messages to all agents to whom the announcement was sent previously. Bids received later for this announcement are deleted.

In the awarding phase the manager has a chance to check that awarding is still beneficial to itself, i.e., it does not have to accept any bid. In deciding whether the awarding is beneficial, the manager also has to consider the unsettled bids that it has sent. Awarding to bid b is beneficial iff C_rem(T_b) > c_b, where c_b is the price mentioned in the bid b, and C_rem(T_b) is the cost of removing the tasks T_b from the manager's own local solution. Unlike in the bidding phase, B_pos = B_uns, the set of unsettled bids. The cost C_rem(T_b) is bounded above by

    C+_rem(T_b) = max over B ⊆ B_pos of [ f(T_cur ∪ ⋃_{z∈B} T_z) − f((T_cur − T_b) ∪ ⋃_{z∈B} T_z) ]

and below by

    C-_rem(T_b) = min over B ⊆ B_pos of [ f(T_cur ∪ ⋃_{z∈B} T_z) − f((T_cur − T_b) ∪ ⋃_{z∈B} T_z) ].

Assuming that all of the agent's unsettled bids will be awarded to it, C_rem(T_b) is calculated by

    c_allrem(T_b) = f(T_cur ∪ ⋃_{z∈B_pos} T_z) − f((T_cur − T_b) ∪ ⋃_{z∈B_pos} T_z),

and assuming that none of the agent's unsettled bids will be awarded to it, C_rem(T_b) is calculated as follows:

    c_nonrem(T_b) = f(T_cur) − f(T_cur − T_b).

Clearly, C-_rem(T_b) ≤ c_allrem(T_b) ≤ C+_rem(T_b) and C-_rem(T_b) ≤ c_nonrem(T_b) ≤ C+_rem(T_b), but the partial order of c_allrem(T_b) and c_nonrem(T_b) varies.
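The four removal-cost quantities can be computed directly from their definitions by enumerating subsets of the pending bids' task sets. A small sketch, with an arbitrary cost function `f` on frozensets standing in for the routing heuristic:

```python
from itertools import chain, combinations

def subsets(items):
    """All subsets of a list, from the empty set to the full set."""
    return chain.from_iterable(combinations(items, r)
                               for r in range(len(items) + 1))

def removal_prices(f, t_cur, t_b, pending):
    """Return (C-_rem, C+_rem, c_allrem, c_nonrem) for removing task set
    t_b, where `pending` holds the task sets of the agent's unsettled
    bids and f is the local cost function (here on frozensets)."""
    def saving(bid_sets):
        extra = frozenset().union(*bid_sets) if bid_sets else frozenset()
        return f(t_cur | extra) - f((t_cur - t_b) | extra)
    all_savings = [saving(b) for b in subsets(pending)]
    return (min(all_savings),          # C-_rem: minimum over B subset of B_pos
            max(all_savings),          # C+_rem: maximum over B subset of B_pos
            saving(tuple(pending)),    # c_allrem: all unsettled bids awarded
            saving(()))                # c_nonrem: no unsettled bids awarded
```

Both point estimates always land between the two bounds, as the inequalities in the text state.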
If only one bid is allowed to be pending from an agent at a time, then either [c_allrem(T_b) = C-_rem(T_b) and c_nonrem(T_b) = C+_rem(T_b)] or [c_nonrem(T_b) = C-_rem(T_b) and c_allrem(T_b) = C+_rem(T_b)].

Similarly to our discussion of f', because calculating the truly optimizing function f takes a long time, we use a heuristic approximation f', for which f(T) ≤ f'(T) for any task set T. In our domain, the calculation of f'(T_cur − T_b) would be fast if we knew f'(T_cur), because it could be calculated decrementally by just removing the tasks T_b from the solution without altering the original solution. The problem is that we do not know the optimal f(T_cur), but only a heuristic approximation f'(T_cur) of it. In the tests presented in this paper, the benefit check price c'_rem(T_b) was calculated decrementally like this with respect to the current heuristic solution, assuming that none of the agent's unsettled bids are awarded to it. The assumption makes this calculation semi-opportunistic, and an agent using this strategy may have to take unbeneficial awards later. A safe approach would be to use a heuristic lower bound for C-_rem(T_b) as the benefit check price, but its calculation is slower than that of c'_rem(T_b).

In the current implementation, all bids received before the start of the awarding phase are handled in order of receipt before going on to the next negotiation phase. If the check for benefit is used, the order of awarding may be important, though this is seldom the case in our domain. The awarding of one task set may disable the beneficial awarding of another. Usually the number of received bids per local control loop cycle is small, so the awarder could try all possible orderings of awarding sets of deliveries and carry out the best ordering.

7 Taking awards

An agent's award taker reads the awards and inserts the deliveries from the awards into the agent's deliveries T_cur and its transportation solution.
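The suggestion of trying all orderings of awards can be sketched as follows; `c_rem` here is a caller-supplied estimate of the removal saving, and the benefit check C_rem(T_b) > c_b gates every award. All names are illustrative, not from the paper.

```python
from itertools import permutations

def best_award_order(c_rem, t_cur, bids):
    """Try every ordering of awarding the received bids (cheap when the
    number of bids per cycle is small).  `bids` maps frozensets of tasks
    to bid prices; returns (best net gain, awards made, in order)."""
    best_gain, best_awards = 0.0, ()
    for order in permutations(bids):
        cur, gain, awarded = t_cur, 0.0, []
        for t_b in order:
            saving = c_rem(cur, t_b)
            if saving > bids[t_b]:          # benefit check
                gain += saving - bids[t_b]
                cur = cur - t_b             # tasks leave the local solution
                awarded.append(t_b)
        if gain > best_gain:
            best_gain, best_awards = gain, tuple(awarded)
    return best_gain, best_awards
```

Awarding the larger set first can beat the greedy order, which is exactly why one award may disable the beneficial awarding of another.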
Some contracts may have sneaked in between the bidding for a certain set of deliveries and the taking of the corresponding award. These contracts have altered the routing solution. If opportunistic pricing is used, taking the award might no longer be profitable for the center. Because bids are binding, the center is committed to take the award anyway. Making bids non-binding would not solve the problem, because the contractor, after receiving an award, would have to inform the manager whether or not it will take the award. This would require the manager to keep the delivery set in its routing solution until award taking is confirmed, during which time some changes may have sneaked into its routing solution, and the problem rearises.

8 Experimental results

The purpose of the experiments was to validate the distributed problem solving approach in reducing the total transportation costs among autonomous dispatch centers. A detailed presentation of these experiments is given in (Sandholm 1992a). Table 1 provides results of one example experiment. As can be seen, the negotiations led to considerable transportation cost savings in reasonable time even in such a large problem. In the experiment, company A owned the first three centers and company B owned the last two. The centers were located around Finland. The agents had similar local optimization modules, and each agent's original local routing solution was acquired heuristically using a parallel insertion algorithm (Sandholm 1992a). Each agent executed on its own HP 9000/300 workstation. The profit of each contract was divided in half between the agents, i.e., the actual price of a contract was halfway between the maximum price mentioned in the announcement and the bid price. A choice closer to a real-world competing-agent contracting scheme would be to let the contract price equal the bid price. In 30 minutes, each agent goes through its main control loop 100-200 times.
    Table 1. Columns 2-4 characterize the one-week real vehicle and delivery data of the experiments, and the last two columns show results of the negotiations.

Figure 2 presents example runs with two unsafe bidding schemes. Due to the semi-opportunistic pricing explained before, the local costs of the agents do not decrease monotonically in case 1. An agent is forced to take unbeneficial awards now and then. The unbeneficial contracts are somewhat compensated for by other contracting within the time window shown. The cost of an agent in case 1 decreases faster (in the sense of local control loop cycles required) than in case 2. In case 2, the cost decreased monotonically for every agent. To guarantee monotonic decrease of the cost using opportunistic pricing, one bid at a time should be allowed, and awarding should be allowed only when no bid is pending from the agent. This would require even more local control loop cycles than case 2, where awarding can happen while a bid is pending. In case 1, the agents have to consider more messages on each local control loop cycle. Therefore, the previously mentioned time limits were set to be longer in case 1, and in the same actual time, the agents of case 2 go through more main control loop cycles than those of case 1.

    Figure 2. An example run with the results of the five agents one below another. The x-axis shows the number of local control loop cycles for each agent. The thin gray line shows the evolution of the total length of the truck routes of an agent in kilometers. The black line shows the evolution of the local cost for each agent, so the black line takes into account the amounts paid by the managers to the contractors for carrying out the transportation tasks. The figures in the left column (case 1) show the normal case, where multiple announcements and bids are allowed simultaneously.
The right column (case 2) shows the case where only one announcement (implying at most one award) and one bid are allowed to be pending from one agent at a time.

The role of DAI systems with cooperative and competitive agents is likely to increase in the future. Especially important will be enterprise cooperation: allowing autonomous, even competitive, enterprises to cooperate through the on-line, dynamic establishment of contracts among enterprises. The groundwork for computerizing this cooperation is currently being laid by building networks of enterprises with electronic data interchange. This paper presents, to our knowledge, the first prototype of an application where different enterprises work together automatically using DAI techniques. Our methodology is presented through a concrete application domain, vehicle routing, but it is applicable to other task allocation problems, assuming that a reasonable local objective function is known for each agent.

TRACONET uses task negotiation. Another solution technique for the same problem is to negotiate over resources. If there are many tasks per resource (e.g., many deliveries in one truck route), a higher resolution of cooperation is achieved by exchanging tasks. All possible solutions reached by resource exchange can be reached by task exchange, but not vice versa, so the best possible solution when negotiating tasks is at least as good as the best possible solution when negotiating resources. This does not necessarily imply that after a certain number of iterations, the solution using task negotiation is as good as or better than the solution using resource negotiation. Also, if we use a limit on the maximum number of tasks per announcement, it may happen that the best global solution of task negotiations cannot be reached at all. If fast computation is crucial, coarser grain size negotiations - resource negotiations in this case - may be preferred.
In domains with many resources per task, the above arguments should be reversed.

We have extended the CNP with a formal model for making announcing, bidding and awarding decisions based on local marginal cost calculations. Additionally, announcing, bidding and awarding are allowed while the results of previous bids are still unknown. Safe and opportunistic pricing policies are discussed: opportunism speeds up the negotiations, but safe policies guarantee monotonic decrease of the local cost. Task interaction is handled by heuristically clustering tasks into announcements negotiated over atomically. The implementation is asynchronous and truly distributed, and solves the message congestion problems.

At this stage, the announcing, bidding and awarding decisions do not anticipate future contracts. Future research also includes estimating the marginal costs when a local solution does not exist, so that the agents could negotiate before they solve the local routing problem, and even if a feasible solution to the local problem does not exist at the moment. In the future we wish to extend the protocol for contracts involving multiple agents. In TRACONET, the bidder can only bid for the announced task sets, but allowing counterproposals with different content may speed up the negotiations. Currently there is just one focus in the contract space, and it is committal. Moving non-committal foci in the contract space would enable jumping over local minima, because multiple contracts would be made before the agents have to commit. Finally, profit division mechanisms other than percentage-based ones, and intelligent local reoptimization activation, should be implemented.

Acknowledgements

I would like to thank professor Victor Lesser from the University of Massachusetts at Amherst, Computer Science Department, and research professor Seppo Linnainmaa from the Technical Research Centre of Finland, Laboratory for Information Processing, for their support.

References

Bodin, L. et al. 1983.
Routing and scheduling of vehicles and crews: The state of the art. Computers and Operations Research 10(2):63-211.

Davis, R., and Smith, R.G. 1988. Negotiation as a Metaphor for Distributed Problem Solving. In: Bond, A., and Gasser, L., eds. Readings in Distributed Artificial Intelligence, 333-356. San Mateo, Calif.: Morgan Kaufmann.

Kuwabara, K., and Ishida, T. 1992. Symbiotic Approach to Distributed Resource Allocation: Toward Coordinated Balancing. In Proceedings of the European Workshop on Modeling Autonomous Agents and Multi-Agent Worlds '92.

McElroy, J. et al. 1989. Communication and cooperation in a distributed automatic guided vehicle system. In Proceedings of the IEEE Southeastcon '89, 999-1003.

Parunak, H.V.D. 1987. Manufacturing Experience with the Contract Net. In: Huhns, M., ed. Distributed Artificial Intelligence, 285-310. Los Altos, Calif.: Morgan Kaufmann.

Powell, W., and Sheffi, Y. 1989. Design and implementation of an interactive optimization system for network design in the motor carrier industry. Operations Research 37(1):12-29.

Sandholm, T. 1992a. Automatic Cooperation of Dispatch Centers in Vehicle Routing. M.Sc. Thesis. Research Report No. J-9, Laboratory for Information Processing, Technical Research Centre of Finland.

Sandholm, T. 1992b. Automatic Cooperation of Area-Distributed Dispatch Centers in Vehicle Routing. In Preprints of the International Conference on Artificial Intelligence Applications in Transportation Engineering, 449-467. Institute of Transportation Studies, Univ. of Calif., Irvine.

Smith, R.G. 1980. The Contract Net Protocol: High-Level Communication and Control in a Distributed Problem Solver. IEEE Trans. on Computers C-29(12):1104-1113.

Smith, R.G., and Davis, R. 1981. Frameworks for Cooperation in Distributed Problem Solving. IEEE Trans. on Systems, Man, and Cybernetics 11(1):61-70.

Waters, C.D. 1984. Interactive vehicle routing. Journal of the Operational Research Society 35(9):821-826.

Waters, C.D. 1987.
A solution procedure for the vehicle-scheduling problem based on iterative route improvement. Journal of the Operational Research Society 38(9):833-839.

Wellman, M. 1992. A General-Equilibrium Approach to Distributed Transportation Planning. In Proceedings of AAAI-92, 282-289.

Wong, K., and Beasley, J. 1984. Vehicle routing using fixed delivery areas. Omega International Journal of Management Science 12(6):591-600.
The Breakout Method for Escaping from Local Minima

Paul Morris
IntelliCorp
1975 El Camino Real West
Mountain View, CA 94040
morris@intellicorp.com

Abstract

A number of algorithms have recently been proposed that use iterative improvement (a form of hill-climbing) to solve constraint satisfaction problems. These techniques have had dramatic success on certain problems. However, one factor limiting their wider application is the possibility of getting stuck at non-solution local minima. In this paper we describe an iterative improvement algorithm, called Breakout, that can escape from local minima. We present empirical evidence that this method is very effective in cases where previous approaches have difficulty. Although Breakout is not theoretically complete, in practice it appears to almost always find solutions for solvable problems. We prove that an idealized (but less efficient) version of the algorithm is complete.

Introduction

Several recent papers have studied iterative improvement methods for solving constraint satisfaction and optimization problems. (See [Minton et al. 1990], [Zweben 1990], [Sosic & Gu 1991], [Minton et al. 1992], [Selman, Levesque, & Mitchell 1992].) These methods work by first generating an initial, flawed "solution" (i.e., one containing constraint violations) to a problem. They then try to eliminate the flaws by making local changes that reduce the total number of constraint violations. Thus, they perform hill-climbing in a space where goodness is measured in terms of how few constraints are violated, in the hope that eventually a point will be reached that provides an acceptable solution to the problem. The papers provide empirical and analytical evidence that such methods can lead to rapid solutions for important classes of problems. One drawback of such methods, however, is the possibility of becoming stuck at locally optimal points that are not acceptable as solutions.
(We will henceforth call these "local minima," viewing the local changes as movements on a cost surface where the height reflects the current number of constraint violations.) While the above approaches incorporate some techniques to mitigate this problem, these are at best only moderately successful. For example, the Minton et al. and Selman et al. algorithms can escape from plateaus on the cost surface, since they allow random "sideways" local changes. However, they still get caught in other local minima. This causes them to miss solutions in many difficult SAT and K-coloring problems. While the random walk character of the Zweben algorithm would seem to ensure almost certain eventual movement to a solution, this kind of probabilistic guarantee may not be very useful.¹ The Sosic and Gu approach formulates the search space in a way that avoids local minima. However, the method is specific to N-queens, and has no obvious generalization to other problems.

Another remedy for the local minimum problem is to repeatedly restart the iterative improvement process from new random starting points until an acceptable solution is reached, as is done in the Selman et al. algorithm. This amounts to randomly searching the local minima for a solution. Figure 1 illustrates why this is computationally impractical in many cases. Iterative improvement methods derive their power from an assumption that the number of constraint violations is a rough indicator of the closeness to a solution. In general, we might expect some noise in the estimate, suggesting a cost surface with a cross-section something like that shown in the upper part of the figure. (The lowest point in the surface represents a solution.) Now consider an algorithm that gets stuck at each of the local minima shown. Notice that repeated restarting will perform no better on the upper surface than it would on the lower surface shown in the figure.
That is, it fails to take advantage of the overall trend of the surface. Looking only at the lower surface, it is easy to see that the average time to a solution depends on the number of local minima in a region around the solution (and thus indirectly on the "volume" of the region).

¹As an analogy, if two flasks are connected by a tube, the air molecules will, with probability 1, eventually all pile up in one flask. However, the mean time before this happens is enormous.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

    Figure 1: Why escaping is better than restarting.

For CSPs, the dimension of the cost surface (i.e., the number of states adjacent to a given state) increases linearly with the size of the problem. This makes the "volume" (and hence, presumably, the expense of a restart search) increase rapidly with the size. Now consider an algorithm that can escape from the local minima shown. This should perform better on the upper surface, since it takes advantage of the general trend towards a solution. For this type of surface, the time required to find a solution should be largely independent of the dimensionality. It is known that for certain problems, like N-queens, almost all the local minima are narrow plateaus (Morris [1992]). The above analysis suggests that plateau-escaping algorithms (like that of Minton et al.) would solve such problems very efficiently. Indeed, this does appear to be the case.

In this paper we present a deterministic algorithm for solving finite constraint satisfaction problems using an iterative improvement method. The algorithm includes a technique called breakout for escaping from local minima. In the following sections, we will define the algorithm and compare its performance to that of other methods. Finally, we will prove a theoretical result that helps explain the success of the algorithm.
The Breakout Algorithm

We now discuss the Breakout algorithm in more detail and consider how it applies to a constraint satisfaction problem (CSP). The essential features of this algorithm were first introduced in Morris [1990], where it was applied to the Zebra problem (see Dechter [1990]).²

Informally, a CSP consists of a set of variables, each of which is assigned a value from a set called the domain of the variable. A state is a complete set of assignments for the variables. The solution states must satisfy a set of constraints, which mandate relationships between the values of different variables. We refer the reader to Dechter [1990] for the formal definition of a CSP. In this paper we will consider only finite CSPs, i.e., those where there is a finite set of variables and the domain of each variable is finite. Constraint satisfaction problems are generally expressed in terms of sets of tuples that are allowed. For our purposes, it is convenient instead to focus attention on the nogoods, i.e., the tuples that are prohibited.

The intuition for the Breakout algorithm comes from a physical force metaphor. We think of the variables as repelling each other from values that conflict. The variables move (i.e., are reassigned) under the influence of these forces until they reach a position of equilibrium. This corresponds to a state where each variable is at a value that is repelled the least by the current values of the other variables. In physical terms, an equilibrium state is surrounded by a potential barrier that prevents further movement. If this is not a solution, we need some way of breaking through that barrier to reach a state of lower energy. In an equilibrium state, the variables that are still in conflict are stable because they are repelled from alternative values at least as much as from their current values. Suppose, however, that the repulsive force associated with the current nogoods is boosted relative to the other nogoods.
As the repulsive force on current values increases, at some point, for some variable, it will exceed that applied against the alternative values. Then the variable will no longer be stable, and the iterative improvement procedure can continue. The boosting process has effectively changed the topography of the cost surface so that the current state is no longer a local minimum. We refer to this as a breakout from the equilibrium position.

In iterative improvement, the cost of a state is measured as the number of constraints that it violates, i.e., the number of nogoods that are matched by the state. For the Breakout algorithm, we associate a weight with each nogood, and measure the cost as the sum of the weights of the matched nogoods. The weights have 1 as their initial value. Iterative improvement proceeds as usual until an equilibrium state (i.e., a local minimum) is reached.³ At that point, the weight of each current nogood is increased (by unit increments) until breakout occurs. Then iterative improvement resumes. Figure 2 summarizes the algorithm.

    UNTIL current state is solution DO
        IF current state is not a local minimum
        THEN make any local change that reduces the total cost
        ELSE increase weights of all current nogoods
    END

    Figure 2: The Breakout Algorithm.

²Selman and Kautz [1993] have independently developed a closely related method.
³Plateau points are treated just like any other local minima. Thus, the algorithm relies on breakouts to move it along plateaus.

We note that the algorithm does not specify the initial state. In our experiments we used random starting points. The results of Minton et al. suggest that it may be worthwhile to use a greedy preprocessing algorithm to produce an initial point with few violations. This will generally shorten the time required to reach the first local minimum. The algorithm also does not specify which local change to make in a non-equilibrium state.
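Figure 2 can be transcribed almost directly. The sketch below is our own illustration, not the author's code: nogoods are represented as prohibited partial assignments (dicts), the cost is the weighted count of matched nogoods, and the first cost-reducing change found in a left-to-right scan is taken.

```python
import random

def breakout(variables, domains, nogoods, max_steps=100_000, rng=random):
    """Breakout (Figure 2).  A nogood is a dict of variable->value pairs
    that may not all hold at once; each carries a weight, initially 1,
    and the cost of a state is the summed weight of matched nogoods."""
    state = {v: rng.choice(domains[v]) for v in variables}
    weights = [1] * len(nogoods)

    def matched(s):
        return [i for i, ng in enumerate(nogoods)
                if all(s[v] == val for v, val in ng.items())]

    def cost(s):
        return sum(weights[i] for i in matched(s))

    for _ in range(max_steps):
        current = cost(state)
        if current == 0:
            return state                     # solution: no nogood matched
        improved = False
        for v in variables:                  # take the first improving change
            for val in domains[v]:
                if val == state[v]:
                    continue
                old, state[v] = state[v], val
                if cost(state) < current:
                    improved = True
                    break
                state[v] = old
            if improved:
                break
        if not improved:                     # local minimum: breakout
            for i in matched(state):
                weights[i] += 1
    return None                              # step budget exhausted
```

For SAT, each clause becomes the nogood that falsifies all of its literals, so the weighted cost generalizes the usual unsatisfied-clause count.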
In our implementation, we simply use the first one found in a left-to-right search.

Experimental Results

We tested the Breakout algorithm on several types of CSP, including Boolean 3-satisfiability and graph K-coloring. We describe the results here.

Boolean Satisfiability

For Boolean 3-satisfiability, we generated random solvable formulas. The results we report here are for 3-SAT problems where the clause density⁴ has the critical value of 4.3 that has been identified as particularly difficult by Mitchell et al. [1992]. We use the same method of generating random problems as theirs, except for the following: to ensure the problems are solvable, we select a desired solution⁵ in advance and modify the generation process so that at least one literal of every clause matches it. Specifically, we reject (and replace) any clauses that would exclude the desired solution.⁶ The variables are then initialized to random values before starting the Breakout solution procedure. Note that for Boolean satisfiability problems, the nogoods are just the clauses expressed negatively.

⁴The number of clauses divided by the number of variables.
⁵By symmetry, it doesn't matter which one.
⁶In an earlier version of this paper, instead of using a rejection method, the negated/unnegated status of one literal was directly chosen to match the solution. That method produced very easy problems.

    Vars    Breakouts    HC steps    Time (sec.)
    100     60           168         3.2

    Table 1: Breakout on 3-SAT problems with prearranged solution.

    Table 2: GSAT on similar problems.

The results are shown in Table 1 for a range of problem sizes, averaged over 100 trials in each case. We wish to emphasize that the algorithm never failed to find a solution in any of the trials. Note that the growth of total hill-climbing (HC) steps appears to be a little faster than linear, but less than quadratic. The timing figures shown are the average elapsed time for
the (combined) creation and solution of 3-SAT problems, running in Lisp on a Sun 4/110. A recent paper [Williams & Hogg 1993] has noted that using a prespecified solution when generating random problems introduces a subtle bias in favor of problems with a greater number of solutions, and thus is likely to produce easier problems. (On the other hand, it is a convenient way of producing large known-solvable problems, which are otherwise difficult to obtain.) Selman et al. [1992] use a different random generation process for testing their GSAT algorithm. First they generate formulas that may or may not be satisfiable. Unsolvable formulas are then filtered out by an exhaustive search (using a variant of the Davis-Putnam algorithm). Thus, the results in Table 1 cannot be directly compared to those reported for GSAT. To obtain a better comparison, we reimplemented GSAT and tested it on problems generated in the same way as those used for testing Breakout. The results are shown in Table 2 (averaged over 100 trials). The Tries parameter here is the number of restarts (including the final successful one). The Total Flips parameter is the number of local changes needed to reach a solution (summed over all the Tries). This figure is roughly comparable to the number of hill-climbing steps for the Breakout algorithm. In GSAT, each Try phase is limited to Max-Flips steps; for these experiments, Max-Flips was set to n²/20, where n is the number of variables.⁷

⁷An O(n²) setting was suggested by Bart Selman (personal communication).
⁸The Sun 4/110 is slower than the MIPS machine used by Selman et al.

    Density      3     4     5     6     7
    Breakouts    6.4   36    74    23    8.1
    HC steps     41    115   248   143   99

    Table 3: Breakout for different clause densities.

In terms of the machine-independent parameters,⁸ the problems we use are clearly easier for GSAT than those on which it was originally tested. In particular, the Tries figure appears to stay roughly constant
Nevertheless, the per- formance of Breakout seems significantly better than that of GSAT, and avoids the inconvenience of hav- ing to set the Max-Flips parameter. We remark that testing with a variant of the Davis-Putnam algorithm shows exponential growth for these problems. We also ran Breakout for different values of the clause density, keeping the number of variables fixed at 100. Instead of a peak centered at 4.3, we found one in the neighbourhood of 5. (Testing at higher res- olution indicates a range from 4.8 to 5.2 as the region of greatest difficulty.) The results shown in table 3 are each averaged over 1,000 trials. In preliminary testing of Breakout on problems gen- erated in the same way as those in Selman et al. [1992], average performance appears to degrade to exponen- tial, like that of GSAT, and the peak difficulty occurs at the 4.3 value of the density. Remarkably, the av- erage appears dominated by a small number of very difficult problems. The average over 100 trials has been observed to fluctuate by an order of magnitude, depending on how many of the very difficult problems are encountered. These problems may have cost sur- faces that more closely resemble the lower surface in figure 1. Graph Coloring For graph K-coloring, we generated random solvable K-coloring problems with n nodes and m arcs in essen- tially the same way as described in Minton et al. [1992] (and attributed there to Adorph and Johnson [1990]). That is, we choose a coloring in advance that divides the I< colors as equally as possible between the nodes. Then we generate random arcs, rejecting any that vi- olate the desired coloring, until m arcs have been ac- cepted. The entire process is repeated until a con- nected graph is obtained. We used two sets of test data, one with K = 3 and m= 2n, and the other with K = 4 and m = 4.7n. The first set are the “sparse” problems for which Minton et al. report poor performance of their Min- Conflicts hill-climbing (MCHC) algorithm. 
The second set represents a critical value of the arc density identified by Cheeseman et al. [1991] as producing particularly difficult problems.

Table 4 shows the results on the first set of test data. Each figure is averaged over 100 trials. The algorithm never failed to find a solution on any of the trials. We note that the number of breakouts seems to increase roughly linearly (with some fluctuation). The number of transitions per breakout also appears to be slowly growing. This performance can be contrasted with that of MCHC, which shows an apparent exponential decline in the frequency with which solvable sparse 3-coloring problems are solved (within a bound of 9n steps) as the number of nodes increases.⁹

    Table 4: Breakout and MCHC on 3-coloring.

    Nodes        30    60    90     120    150
    Breakouts    8     47    189    655    1390
    HC steps     49    308   1257   3959   8873

    Table 5: Breakout on 4-coloring.

Table 5 shows the results on the second set of test data. In this case, each figure is averaged over 100 trials, except for N = 120 and 150, which were averaged over only 99 trials each. This really does seem to be a more difficult task for Breakout. For the omitted trials, the algorithm failed to reach a solution even after 100,000 breakouts, and was terminated. This may be a further instance of the phenomenon of a small number of very difficult problems sprinkled among the majority of easier problems.

A Complete Algorithm

The experimental results show that Breakout has remarkable success on important classes of CSPs. This is partially explained by the discussion regarding Figure 1. However, one point has not yet been answered. Since Breakout modifies the cost function, it appears plausible that it could often get trapped in infinite loops; yet the experimental data shows this almost never occurs (at least, for randomly generated problems).
In this section, we provide some insight into this by showing that a closely related algorithm is theoretically complete; that is, it is guaranteed to eventually find a solution if one exists.

⁹We thank Andy Philips for providing this data.

Consider the effect of a breakout on the cost surface: the cost of the current state, and perhaps several neighbouring states (that share nogoods with the current state), is increased. However, all that is really needed to escape the local minimum is that the cost of the current state itself increase. We are thus led to consider an idealized version of Breakout where that is the only state whose cost changes. To facilitate this, we assume every state has a stored cost associated with it that can be modified directly. (The initial costs would be the same as before.) This idealized algorithm is summarized in Figure 3. We will call this the Fill algorithm because it tends to smoothly fill depressions in the cost surface.

    UNTIL current state is solution DO
        IF current state is not a local minimum
        THEN make any local change that reduces the cost
        ELSE increase stored cost of current state
    END

    Figure 3: The Fill Algorithm.

It turns out that this idealized version of Breakout is complete, as we prove here.¹⁰ In the following, we say two states are adjacent if they differ in the value of a single variable, a state is visited when it occurs as the current state during the course of the algorithm, and a state is lifted when its stored cost is incremented as a result of the action of the algorithm. Note that lifting only occurs at a local minimum.

Theorem 1. Given a finite CSP, the Fill algorithm eventually reaches a solution, if one exists.

Proof: Suppose the algorithm does not find a solution. Then we can divide the state space into states that are lifted infinitely often, and states that are lifted at most a finite number of times. Let S be the set of states that are lifted infinitely often.
A boundary state of S is one that is adjacent to a state not in S. To see that S must have a boundary state, consider a path that connects any state in S to a solution. Let s be the last state that belongs to S on this path. Clearly s is a boundary state of S. As the algorithm proceeds, there must eventually come a time when all the following conditions hold.

1. The states outside S will never again be lifted.
2. The cost of each state in S exceeds the cost of every state not in S.
3. A boundary state of S is lifted.

Notice that at the moment the last event occurs, the boundary state involved must be a local minimum. But this contradicts the fact that the state is adjacent to a state not in S, which (by the second condition) has a lower cost. Thus, the assumption that a solution is not found must be false.

The Breakout algorithm may be regarded as a "sloppy," or approximate, version of the Fill algorithm, where some of the increase in cost spills onto neighbouring states. Note that Breakout is much more efficient because of the compact storage of the increased costs.

10The reader may wonder whether a simpler algorithm that just marked local minima, and never visited them again, would be complete. It turns out this is not the case because of the possibility of "painting oneself into a corner." Note that Fill may revisit states.

The Fill algorithm is itself related to the LRTA* algorithm of Korf [1990], which has also been proved complete. The latter algorithm has been studied in the context of shortest path problems, rather than CSPs. In a path problem, the goal state is usually known ahead of time. Note, however, that this is not essential as long as a suitable heuristic distance function is available. Iterative improvement implicitly treats a CSP as a path problem by seeking a path that transforms an initial state into a solution state, thereby obtaining the solution state as a side product.
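The pseudocode of Figure 3 can be rendered directly in code. The following is a minimal sketch; the state representation, the per-state "lifted" bookkeeping, and the choice of the least-cost improving neighbour are our own, since the paper notes that any cost-reducing move would do.

```python
def fill(start, neighbours, base_cost, is_solution, max_steps=1_000_000):
    """The Fill algorithm: descend while possible; at a local minimum,
    lift only the stored cost of the current state.  States must be
    hashable; base_cost gives each state's initial stored cost."""
    lifted = {}                                  # extra stored cost per state

    def cost(s):
        return base_cost(s) + lifted.get(s, 0)

    state = start
    for _ in range(max_steps):
        if is_solution(state):
            return state
        better = [t for t in neighbours(state) if cost(t) < cost(state)]
        if better:
            state = min(better, key=cost)        # any improving move would do
        else:
            lifted[state] = lifted.get(state, 0) + 1   # lift the local minimum
    return None
```

A tiny usage example: on a line of four states with base costs 1, 2, 0, 5, starting at the leftmost state, Fill must lift the start state twice before the depression is filled and the zero-cost state becomes reachable.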
From this viewpoint, the number of conflicts serves as a heuristic distance function. (However, this heuristic is not admissible in the sense of the A* algorithm, because it may occasionally overestimate the distance to a solution.)

Both Fill and LRTA* have the effect of increasing the stored cost of a state when at a local minimum. We note the following technical differences between the two algorithms.

1. LRTA* transitions to the neighbour of least cost, whereas the Fill algorithm is satisfied with any lower cost neighbour.
2. LRTA* may modify costs at states that are not local minima, and may decrease costs as well as increasing them.

Item 1 suggests Fill/Breakout is more suited for CSPs, where the number of states adjacent to a given state is generally very large.

One might consider using the Fill algorithm directly to solve CSPs. However, the only obvious advantage of this over Breakout is the theoretical guarantee of completeness. It appears that, in practice, Breakout almost always finds a solution anyway, and has a much lower overhead with regard to storage and retrieval costs. The Fill algorithm requires storage space proportional to n × l, where n is the number of variables and l is the number of local minima encountered on the way to a solution. By contrast, Breakout only requires storage proportional to the fixed set of nogoods derived from the specification of the problem. Moreover, preliminary experiments suggest that Fill requires many more steps than Breakout to reach a solution. This may be due to a beneficial effect of the cost increase spillovers in Breakout: presumably depressions get filled more rapidly. It is known that Breakout itself is not complete.
As a counterexample, consider a Boolean Satisfiability problem with four variables, x, y, z, w, and the clause

  x ∨ y ∨ z ∨ w

together with the 12 clauses

  ¬x ∨ y    ¬x ∨ z    ¬x ∨ w
  ¬y ∨ x    ¬y ∨ z    ¬y ∨ w
  ¬z ∨ x    ¬z ∨ y    ¬z ∨ w
  ¬w ∨ x    ¬w ∨ y    ¬w ∨ z

Note that these clauses have a single solution, in which all the variables are true. Suppose the initial state sets all the variables to false. It is not hard to see that the Breakout algorithm will produce oscillations here, where each variable in turn moves to true, and then back to false.

To understand this better, consider the three states S1, S2, and S3, such that x is true in S1, y is true in S2, and both x and y are true in S3. All of the other variables are false in each case. Each time S1 occurs as a local minimum, the weight of each of its nogoods is incremented. Thus, the total cost of S1 increases by 3. Since S1 shares two nogoods with S3, the cost of the latter increases by 2 at the same time. Similarly, when state S2 becomes a local minimum, the cost of S3 increases by 2. This means that S3 undergoes a combined increase of 4 during each cycle, which exceeds the increase for each of S1 and S2. Thus, S3 is never visited, and this path to a solution is blocked.

Thus, the basic reason for incompleteness is that the cost increase spillovers from several local minima can conspire to block potential paths to a solution. However, this kind of blockage requires nogoods to interact locally in a specific "unlucky" manner. For large random CSPs, the number of possible exits from a region of the state space tends to be very large, and the probability that all the exits get blocked in this way would appear to be vanishingly small. This may explain why we did not observe infinite oscillations in our experiments.

Conclusions

The class of Boolean 3-Satisfiability problems is of importance because of its central position in the family of NP-complete problems.
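The uniqueness claim for the counterexample is easy to verify by brute force over the sixteen assignments. This check is our own illustration, not part of the original paper.

```python
from itertools import product

VARS = "xyzw"
# x ∨ y ∨ z ∨ w, plus ¬a ∨ b for every ordered pair of distinct variables
clauses = [[(v, True) for v in VARS]]
clauses += [[(a, False), (b, True)] for a in VARS for b in VARS if a != b]
assert len(clauses) == 13

def satisfies(assign, clause):
    # a clause is a list of (variable, required truth value) literals
    return any(assign[v] == sign for v, sign in clause)

models = [dict(zip(VARS, bits))
          for bits in product([False, True], repeat=4)
          if all(satisfies(dict(zip(VARS, bits)), c) for c in clauses)]

# the all-true assignment is the unique model
assert models == [{v: True for v in VARS}]
```

The binary clauses are pairwise implications forcing all variables to take the same value, and the four-literal clause rules out the all-false assignment, leaving only the all-true one.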
We have seen that the Breakout algorithm performs very successfully on 3-SAT problems with prearranged solutions, including those at the critical clause density. Breakout also performs quite well on K-coloring problems, and appears superior to previous approaches for both of these classes. We have provided analyses that explain both the efficiency of the algorithm, and its apparent avoidance of infinite cycles in practice. In particular, an idealized version of the algorithm has been proved to be complete.

Several possibilities for future work suggest themselves. The relationship to LRTA* ought to be explored in greater detail, particularly in view of the attractive learning capabilities of LRTA*. One might also consider applying some form of Breakout to other classes of search problems where a cost measure can be distributed over individual "flaws" in a draft solution. More generally, the metaphor of competing forces that inspired Breakout may encourage novel architectures for other computational systems.

Acknowledgements

The author is grateful to Rina Dechter, Bob Filman, Dennis Kibler, Rich Korf, Steve Minton and Bart Selman for beneficial discussions, and would also like to thank the anonymous referees for their useful comments.

References

Adorf, H. M., and Johnston, M. D. A discrete stochastic neural network for constraint satisfaction problems. In Proceedings of IJCNN-90, San Diego, 1990.

Cheeseman, P.; Kanefsky, B.; and Taylor, W. M. Where the really hard problems are. In Proceedings of IJCAI-91, Sydney, Australia, 1991.

Dechter, R. Enhancement schemes for constraint processing: backjumping, learning, and cutset decomposition. Artificial Intelligence, 41(3), 1990.

Korf, R. E. Real-time heuristic search. Artificial Intelligence, 42(2-3), 1990.

Minton, S.; Johnston, M. D.; Philips, A. B.; and Laird, P. Solving large scale constraint satisfaction and scheduling problems using a heuristic repair method. In Proceedings of AAAI-90, Boston, 1990.
Minton, S.; Johnston, M. D.; Philips, A. B.; and Laird, P. Minimizing conflicts: a heuristic repair method for constraint satisfaction and scheduling problems. Artificial Intelligence, 58(1-3), 1992.

Mitchell, D.; Selman, B.; and Levesque, H. Hard and Easy Distribution of SAT Problems. In Proceedings of AAAI-92, San Jose, California, 1992.

Morris, P. Solutions Without Exhaustive Search: An Iterative Descent Method for Solving Binary Constraint Satisfaction Problems. In Proceedings of AAAI-90 Workshop on Constraint-Directed Reasoning, Boston, 1990.

Morris, P. On the Density of Solutions in Equilibrium Points for the Queens Problem. In Proceedings of AAAI-92, San Jose, California, 1992.

Selman, B.; Levesque, H.; and Mitchell, D. A New Method for Solving Hard Satisfiability Problems. In Proceedings of AAAI-92, San Jose, California, 1992.

Selman, B., and Kautz, H. Domain-Independent Extensions to GSAT: Solving Large Structured Satisfiability Problems. In Proceedings of IJCAI-93, Chambery, France, 1993.

Sosic, R., and Gu, J. 3,000,000 Queens in Less Than One Minute. Sigart Bulletin, 2(2), 1991.

Williams, C. P., and Hogg, T. Exploiting the Deep Structure of Constraint Problems. Preprint, Xerox PARC, 1993.

Zweben, M. A Framework for Iterative Improvement Search Algorithms Suited for Constraint Satisfaction Problems. In Proceedings of AAAI-90 Workshop on Constraint-Directed Reasoning, Boston, 1990.
Generating Explanations of Device Behavior Using Compositional Modeling and Causal Ordering

Patrice O. Gautier and Thomas R. Gruber
Knowledge Systems Laboratory
Stanford University
701 Welch Road, Building C
Palo Alto, CA 94304
gautier@ksl.stanford.edu

Abstract

Generating explanations of device behavior is a long-standing goal of AI research in reasoning about physical systems. Much of the relevant work has concentrated on new methods for modeling and simulation, such as qualitative physics, or on sophisticated natural language generation, in which the device models are specially crafted for explanatory purposes. We show how two techniques from the modeling research, compositional modeling and causal ordering, can be effectively combined to generate natural language explanations of device behavior from engineering models. The explanations offer three advances over the data displays produced by conventional simulation software: (1) causal interpretations of the data, (2) summaries at appropriate levels of abstraction (physical mechanisms and component operating modes), and (3) query-driven, natural language summaries. Furthermore, combining the compositional modeling and causal ordering techniques allows models that are more scalable and less brittle than models designed solely for explanation. However, these techniques produce models with detail that can be distracting in explanations and would be removed in hand-crafted models (e.g., intermediate variables). We present domain-independent filtering and aggregation techniques that overcome these problems.

1. Introduction

This paper presents a method for generating explanations of device behavior characterized by systems of mathematical constraints over continuous-valued quantities. Such models are widely used in engineering for dynamical systems, such as electromechanical and thermodynamic control systems. Given such a model and initial conditions, conventional simulation software can predict and plot the values of these quantities over time.
However, the data can be difficult to interpret because conventional simulators do not explain how the predicted behavior arises from the structure of the modeled system and physical laws.

What we call explanations are presentations of information about the modeled system that satisfy three requirements. First, an explanation offers a meaningful interpretation of the simulation data, explaining how and why and not just what happened. For engineering tasks such as design and diagnosis, it is useful to provide causal and functional interpretations. In this paper we focus on causal interpretations. Second, an explanation should present information at appropriate levels of abstraction. What is appropriate depends on the system being modeled, the purpose of the model, and the modeling primitives. For our tasks, we need explanations at the level of physical mechanisms and component operating modes, rather than graphs of numeric variables. Third, an explanation is a presentation of information in a format that is comprehensible to the human user. In the context of natural language generation, relevant design issues include choosing an appropriate level of detail, summarizing data, and adapting to information needs of users.

We have developed a system that generates explanations with these properties. It is part of the Device Modeling Environment (DME) [12], which integrates model formulation support, qualitative and numerical simulation, and explanation generation. A separate report introduces the explanation architecture and describes the text generation and human interface techniques [11].

Funding was provided by NASA Grant NCC2-537 and NASA Grant NAG 2-581 (under ARPA Order 6822).
In this paper, we focus on how two techniques from qualitative reasoning research, compositional modeling [6] and causal ordering [13], are applied and combined to produce explanations that satisfy the three criteria outlined above without imposing ad hoc or unscalable modeling formalisms. In Section 2 we present a series of example explanations generated in an interactive session. In Section 3 we describe how compositional modeling is used, and in Section 4 we describe the use of causal ordering. In Section 5 we analyze why it works, explaining how combining the two techniques makes it possible to achieve the three design requirements, and how problems that arise from this design are addressed. The final section compares related work.

2. A Running Example

We will demonstrate the explanation technique using a model of the space shuttle's Reaction Control System (RCS). The RCS is a system of thrusters that are used to steer the vehicle. The system consists of tanks, regulators, valves, thrusters, pipes, and junctions. The RCS system model comprises 160 model fragments that generate 150 equations relating 180 parameters. Figure 1 shows the topological structure of the RCS system. A similar picture is displayed on the DME user's screen, providing a user interface for monitoring and editing capabilities. At any time, the user may ask for explanations by clicking the mouse on text or graphics on the screen.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Figure 2 shows a sequence of explanations produced in response to user queries. The first presentation, labeled (a), is an explanation of the salient events of the current state. This "what just happened" explanation uses heuristics to filter irrelevant information. Of the 180 variables, only one was mentioned in the explanation of Figure 2a. All text and graphics presented in DME explanations are mouse-sensitive.
By clicking on an icon, word, phrase, or sentence, the user can ask follow-up questions about any part of an explanation. In Figure 2a, the user clicks on the sentence stating that the check valve is now in blocking mode. The system produces a menu of possible queries relevant to this fact, as shown in Figure 2b. The user asks for an explanation by selecting a query from the menu. The system then generates the explanation shown in Figure 2c, which explains why the selected behavior was observed. In this case, the behavior is determined by a single variable, the pressure differential of the quad check valve. The user then asks for the influences on that variable by clicking on the sentence describing the value of the variable, and selecting the desired query from the menu shown in Figure 2d.

The resulting explanation, shown in presentation 2e, summarizes the causal influences on the pressure differential variable. The value of this variable is determined by two chains of influences: the upstream pressures from the helium tank through the pressure regulators and the downstream pressures coming from the oxygen tank (see Figure 1). Using heuristics described in Section 4, the system simplifies these chains into two influences.

Figure 1: Schematic of the RCS system

In the explanation, it says that the value of the variable in question, pressure differential, is determined by the input pressure at the oxygen tank and the nominal pressure of the primary pressure regulator. It explains that these were the salient influences because the secondary regulator was in pass-through mode and the primary regulator was in normal mode. It shows the equation that results from this simplification at the bottom of the display. In Figure 2f, the user follows up by asking for the influences on the input pressure of the oxygen tank. This produces the explanation shown in Figure 2g, which extends the previous explanation.
In this case, the system explains that the oxygen pressure is determined by the amount of gas and two exogenous constants "by the ideal gas law for the oxygen tank." Clicking on this phrase would result in an explanation that the oxygen tank is modeled as an ideal gas container, which is why the ideal gas law is governing in this situation. These are a few of the explanation types that can be generated on demand (others are described in [11]). We now look at the roles of compositional modeling and causal ordering in generating these explanations.

3. Compositional Modeling

Engineering models used to describe and predict the behavior of systems like the RCS are typically mathematical models. A set of continuous variables represents the values of physical quantities such as pressures and temperatures. The behavior of the system is defined by a set of equations that constrain these variables. Each model is based on a set of approximations, abstractions, and other assumptions about what is relevant to produce the desired data at a reasonable cost. Model formulation is the task of constructing a model from available primitives to answer some question. In electromechanical domains, these models are often constructed by hand from background knowledge of physics and engineering.

In the compositional modeling approach [6] to model formulation, engineering models are constructed from modular pieces, called model fragments. A model fragment is an abstraction of some physical domain, mechanism, structure, or other constituent of a model that contributes constraints to the overall behavior description. Model fragments can represent idealized components, such as resistors, transistors, logical gates, electrical junctions, valves, and pipes, and physical processes such as flows.
Each model fragment has a set of activation conditions that specify when the model holds in a given simulation (e.g., the model of a boiling process can only hold when the temperature of the water is above some threshold). Each model fragment contributes a partial description of the behavior that occurs when the conditions hold. The behavior is specified in terms of algebraic and logical constraints on the values of simulation variables.

DME uses a compositional modeling approach for assembling a mathematical model from a library of model fragments. Model formulation and simulation are interleaved.
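The pairing of activation conditions with contributed constraints can be sketched as follows. This is a hypothetical reconstruction; the class and function names are ours, not DME's actual API, and the boiling example follows the threshold condition mentioned above.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ModelFragment:
    """A model fragment: a condition over simulation variables plus the
    constraints it contributes while that condition holds."""
    name: str
    active_when: Callable[[Dict[str, float]], bool]
    equations: List[str]

def equation_model(fragments, state):
    """The union of equations from all fragments active in this state."""
    eqs = []
    for f in fragments:
        if f.active_when(state):
            eqs.extend(f.equations)
    return eqs

# hypothetical boiling-process fragment: active only above the threshold
boiling = ModelFragment(
    name="boiling",
    active_when=lambda s: s["T_water"] > 100.0,
    equations=["dM_steam/dt = k * Q_in"],
)
assert equation_model([boiling], {"T_water": 120.0}) == ["dM_steam/dt = k * Q_in"]
assert equation_model([boiling], {"T_water": 20.0}) == []
```

Recomputing the active set as variables cross limit points is what produces the qualitative state transitions described later in the paper.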
To generate the causal explanation of Figure 2e, the system computed the subgraph of influences for the vari- able to explain, Fl&a&che&. The subgraph of influences on this variable and the associated equations are shown in Figure 4. Collapsing paths of same dimension: Chains of variables of the same physical dimension are collapsed into single-step influences. The value of a variable is a phys- ical quantity of some dimension, such as pressure, tem- perature, or flow. If a sequence of variables all have the same dimension, then they are presumed to be driven as a whole by some influence. For example, the path of fluid flow in the RCS corresponds to sequences of influ- ences of the same dimension (flow). A straightforward application of causal ordering to ex- planation would be to output the entire subgraph of influ- ences. However, when this sub- graph is large the resulting output would not be an effective presenta- tion of information in textual form (violating our third requirement). A second design alternative would be to only output the immediately adjacent nodes in the influence graph. For example, the explana- tion of Figure 2e would read “PDQuad-check is the difference of the pressure at the output node of the quad check valve (PDln(Quad-check)) and the Pressure at the input of the quad check valve (PD1n(Qua&c.,eck)) by the definition of pressure differential.” Then the user could traverse the next node in influence graph by clicking on one of these two variables and asking for causal influences on it. This approach can distribute a single explanation over several presenta- tions, and requires the user to sort out irrelevant detail. The influences to report to the user are those con- nected to the influenced variable by a collapsed sequence or an adjacent node in the influence graph. The resulting influences need not be terminals in the original influence FigWe 4: The subgraph Of hIfhenCeS on PDQua&check, which are shown in black in Figure 3. 
The circled variables appear in the explanation of Figure 2e. To overcome this problem, DIvIE applies salience heuristics to select a subset of influences to report in an explanation. The system works back through the subgraph from the influenced variable, collapsing paths called influ- ence chains to a single-step influence. For example, in the influence graph of Figure 4, the chain from FOUt(()2-rank) to FDQua&&& and the chain from POPreregeA to PDQuadvcheck were collapsed. Instead of describing the graph of 12 po- tential influences, the explanation says that PDQuadmcheck is simply the difference between FOur(02-rank) and POprmreg-~. Intelligent User Interfaces 267 leaved. During a simulation, DME monitors the activation conditions of model fragments; at each state, the system combines the equations of active model fragments into a set called the equation model. The equation model is then used to drive a conventional numerical simulation. A qualitative state is a period during which the equation model remains unchanged. Within a qualitative state, the numeric values of modeled parameters can change. When parameter values cross certain limit points, the activation conditions of some model fragments become true and others become false, leading to qualitative state transitions. DME monitors the numerical simulation for such changes, and updates the equation model for each new state. The data predicted by simulation are a mathematical consequence of initial values and the constraint equations given in the model. Interpreting the data requires an un- derstanding of the knowledge used in formulating the model, such as the physical mechanisms and component structure underlying the equations. If the engineer looking at the output is not the person who built the model, or if the model is complex and contains hidden assumptions, then it can be difficult for the engineer to make sense of the simulation output. 
DME's explanation services are intended to address this problem by relating predicted data to the underlying modeling choices. Compositional modeling plays an essential role for explanation by providing the derivation of the equations from model fragments. DME uses the derivation information in several ways.

First, transitions between qualitative states are explained as a change in the set of active model fragments. For example, the summary of salient events (Figure 2a) describes those variables whose values have crossed limit points and lists the model fragments that have become active (quad check valve closed) and inactive (quad check valve open). To explain what caused such a change in behavior, the system shows how the activation conditions of the model fragment were satisfied (Figure 2c). Furthermore, DME uses the analysis of limit points in the activation conditions as a salience heuristic, focusing the summary on just those parameters that could lead to qualitative state transitions.

Second, the principles or assumptions underlying an equation can be explained by reference to the model fragments that contributed them. For example, when the user asked the system to describe the influences on the pressure at the input of the oxygen tank, it showed the ideal gas law equation applied to the tank (Figure 2g). It knew that the ideal gas law equation was contributed by the ideal-tank model fragment, which is inherited by the model fragment representing the oxygen tank.

Derivation knowledge is also used when simplifying a causal influence path. As shown in Figure 2e, the system relates the pressure at the quad check valve to the pressure at the primary regulator by explaining that the secondary regulator (which is between the other two components) is in pass-through mode. The model fragment for the pass-through mode specifies that the input and output pressures are equal, and the system removes this equation in the explanation (see Section 4).
Knowing the source of this equation, the pass-through operating mode, helps explain why the pressure at the quad check valve is determined by the pressure at the primary regulator.

4. Causal Ordering

The equations in the model used for simulation in DME do not specify any causal orientation; they are purely mathematical constraints on the values of variables. The pressure differential variable, for instance, is related mathematically to almost every other variable in the model. To generate a causal explanation of the influences on this variable, one needs to determine which variables directly influence this variable.

One approach is to build ad hoc models specifically for explanation, in which the causal influences on all variables are fixed in advance. We reject this option because it is brittle and does not scale with the size of the model. Another approach would be to use compositional modeling, but build an assignment of direct influences into the model fragments. This is done in QP theory [8], in which causality is specified explicitly through direct influences. Using the QP approach, the model fragments representing processes each contribute causal dependency and orientation information, which can be propagated through functional dependencies (indirect influences) to produce a global ordering for the composed model. The only problem with this scheme is that it requires the model fragment writers to anticipate all plausible causal influences in advance.

Instead, we assume that causal influences can be inferred at run time. We use an adaptation of the causal ordering procedure developed by Simon and Iwasaki [13] to infer a graph of the influence relation. Given a set of equations constraining a set of model parameters, and the knowledge of which parameters are exogenous (i.e., determined outside the modeled system), the algorithm produces a dependency graph over the remaining parameters.
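The core of such a procedure can be sketched as follows. This is a simplified illustration of our own, not the full Iwasaki and Simon algorithm, which solves minimal self-contained subsets of equations; here we assume each equation ends up determining exactly one new variable. The toy variable names are ours.

```python
def causal_order(equations, exogenous):
    """equations: list of sets of variable names (each set is one equation).
    Returns a dict mapping each derived variable to the set of variables
    that influence it, built by repeatedly finding an equation with exactly
    one still-undetermined variable."""
    known = set(exogenous)
    influences = {}
    remaining = [set(eq) for eq in equations]
    progress = True
    while remaining and progress:
        progress = False
        for eq in list(remaining):
            unknown = eq - known
            if len(unknown) == 1:
                (v,) = unknown
                influences[v] = eq - {v}   # v is caused by the rest of eq
                known.add(v)
                remaining.remove(eq)
                progress = True
    return influences

# Toy pressure network: a regulator equation ties P_mid to P_src, and a
# differential dP depends on P_mid and P_load (all names hypothetical).
eqs = [{"P_src", "P_mid"}, {"P_mid", "P_load", "dP"}]
order = causal_order(eqs, exogenous={"P_src", "P_load"})
assert order == {"P_mid": {"P_src"}, "dP": {"P_mid", "P_load"}}
```

The resulting mapping plays the role of the dependency graph described above: following the influence sets recursively yields the subgraph of influences on any variable.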
For each variable in an equation model, its causal influences are determined as follows. If it is an exogenous variable and/or a constant, then by definition it is influenced by no other variable. If it is a discrete variable, then it can only be changed by the effect of a discrete mode fragment (e.g., the triggering of a relay), by an operator action (the opening of a valve), or by forward chaining through rules from one of these events. If a variable is integrated from its derivative, then it is influenced by the derivative (e.g., acceleration causes change in velocity). Otherwise, the influences on the variable are the variables that were used to compute it in the numeric simulation. The order of computation is exactly the order given by the causal ordering graph.

Figure 3 shows the causal order graph of the equation model in effect for the explanations of Figure 2. Each node in the graph corresponds to a variable in the model. Each arc represents an influence given by an equation; equations relating more than two variables appear as branching arcs.

graph (i.e., variables that are not influenced). For example, the variable Pout(O2-tank) is reported as an influence on PD(Quad-check), but the former is in turn influenced by three other parameters of the oxygen tank. The user can ask for these influences by invoking a follow-up question, as shown in parts f and g of Figure 2. In explanations where sequences are collapsed, the system displays the equation that results from symbolically solving the set of relevant equations. For example, the equations shown in Figure 4 are reduced to the equation PD(Quad-check) = Pout(O2-tank) - PO(Pr-reg-A). Presenting a chain of influences as a single-step influence imposes a view of the data that suggests a causal interpretation.
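The equality-chain collapsing used in these explanations can be illustrated in a few lines. This is our own minimal reconstruction of the heuristic, and the RCS-style variable names are hypothetical: a chain v1 = v2 = ... = vn in the influence subgraph is reported as the single virtual influence v1 = vn.

```python
def collapse_equality_chain(equalities, start):
    """equalities maps a variable to the variable it is equated to upstream;
    follow the chain from `start` to its far end (cycle-safe)."""
    v, seen = start, {start}
    while v in equalities and equalities[v] not in seen:
        v = equalities[v]
        seen.add(v)
    return v

# Hypothetical chain: a regulator in pass-through mode equates its input
# and output pressures, so two equality links separate the endpoints.
equalities = {
    "P_out(Sec-reg-A)": "P_in(Sec-reg-A)",
    "P_in(Sec-reg-A)": "P_O(Pr-reg-A)",
}
assert collapse_equality_chain(equalities, "P_out(Sec-reg-A)") == "P_O(Pr-reg-A)"
```

Only the two endpoints of the collapsed chain are then reported to the user, with the intervening pass-through equalities cited as the justification.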
The system gives the "reasons" for this causal relationship by listing the model fragments that contributed a collapsed equation and that have activation conditions dependent on time-varying variables. Typically these are operating modes of components. For example, the system justified the single-step jump to the nominal pressure of the primary regulator "because the secondary regulator A was in pass through mode and the primary regulator A was regulating normally" (from Figure 2e).

5. Summary and Analysis

The use of compositional modeling and causal ordering techniques is responsible for several desired properties of the explanation approach we have presented.

First, it is possible to generate causal interpretations (our first design requirement) from models that are designed for engineering analysis and simulation, rather than being crafted specially for explanation. Because causal influences can be inferred from equation models using causal ordering, they need not be built in to the model. Because explanation is integrated with compositional modeling, explanations of the causes of changes in qualitative state (e.g., discrete events like the quad check valve becoming blocked) can be determined by an analysis of the logical preconditions of model fragments that are activated and deactivated. The set of conditions to report need not be anticipated in advance, and it can change as the model fragment library evolves.

Second, the explanations can be presented at useful levels of abstraction (requirement 2), even though they are driven by numerical simulation data. Low-level events such as changes in variable values and mathematical constraints on variables are explained in terms of the model fragments that contributed the constraints; the model fragments represent abstractions such as components, operating modes, and processes. This capability is possible because the derivation of equations from the original model fragments is known to the system.
Third, the explanations can be presented in a suitable format for human consumption (requirement 3). DME's simple text generation procedures are adequate to produce small explanations; the capability to ask interactive follow-up questions gives the user control over the level of detail and lessens the need for advanced text planning and user modeling.

The pedagogical quality of the explanations is a function of the quality of the models. If the model builder divides a system model into modular fragments that make sense in the domain (e.g., components, connections, and operating modes), then the explanation system will be able to describe them. None of the explanation code knows anything about flows, pressures, or even junctions. It knows about the structure of model fragments (activation conditions, behavior constraints, quantity variables) and some algorithms for text generation.

Furthermore, the model builder may add textual annotations incrementally, and the explanations will improve gracefully. For example, if components are not named with text annotations, the system will generate names based on the Lisp atoms used to specify the model fragments. As the textual annotations are added, the system can compose them into more useful labels, such as the subscripted variable notation. This capability is possible because of the modularity and compositionality enabled by compositional modeling.

The major problem with the use of causal ordering and compositional modeling is the introduction of irrelevant detail. A model built specially for explanation can include only those variables and causal relations that are relevant to the target explanations. A model composed from independent model fragments includes intermediate variables and equations such as those modeling the flows and pressures at junctions.
Since the causal ordering algorithm is domain-independent and works bottom-up from equations, rather than top-down from model fragments, these intermediate variables are included in the determination of causal influences on a variable. The solution taken in DME was the application of salience heuristics, as described in Section 4. Although these are not specific to any domain, they are aimed at eliminating the intermediate variables and equations that occur when modeling circuits using constraint equations. Additional heuristics may be needed in other classes of models. Fortunately, it is possible to degrade smoothly when the heuristics fail: if an influence chain should have been collapsed but was not, the user can easily traverse it with a few clicks of the mouse.

6. Related Work

Existing systems for generating explanations of device behavior typically depend on models built explicitly for the explanation or tutoring task [10,19]. When explanations are generated from more general behavior models, the explanations typically follow from hard-coded labeling of causal influence [21] or component function [14].

Much of the work in explanation has concentrated on the generation of high-quality presentations in natural language based on discourse planning and user modeling [7,17,18,19]. These presentation techniques are independent of the modeling method or technique for determining causal influence, and so could be adapted for the explanation approach presented in this paper.

Intelligent User Interfaces 269

The task of giving a causal interpretation to device behavior has been addressed from several perspectives [3,13,20]. Alternatives to the causal ordering algorithm have been developed, such as context-sensitive causality [16] and a method based on bond graphs [20]. These methods differ in the information they require and the class of models they accept.
Given a model like the RCS and the same exogenous labeling, these methods should all produce the same ordering graph. Any of these methods could be used by the explanation technique we have described.

The idea of using such a causal interpretation for generating explanations has been previously proposed. In QUALEX [4], a causal graph is computed from a set of confluences [2], and the graph is used to explain the propagation of perturbations (“if X goes up, Y goes down”). This system is limited by the modeling representation: the confluence equations can only predict the sign of the first derivative and do not scale.

Qualitative models have been used to generate explanations in tutoring and training systems [10,21]. DME’s explanation system can also generate explanations on such models (using QSIM [15] for simulation). Qualitative models have known limitations of scale.

Work on the SIMGEN systems [5,9] was the first to achieve the effect of qualitative explanation using numerical simulation models. SIMGEN also uses a compositional modeling approach, and explains simulation data using the derivation of equations from model fragments. The SIMGEN strategy is to build parallel qualitative and quantitative model libraries, analyze the possible qualitative state transitions for a given scenario description, and compile out an efficient numeric simulator. While DME determines model fragment activation and assembles equation models at run time, SIMGEN precomputes and stores the information relating the quantitative model and the qualitative model (which we call the derivation information). For causal explanations, the SIMGEN systems use the causal assignment that is built into the direct and indirect influences of the qualitative model. If the directly influenced variables are exogenous, this produces the same causal ordering as the Iwasaki and Simon algorithm.
To answer questions such as “what affects this variable?”, SIMGEN currently shows the single-step influences and does not summarize chains of influences.

In principle, the DME explanation method could use the model derivation information from a SIMGEN model library, and SIMGEN could use DME’s text composition, causal ordering, user interface, and filtering techniques. Furthermore, QSIM-style qualitative models can be derived from quantitative models as used in DME. We are working with the authors of SIMGEN and QPC [1] (which is similar to DME and uses QSIM) on a common modeling formalism that might make it possible to exchange model libraries and test these conjectures.

Bibliography

[1] J. Crawford, A. Farquhar, & B. Kuipers. QPC: A Compiler from Physical Models into Qualitative Differential Equations. AAAI-91, pp. 365-371, 1991.

[2] J. de Kleer & J. S. Brown. A qualitative physics based on confluences. Artificial Intelligence, 24:7-83, 1984.

[3] J. de Kleer & J. S. Brown. Theories of Causal Ordering. Artificial Intelligence, 29(1):33-62, 1986.

[4] S. A. Douglas & Z.-Y. Liu. Generating causal explanation from a cardio-vascular simulation. IJCAI-89, pp. 489-494, 1989.

[5] B. Falkenhainer & K. Forbus. Self-explanatory simulations: Scaling up to large models. AAAI-92, pp. 685-690, 1992.

[6] B. Falkenhainer & K. D. Forbus. Compositional modeling: Finding the right model for the job. Artificial Intelligence, 51:95-143, 1991.

[7] S. K. Feiner & K. R. McKeown. Coordinating text and graphics in explanation generation. AAAI-90, pp. 442-449, 1990.

[8] K. D. Forbus. Qualitative Process Theory. Artificial Intelligence, 24:85-168, 1984.

[9] K. D. Forbus & B. Falkenhainer. Self-explanatory simulations: An integration of qualitative and quantitative knowledge. AAAI-90, pp. 380-387, 1990.

[10] K. D. Forbus & A. Stevens. Using qualitative simulation to generate explanations.
Proceedings of the Third Annual Conference of the Cognitive Science Society, 1981.

[11] T. R. Gruber & P. O. Gautier. Machine-generated explanations of engineering models: A compositional modeling approach. IJCAI-93, 1993.

[12] Y. Iwasaki & C. M. Low. Model Generation and Simulation of Device Behavior with Continuous and Discrete Changes. Intelligent Systems Engineering, 1(2), 1993.

[13] Y. Iwasaki & H. Simon. Causality in device behavior. Artificial Intelligence, 29:3-32, 1986.

[14] A. M. Keuneke & M. C. Tanner. Explanations in knowledge systems: The roles of the task structure and domain functional models. IEEE Expert, 6(3):50-56, 1991.

[15] B. Kuipers. Qualitative simulation. Artificial Intelligence, 29:289-338, 1986.

[16] M. Lee, P. Compton, & B. Jansen. Modelling with Context-Dependent Causality. In R. Mizoguchi, Ed., Proceedings of the Second Japan Knowledge Acquisition for Knowledge-Based Systems Workshop, Kobe, Japan, pp. 357-370, 1992.

[17] J. D. Moore & W. R. Swartout. Pointing: A way toward explanation dialog. AAAI-90, pp. 457-464, 1990.

[18] C. Paris. The Use of Explicit User Models in Text Generation: Tailoring to a User’s Level of Expertise. PhD Thesis, Columbia University, 1987.

[19] D. Suthers, B. Woolf, & M. Cornell. Steps from explanation planning to model construction dialogues. AAAI-92, pp. 24-30, 1992.

[20] J. Top & H. Akkermans. Computational and Physical Causality. IJCAI-91, pp. 1171-1176, 1991.

[21] B. White & J. Frederiksen. Causal model progressions as a foundation for intelligent learning. Artificial Intelligence, 42(1):99-155, 1990.

Acknowledgments

The DME system is the product of members of the How Things Work project, including Richard Fikes, Yumi Iwasaki, Alon Levy, Chee Meng Low, Fritz Mueller, James Rice, and Pandu Nayak. Brian Falkenhainer and Ken Forbus have been very influential.

270 Gautier
Vibhu O. Mittal
USC/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292

Abstract

Examples form an integral and very important part of many descriptions, especially in contexts such as tutoring and documentation generation. The ability to tailor a description for a particular situation is particularly important when different situations can result in widely varying descriptions. This paper considers the generation of descriptions with examples for two different situations: introductory texts and advanced, reference manual style texts. Previous studies have focused on either the examples or the language component of the explanation in isolation. However, there is a strong interaction between the examples and the accompanying description, and it is therefore important to study how both these components are affected by changes in the situation.

In this paper, we characterize examples in the context of their description along three orthogonal axes: the information content, the knowledge type of the example, and the text-type in which the explanation is being generated. While variations along any of the three axes can result in different descriptions, this paper addresses variation along the text-type axis. We illustrate our discussion with a description of a list from our domain of LISP documentation, and present a trace of the system as it generates these descriptions.

Introduction

Examples are an integral part of many descriptions, especially in contexts such as tutoring and documentation generation. Indeed, the importance of using illustrative examples in communicating effectively has long been recognized, e.g., (Greenwald, 1984; Doheny-Farina, 1988; Norman, 1988). People like examples because examples tend to put abstract, theoretical information into concrete terms they can understand.
In fact, one study found that 76% of users looking at system documentation initially ignored the description and went straight to the examples (LeFevre and Dixon, 1986). A system that generates descriptions must thus be able to include examples. Furthermore, the ability to tailor a description for a particular situation is particularly important, as different situations can result in widely varying descriptions, where both the textual descriptions and the accompanying examples vary. Some researchers have already looked at how a textual description can be affected by different situations (or different users), e.g., (Paris, 1988; Bateman and Paris, 1989). Others have studied how to construct or retrieve appropriate examples, e.g., (Rissland and Soloway, 1980; Ashley and Aleven, 1992; Rissland, 1983; Suthers and Rissland, 1988). However, the issue of tailoring descriptions that include examples for the situation at hand has not been addressed. Yet, it is clear that one cannot plan a description tailored to a user and then, independently and as an afterthought, add some examples to the description: Sweller and his colleagues found that if the examples and the descriptive component were not well integrated, the combination could result in reduced user comprehension (Chandler and Sweller, 1991; Ward and Sweller, 1990). Examples and text must be presented to the user as a coherent whole, and together, appropriately tailored to the situation.

The authors gratefully acknowledge support from NASA-Ames grant NCC 2-520 and DARPA contract DABT63-91-C-0025. Cécile Paris also acknowledges support from NSF grant IRI-9003087.

Department of Computer Science
University of Southern California
Los Angeles, CA 90089
Because examples are crucial in documentation (Charney et al., 1988; Feldman and Klausmeier, 1974; Klausmeier and Feldman, 1975; Reder et al., 1986), and documentation is a critical factor in user acceptance of a system, we chose automatic documentation as our domain to investigate the issue of generating descriptions that include examples. This domain has additional advantages: there is a large body of work on documentation writing, and a lot of actual material that we can study, including numerous examples of the text types we are concerned with (introductory and advanced). In previous work, we have described the issues that must be addressed for a system to be able to generate descriptions with well integrated examples (Mittal and Paris, 1992). In this paper, we show how two specific situations, introductory texts and advanced texts, result in two different such descriptions.

This paper is structured as follows: Section 2 briefly reviews the issues that arise when generating text with examples. Section 3 presents a categorization of example types that allows us to provide a characterization of the differences between the texts in introductory vs reference manuals, and Section 4 discusses these differences. Section 5 describes our text planning framework, and Section 6 presents a trace of the algorithm.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

A list always begins with a left parenthesis. Then come zero or more pieces of data (called the elements of a list) and a right parenthesis. Some examples of lists are:

(AARDVARK)
(RED YELLOW GREEN BLUE)
(2 3 5 11 19)
(3 FRENCH FRIES)

A list may contain other lists as elements. Given the three lists:

(BLUE SKY)
(GREEN GRASS)
(BROWN EARTH)

we can make a list by combining them all with parentheses.
((BLUE SKY) (GREEN GRASS) (BROWN EARTH))

Figure 1: A description of list in an introductory text, from (Touretzky, 1984), p. 35

Section 7 concludes with a look at the limitations.

Integrating Examples in Descriptive Texts

Many issues need to be considered when generating descriptions that integrate descriptive text and examples, because both these components co-constrain and affect each other. The inclusion of examples in an explanation can sometimes cause additional text to be generated; at other times, it can cause certain portions of the original explanation to be elided. A generation system must therefore take into account the interaction between the descriptive text and the examples, as well as effects from other factors, such as the presentation order of the examples, the placement of the examples with respect to each other as well as the descriptive text, etc. While we have discussed these issues elsewhere (Mittal and Paris, 1992; Mittal, 1993 forthcoming), we review some of them here:

- What should be in the text, in the examples, in both?
- What is a suitable example?
- How much information should a single example attempt to convey? Should there be more than one example?
- If multiple examples are to be presented, what is the order of presentation?
- If an example is to be given, should the example be presented immediately, or after the whole description is presented? This will determine whether the example(s) appear within, before, or after the descriptive text.
- Should prompts¹ be generated along with the examples?

Answers to these questions will depend on whether the text is an introductory or advanced text. Consider, for example, the descriptions of list given in Fig. 1, taken from (Touretzky, 1984), an introductory manual, and Fig. 2, taken from (Steele Jr., 1984), a reference manual: they contain very different information in both their descriptive portions as well as their examples; while Fig.
1 contains 8 lists (which are used either as examples or as background to the examples), Fig. 2 has only 2 examples. Finally, the examples in Fig. 1 do not contain prompts, while those in Fig. 2 do.

¹Prompts are attention focusing devices such as arrows, marks, or even additional text associated with examples (Engelmann and Carnine, 1982).

A list is recursively defined to be either the empty list or a CONS whose CDR component is a list. The CAR components of the conses are called the elements of the list. For each element of the list, there is a CONS. The empty list has no elements at all. A list is notated by writing the elements of the list in order, separated by blank space (space, tab, or return character) and surrounded by parentheses. For example:

(a b c)            ; A list of 3 symbols
(2.0s0 (a 1) #\*)  ; A list of 3 things: a
                   ;  floating-point number,
                   ;  another list, and a
                   ;  character object

Figure 2: A description of list from a reference manual, from (Steele Jr., 1984), p. 26

Categorizing Examples

In order to provide appropriately tailored examples, we must first characterize the type of examples that can appear in descriptions. This will then help the system in choosing appropriate examples to present as part of a description. While some example categorizations (Michener, 1978; Polya, 1973) have already been proposed, we found these inadequate, as they do not take the context of the whole explanation into account. This is because previous attempts at categorizing example types were done in an analytical rather than a generational context, and, as a result, these categorizations suffered from two drawbacks from the standpoint of a computational generation system: (i) they do not explicitly take into account the context in which the example occurred, and (ii) they do not differentiate among different dimensions of variation.
An example of how important the context is in determining the category of the example can be seen if we look at the two descriptions of a list shown in Fig. 3, taken from our LISP domain. The empty list NIL is an anomalous example for the first definition, while it is a positive example for the second one. Thus it is clear that categorization depends upon not only the example, but the surrounding context (which includes the descriptive text accompanying the example) as well.

Based on our analysis of a number of instructional texts, numerous reference manuals and large amounts of system documentation, we formulated a three dimensional system to categorize examples by explicitly taking the context into account. The three dimensions are:²

1. the polarity of the example with respect to the description: it can be (i) positive, i.e., the example is an instance of the description, (ii) negative, i.e., the example is not an instance of the description, or (iii) anomalous, i.e., the example either looks positive and is actually negative, or vice-versa.

²Further details on this classification of examples into a three dimensional space may be found in (Mittal and Paris, 1993).

272 Mittal

A left parenthesis followed by zero or more S-expressions followed by a right parenthesis is a list.
From (Shapiro, 1986)

A list is recursively defined to be either the empty list or a CONS whose CDR component is a list. The CAR components of the conses are called the elements of the list. For each element of the list, there is a CONS. The empty list has no elements at all. The empty list NIL therefore can be written ( ), because it is a list with no elements.
From (Steele Jr., 1984)

Figure 3: Two definitions that cause NIL to be classified differently as an illustrative example.

2. the knowledge type being communicated: for example, whether a concept, a relation or a process is being described.
3. the genre or text-type to be generated: for now, we only take into consideration two text-types:³ (i) descriptions in introductory texts, and (ii) descriptions in reference manuals. These are, in our case, closely related to the user types: introductory texts are intended for beginners and naive users, while advanced texts are intended for expert users.⁴

Note that each of these axes can be further sub-divided (for instance, concepts can be further specified as being single-featured or multi-featured concepts, etc.).

Figure 4: Three dimensions for example categorization.

Such a categorization is essential to narrow the search space for suitable examples during generation. Furthermore, it allows us to make use of the numerous results in educational psychology and cognitive science on how to best choose and present examples for a particular text- and knowledge-type. For example, results there suggest constraints that can be taken into consideration with respect to the number of examples to present, e.g., (Markle and Tiemann, 1969), their order of presentation, e.g., (Carnine, 1980; Engelmann and Carnine, 1982), whether anomalous examples should be presented, e.g., (Engelmann and Carnine, 1982), etc.

³We make use of the notion of a text-type here only in a very broad sense, to define distinct categories that affect the generation of examples in our framework for the automatic documentation task. However, these text-types can be refined further. Indeed, several detailed text typologies have been proposed by linguists, e.g., (Biber, 1989; de Beaugrande, 1980).

⁴We have in fact referred to this axis as ‘user type’ in other work.

Introductory vs Advanced Texts

We now consider how descriptions that contain examples differ, when we move along the text-type axis of our categorization, from introductory to advanced text.
We address each of the questions presented in Section 2:

The descriptive component: In the case of the introductory text-type, the descriptive component contains surface or syntactic information; in the case of the reference text-type, the description must include complete information, including the internal structure of the concept.

The actual examples: Examples in both text-types illustrate critical features⁵ of the surface or syntactic form of the concept or its realization. In introductory texts, however, examples are simple and tend to illustrate only one feature at a time. (Sometimes it is not possible to isolate one feature, and an example might illustrate two features; in this case, the system will need to generate additional text to mention this fact.) On the other hand, examples in reference texts are multi-featured.

The number of examples: Since introductory texts usually contain single-featured examples, the number of examples depends upon the number of critical features that the concept possesses. In contrast, reference texts contain examples that contain three or four features per example (Clark, 1971), and, therefore, proportionately fewer examples need to be presented.

The polarity of the examples: Introductory texts make use of both positive and negative examples, but not anomalous examples. Advanced texts, on the other hand, contain positive and anomalous examples.

The position of the examples: In introductory texts, the examples are presented immediately after the point they illustrate is mentioned. This results in descriptions in which the examples are interspersed in the text. On the other hand, examples in reference texts must be presented only after the description of the concept is complete.

Prompts: The system needs to generate prompts for examples that contain more than one feature.
The system must also generate prompts in the case of recursive examples (they use other instances which are also instances of the concept), and anomalous examples if background text has not yet been generated (as is done for introductory texts).

These guidelines are summarized in Fig. 5. In the following section, we will illustrate how a system can use these guidelines to generate descriptions (text and examples) for both introductory and advanced texts, in our domain of the programming language LISP.

⁵Critical features are features that are necessary for an example to be considered a positive example of a concept. Changes to a critical feature cause a positive example to become a negative example.

Our system is part of the documentation facility we are building for the Explainable Expert Systems (EES) framework (Swartout et al., 1992). The framework implements the integration of text and examples within a text-generation system. More specifically, we use a text-planning system that constructs text by explicitly reasoning about the communicative goal to be achieved, as well as how goals relate to each other rhetorically to form a coherent text (Moore and Paris, 1989; Moore, 1989; Moore and Paris, 1992). Given a top level communicative goal (such as (KNOW-ABOUT HEARER (CONCEPT LIST))),⁶ the system finds plans capable of achieving this goal. Plans typically post further sub-goals to be satisfied. These are expanded, and planning continues until primitive speech acts are achieved. The result of the planning process is a discourse tree, where the nodes represent goals at various levels of abstraction, with the root being the initial goal, and the leaves representing primitive realization statements, such as (INFORM ...) statements. The discourse tree also includes coherence relations (Mann and Thompson, 1987), which indicate how the various portions of text resulting from the discourse tree will be related rhetorically.
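The goal-expansion process just described can be sketched roughly as follows. The plan library, goal names, and operator bodies below are invented for illustration (the actual system uses the Moore and Paris text planner with selection heuristics); the point is only the control structure: operators post subgoals until only primitive INFORM acts remain, yielding a discourse tree whose leaves are the realization statements.

```python
# Hypothetical sketch of top-down goal expansion into a discourse tree.
PLAN_LIBRARY = {
    # operator: goal head -> the subgoals it posts
    "KNOW-ABOUT": lambda arg: [("BEL-MAIN-FEATURES", arg),
                               ("ELABORATE-FEATURES", arg)],
    "BEL-MAIN-FEATURES": lambda arg: [("INFORM", f"main features of {arg}")],
    "ELABORATE-FEATURES": lambda arg: [("INFORM", f"details of {arg}")],
}

def expand(goal):
    """Expand a communicative goal into a discourse (sub)tree."""
    head, arg = goal
    if head == "INFORM":                  # primitive speech act: a leaf
        return {"goal": goal, "children": []}
    subgoals = PLAN_LIBRARY[head](arg)    # the plan operator posts subgoals
    return {"goal": goal, "children": [expand(g) for g in subgoals]}

def leaves(node):
    """Collect the primitive realization statements, left to right."""
    if not node["children"]:
        return [node["goal"]]
    return [g for child in node["children"] for g in leaves(child)]

tree = expand(("KNOW-ABOUT", "LIST"))
print(leaves(tree))
# -> [('INFORM', 'main features of LIST'), ('INFORM', 'details of LIST')]
```

In the real system each node would also carry the coherence relation linking it to its siblings; this sketch keeps only the goal hierarchy.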
This tree is then passed to a grammar interface which converts it into a set of inputs suitable for a grammar.

Plan operators can be seen as small schemas which describe how to achieve a goal; they are designed by studying natural language texts and transcripts. They include conditions for their applicability, which can refer to the system knowledge base, the user model, or the context (the text plan tree under construction and the dialogue history). In this framework, the generation of examples is accomplished by explicitly posting the goal of providing an example while constructing the text.

We now describe a trace of the system as it plans the presentation of descriptions similar to the ones presented in Fig. 1 and 2. First, assume we want to produce a description of a list for an introductory manual. The system is given a top-level goal: (KNOW-ABOUT HEARER (CONCEPT LIST)). The text planner searches for applicable plan operators in its plan library, and it picks one based on the applicable constraints, such as the text-type (introductory), the knowledge type (concept), etc.⁷ The text-type restricts the choice of the features to present to be syntactic ones. The main features of list are retrieved, and two subgoals are posted: one to list the critical features (the left parenthesis, the data elements and the right parenthesis), and another to elaborate upon them. At this point, the discourse tree has only two nodes apart from the initial node of (KNOW-ABOUT H (CONCEPT LIST)): namely (i) (BEL H (MAIN-FEATURES LIST (LT-PAREN DATA-ELMT RT-PAREN))), and (ii) (ELABORATION FEATURES),⁸ which will result in a goal to describe each of the features in turn.

The planner searches for appropriate operators to satisfy these goals. The plan operator to describe a list of features indicates that the features should be mentioned in a sequence. Three goals are appropriately posted at this point.
These goals result in the planner generating a plan for the first two sentences of Fig. 1. The other sub-goal (the ELABORATION) also causes three goals to be posted for describing each of the critical features. Since two of these are for elaborating upon the parentheses, they are not expanded, because no further information is available. So only the goal of describing the data elements remains. A partial representation of the resulting text plan is shown in Fig. 6.⁹

Data elements can be of three types: numbers, symbols, or lists. The system can either communicate this information by realizing an appropriate sentence, or through examples (since it can generate examples for each of these types), or both. The text-type (introductory text) constraints cause the system to pick examples. (If the text-type had been ‘reference,’ the system would have delayed the presentation of examples, and text would have been generated at that point instead of the examples.) The system posts two goals to illustrate the two dimensions along which the data elements can vary: the number of elements and the type.

⁶See the references given above for details on the notation used to represent these goals.
⁷When several plans are available, the system chooses one using selection heuristics designed by (Moore, 1989).
⁸ELABORATION is one of the coherence relations defined in (Mann and Thompson, 1987).

For each issue, the effect of the text-type is:

Examples:
introductory: simple, single critical-feature
advanced: complex, multiple critical-features (three or four features)

Positioning the Examples:
introductory: immediately after the points being illustrated
advanced: after the description is complete

Prompts:
introductory: prompt if example has more than one feature
advanced: prompts for anomalous and recursive examples

Figure 5: Brief description of differences between examples in introductory and advanced texts.
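As a rough illustration, the Fig. 5 guidelines could be encoded as a simple lookup table. This is a sketch only: in the actual system these guidelines are realized as constraint specifications on plan operators, not as a table, and the key and value strings below are invented paraphrases of the figure.

```python
# Hypothetical encoding of the Fig. 5 text-type guidelines.
GUIDELINES = {
    "introductory": {
        "examples": "simple, single critical-feature",
        "polarity": {"positive", "negative"},
        "position": "immediately after the point being illustrated",
        "prompts": "if an example shows more than one feature",
    },
    "reference": {
        "examples": "complex, multiple critical-features",
        "polarity": {"positive", "anomalous"},
        "position": "after the description is complete",
        "prompts": "for anomalous and recursive examples",
    },
}

def allows_anomalous(text_type):
    """May this text-type present anomalous examples?"""
    return "anomalous" in GUIDELINES[text_type]["polarity"]

print(allows_anomalous("introductory"))  # False
print(allows_anomalous("reference"))     # True
```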
Information about a particular feature can be communicated efficiently by the system through examples, by using pairs (or groups) of examples as follows:

- if the feature to be communicated happens to be a critical feature, the system generates pairs of examples, one positive and one negative, which are identical except for the feature being communicated, and
- if the feature to be communicated happens to be a variable¹⁰ feature, the system generates pairs of examples that are both positive and are widely different in that feature.

Thus, to communicate the fact that there can be any number of elements, the system posts two goals to generate two differing positive examples, one with a single element and another with multiple elements. The example generation algorithm ensures that the examples selected for related sub-goals (such as the two above) differ in only the dimension being highlighted. However, as the examples contain two critical features (i.e., type is illustrated as well), the system generates prompts to focus the attention of the reader on the number feature (“a list of one element” vs “a list of several elements”). The goal to illustrate the type dimension is handled in similar fashion, with four sub-goals (one each for the types: symbols, numbers, symbols and numbers, and sub-lists) being posted.

⁹All the text plans shown in this paper are simplified versions of the actual plans generated: in particular, the communicative goals are not written in their formal notation, in terms of the hearer’s mental states, for readability’s sake.
¹⁰Variable features are features that can vary in a positive example. Changes to variable features create different positive examples.

Figure 6: Skeletal plan for listing main features of list.
The last data type, sub-lists, is marked by the algorithm as a recursive use of the concept, and is handled specially because the text-type is introductory. In the case of an introductory text, such examples must be introduced with appropriate explanations added to the text. (If the text-type had been ‘reference,’ the system would have generated a prompt denoting the presence of the sub-list.) The resulting skeletal text-plan generated by the system is shown in Fig. 7.

Consider the second case now, when the text-type is specified as being ‘reference.’ In this case, the system starts with the same top-level goal as before, but the text-type constraints cause the planner to select both the structural representation of a list, as well as the syntactic structure, for presentation. The system posts two goals, one to describe the underlying structure, and one to describe the syntactic form of a list. The two goals expand into the first two paragraphs in Fig. 2. Note that the examples occur at the end of the description. The two examples generated are much more complex than in the previous case, and they contain a number of variable features (the second example shows the variation in element types, as well as the variation in number possible). Since the second example generated contains a list as a data element, the system generates prompts for the examples. For lack of space, the resulting text plan is not presented here.¹¹

¹¹See (Mittal, 1993 forthcoming) for more details.

Figure 7: Skeletal plan for generating examples of lists.

In both of the above cases, the completed discourse tree is passed to an interface which converts the INFORM goals into the appropriate input for the sentence generator. The interface chooses the appropriate lexical and syntactic constructs to form the individual sentences and connects them appropriately, using the rhetorical information from the discourse tree.
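The example-pair strategy used throughout the trace (a minimal positive/negative pair for a critical feature, two widely differing positive examples for a variable feature) can be sketched as follows; the helper function and the LISP strings are hypothetical.

```python
# Hypothetical sketch of the pair-generation strategy for one feature.
def examples_for_feature(kind, base, variant):
    """Return (polarity, example) pairs that highlight one feature.

    kind    -- "critical" or "variable"
    base    -- a positive example
    variant -- for a critical feature: base with that feature broken;
               for a variable feature: a positive example differing
               widely in that feature
    """
    if kind == "critical":
        return [("positive", base), ("negative", variant)]
    return [("positive", base), ("positive", variant)]

# Critical feature: the closing right parenthesis.
print(examples_for_feature("critical", "(A B C)", "(A B C"))
# Variable feature: the number of elements.
print(examples_for_feature("variable", "(A)", "(A B C D E)"))
```

As in the trace, the two examples in each pair differ only in the dimension being highlighted, so the contrast itself carries the information.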
We have presented an analysis of the differences in descriptions that integrate examples for introductory and advanced texts. To be able to do this, we first presented a brief description of our characterization of examples, explicitly taking into account the surrounding context. Variation along any of these axes causes the explanation generated to change accordingly. This variation occurs not just in the descriptive part of the explanation, but also in the examples that accompany it. Since the examples and the descriptive component are tightly integrated and affect each other in many ways, a system designed to generate such descriptions must take these interactions into account and be able to structure the presentation accordingly. We have presented information necessary to generate descriptions for two text-types: introductory and advanced. The algorithm used by the system was illustrated by tracing the generation of two descriptions of the LISP list. The issues we have described are not specific to a particular framework or implementation for the generation of either text or examples. In fact, the algorithm described is implemented in our system as constraint specifications across different plan operators. We have successfully combined two well-known generators (one for text and one for examples) in our system to produce the explanations described in this paper.

References

Ashley, K. D. and Aleven, V., 1992. Generating Dialectical Examples Automatically. In Proceedings of AAAI-92, 654-660. San Jose, CA.

Bateman, J. A. and Paris, C. L., 1989. Phrasing a text in terms the user can understand. In Proceedings of IJCAI 89. Detroit, MI.

Biber, D., 1989. A typology of English Texts. Linguistics 27:3-43.

Carnine, D. W., 1980. Two Letter Discrimination Sequences: High-Confusion-Alternatives first versus Low-Confusion-Alternatives first. Journal of Reading Behaviour, XII(1):41-47, Spring.

Chandler, P. and Sweller, J., 1991.
Cognitive Load Theory and the Format of Instruction. Cognition and Instruction 8(4):292-332.
Charney, D. H., Reder, L. M., and Wells, G. W., 1988. Studies of Elaboration in Instructional Texts. In Doheny-Farina, S. (Ed.), Effective Documentation: What we have learned from Research, 48-72. Cambridge, MA: The MIT Press.
Clark, D. C., 1971. Teaching Concepts in the Classroom: A Set of Prescriptions derived from Experimental Research. Journal of Educational Psychology Monograph 62:253-278.
de Beaugrande, R., 1980. Text, Discourse and Process. Ablex Publishing Co.
Doheny-Farina, S., 1988. Effective Documentation: What we have learned from Research. MIT Press.
Mittal, V. O. and Paris, C. L., 1993. Categorizing Example Types in Context: Applications for the Generation of Tutorial Descriptions. To appear in the Proceedings of AI-ED 93. Edinburgh, Scotland.
Mittal, V. O., 1993 (forthcoming). Generating descriptions with integrated text and examples. PhD thesis, University of Southern California, Los Angeles, CA.
Moore, J. D. and Paris, C. L., 1989. Planning text for advisory dialogues. In Proceedings of ACL 89. Vancouver, B.C.
Moore, J. D. and Paris, C. L., 1992. Planning text for advisory dialogues: Capturing intentional, rhetorical and attentional information. TR 92-22, Univ. of Pittsburgh, CS Dept., Pittsburgh, PA, 1992.
Moore, J. D., 1989. A Reactive Approach to Explanation in Expert and Advice-Giving Systems. Ph.D. thesis, UCLA, CA.
Norman, D., 1988. The Psychology of Everyday Things. New York: Basic Books.
Paris, C. L., 1988. Tailoring Object Descriptions to the User's Level of Expertise. Computational Linguistics 14(3):64-78.
Engelmann, S. and Carnine, D., 1982. Theory of Instruction: Principles and Applications. New York: Irvington Publishers, Inc.
Polya, G., 1973. Induction and Analogy in Mathematics, volume 1 of Mathematics and Plausible Reasoning. Princeton, NJ: Princeton University Press.
Feldman, K. V. and Klausmeier, H. J., 1974.
The effects of two kinds of definitions on the concept attainment of fourth- and eighth-grade students. Journal of Educational Research 67(5):219-223.
Greenwald, J., 1984. How does this #%$! Thing Work? Time. (Page 64, week of June 18, 1984).
Klausmeier, H. J. and Feldman, K. V., 1975. Effects of a Definition and a Varying Number of Examples and Non-Examples on Concept Attainment. Journal of Educational Psychology 67(2):174-178.
Klausmeier, H. J., 1976. Instructional Design and the Teaching of Concepts. In Levin, J. R. et al. (Eds.), Cognitive Learning in Children. New York: Academic Press.
LeFevre, J.-A. and Dixon, P., 1986. Do Written Instructions Need Examples? Cognition and Instruction 3(1):1-30.
Mann, W. and Thompson, S., 1987. Rhetorical Structure Theory: a Theory of Text Organization. In Polanyi, L. (Ed.), The Structure of Discourse. Norwood, New Jersey: Ablex Publishing Co.
Markle, S. M. and Tiemann, P. W., 1969. Really Understanding Concepts. Stipes Press, Urbana, IL.
Michener, E. R., 1978. Understanding Understanding Mathematics. Cognitive Science Journal 2(4):361-383.
Mittal, V. O. and Paris, C. L., 1992. Generating Object Descriptions which integrate both Text and Examples. In Proc. 9th Canadian A.I. Conference, pp. 1-8. Morgan Kaufmann Publishers.
Reder, L. M., Charney, D. H., and Morgan, K. I., 1986. The Role of Elaborations in learning a skill from an Instructional Text. Memory and Cognition 14(1):64-78.
Rissland, E. L. and Soloway, E. M., 1980. Overview of an Example Generation System. In Proc. AAAI 80, pp. 256-258.
Rissland, E. L., 1980. Example Generation. In Proceedings of the 3rd Conference of the Canadian Society for Computational Studies of Intelligence, 280-288. Toronto, Ontario.
Rissland, E. L., 1983. Examples in Legal Reasoning: Legal Hypotheticals. In Proc. IJCAI 83, pp. 90-93. Karlsruhe, Germany.
Shapiro, S. C., 1986. LISP: An Interactive Approach. Rockville, MD: Computer Science Press.
Steele Jr., G. L., 1984.
Common Lisp: The Language. Digital Press.
Suthers, D. D. and Rissland, E. L., 1988. Constraint Manipulation for Example Generation. COINS TR 88-71, CIS Dept, Univ. of Massachusetts, Amherst, MA.
Swartout, W. R., Paris, C. L., and Moore, J. D., 1992. Design for Explainable Expert Systems. IEEE Expert 6(3):58-64.
Touretzky, D. S., 1984. LISP: A Gentle Introduction to Symbolic Computation. New York: Harper & Row Publishers.
Ward, M. and Sweller, J., 1990. Structuring Effective Worked Examples. Cognition and Instruction 7(1):1-39.

276 Mittal
esis

R. Bharat Rao
Learning Systems Department
Siemens Corporate Research, Inc.

Stephen C-Y. Lu
Knowledge-based Engg. Systems Res. Lab
University of Illinois at Urbana-Champaign

Abstract

Current computer-aided engineering paradigms for supporting synthesis activities in engineering design require the designer to use analysis simulators iteratively in an optimization loop. While optimization is necessary to achieve a good final design, it has a number of disadvantages during the early stages of design. In the inverse engineering methodology, machine learning techniques are used to learn a multidirectional model that provides vastly improved synthesis (and analysis) support to the designer. This methodology is demonstrated on the early design of a diesel engine combustion chamber for a truck.

Introduction

A design engineer's primary task is to develop designs that can achieve specified performances. For example, an engine designer may be required to design a "combustion chamber that delivers at least 600hp while minimizing the fuel consumption." The horsepower and fuel consumption are the performance parameters. In a parameterized domain, the designer sets the values of the decision/design parameters (e.g., engine rpm) so as to meet the performance. Under current computer-aided engineering (CAE) paradigms, design support is typically provided by computer simulators. These simulators are computerized analysis models that analyze a design and map decisions to performances. However, engineering design is largely a synthesis task that requires mapping the performance space, P, to the decision space, D. In the absence of models that provide synthesis support, the designer must use the simulator, F, in an iterative generate and test paradigm, namely, in an optimization loop.
The designer begins with an initial design (a point in D), evaluates the design with F, and moves to a new design, based on the difference between the actual and required performances. A common way of moving within D is by response surface fitting, where the designer exercises F repeatedly in the neighborhood of the current design, fits a surface to the performances, and moves based on this surface. While the above methodology is essential for the final stages of design, it has serious drawbacks during early design. First, a poor starting design can result in a large number of optimization steps, which can be very time-consuming. The designer would prefer to rapidly develop a good initial design to use as the starting point in the optimization. Second, the simulator is a point to point simulator. This means that the designer must assign values to every decision variable at the outset of the design. Ideally, the designer would specify or restrict only the variables in which he was interested, leaving the others to be automatically specified as the design progresses. Third, every new performance objective must be set up as a separate design problem. For example, instead of designing an engine that generates 600hp, perhaps there exists an engine that delivers 590hp but with markedly improved fuel consumption, or another that delivers 640hp with slightly less efficient fuel consumption. If the designer had a synthesis model, he could apply and retract conditions on some of the performance variables and quickly determine their effects on the other variables. Similarly he could constrain the decision variables to reflect cost concerns, inventory stocks, or simply as part of a "What-if" analysis. The ability to treat decision and performance variables more or less identically would prove extraordinarily valuable during early design. The synthesis model, once learned, could be used repeatedly for different designs.
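The generate-and-test loop with response-surface fitting can be sketched as follows. This is our illustration under simplifying assumptions (a scalar performance, a linear surface fit by least squares, and a fixed step rule), not the paper's code; all function names are ours.

```python
# Sketch (ours, not the paper's) of the iterative optimization loop:
# exercise the simulator F around the current design, fit a linear
# response surface y ~ a.x + b by least squares, and move based on it.
import random

def fit_linear(samples, values):
    """Least-squares fit of y = a.x + b via the normal equations."""
    n = len(samples[0])
    rows = [list(x) + [1.0] for x in samples]   # augment with a bias column
    m = n + 1
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    aty = [sum(r[i] * y for r, y in zip(rows, values)) for i in range(m)]
    for col in range(m):                        # Gaussian elimination
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):              # back substitution
        coef[r] = (aty[r] - sum(ata[r][c] * coef[c]
                                for c in range(r + 1, m))) / ata[r][r]
    return coef[:n], coef[n]                    # coefficients a, intercept b

def optimize(F, x0, target, steps=20, radius=0.1, lr=0.5):
    """Generate and test: move the design until F(x) approaches target."""
    x = list(x0)
    for _ in range(steps):
        samples = [[xi + random.uniform(-radius, radius) for xi in x]
                   for _ in range(4 * len(x))]  # probe F near the design
        a, _ = fit_linear(samples, [F(s) for s in samples])
        y = F(x)
        norm = sum(ai * ai for ai in a) or 1e-12
        # Step along the fitted surface toward the required performance
        x = [xi + lr * (target - y) * ai / norm for xi, ai in zip(x, a)]
    return x
```

Even when the loop converges quickly, each step costs several evaluations of F, which is exactly the expense during early design that the paper targets.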
The inverse engineering methodology [Rao, 1993] provides solutions to all the above problems by building an accurate multidirectional model of the problem domain. The designer uses the model directly to prototype a design quickly by successively refining the problem space, D ∪ P (as opposed to the traditional CAE paradigm, where the designer works only in D). A single invocation of F at the end of the process is sufficient to check the design. The foundation of inverse engineering is KEDS, the Knowledge-based Equation Discovery System [Rao and Lu, 1993]. KEDS's ability to learn accurate models in representations that can be converted into constraints (i.e., as piece-wise linear models) makes inverse engineering a viable proposition. The rest of this paper describes this methodology, and demonstrates (through an example) how these techniques provide improved support for early design.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

[Table 1: Decision and Performance variables in DESP. Among them: BSFC = Brake Specific Fuel Consumption; ENBHP = Engine Brake Horsepower (HP).]

[Figure 1: The Inverse Engineering Methodology, relating the decision space and the performance space through the multidirectional model.]

The Inverse Engineering Methodology

The essential problem with current CAE paradigms for design is the lack of synthesis support. The barrier between analysis and synthesis activities is especially unbearable in a concurrent engineering framework, where speed and timely execution of tasks are paramount. As directly learning synthesis models is a very hard task [Rao, 1993], the inverse engineering approach is to learn analysis models in representations that provide synthesis support. For example, if the analysis model for yj ∈ P can be accurately represented as a linear function of some zi ∈ D (i.e., yj = Σ(ai zi) + b), it can then be converted into a constraint that provides both analysis and synthesis support.
The 4 phases of inverse engineering (see Figure 1) are described below.

Example Generation Phase. The Diesel Engine Simulation Program, DESP [Assanis and Heywood, 1986], provides the data for KEDS. DESP solves mass and heat balance equations from thermodynamics and uses finite difference techniques to provide data that is representative of real-world engines. The 6 decision and 2 performance (real-valued) variables for a 6-cylinder diesel combustion engine are shown in Table 1. The decision variables are randomly varied to generate 145 events. This results in two data sets (one for each performance variable, BSFC and ENBHP in Table 1), such that each event is a two-tuple of a decision vector X ∈ D and a corresponding performance variable, yj ∈ P. These data sets are input to KEDS to learn multidirectional models.

Model Formation Phase. KEDS is a model-driven empirical discovery system that learns models in forms restricted to F, a user-defined class of parameterized model families (both linear and non-linear F are permitted). For the purposes of inverse engineering, F is restricted to the class of linear polynomials, y = Σ(ai xi) + b. However, it is unlikely that a simple linear representation will be sufficient to accurately model most real-world domains. KEDS can simultaneously be viewed as a conceptual clustering system, which partitions the data based upon the mathematical relationships that it discovers between the variables. Each call to KEDS results in a single partial model (R, f), which consists of a region (hyperrectangle), R ⊂ D, associated with an equation, f ∈ F, that predicts y for all X ∈ R. The KEDS algorithm (described in [Rao and Lu, 1993]) involves recursing through equation discovery (fitting) and partitioning (splitting), and combines aspects of fit-and-split [Langley et al., 1987] and split-and-fit [Friedman, 1991; Quinlan, 1986] modeling systems as KEDS refines both the region and the equation.
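One way to picture a partial model (R, f) is as a dictionary of variable bounds paired with the coefficients of a linear equation. The sketch below is our encoding, not KEDS code; the sample instance transcribes the paper's sample BSFC partial model (the source prints both TINJ and TIM for the injection-timing variable; TIM is used throughout here).

```python
# Hypothetical encoding (ours, not KEDS's) of a partial model (R, f):
# a hyperrectangular region R over the decision variables paired with
# a linear equation f predicting one performance variable inside R.
from dataclasses import dataclass

@dataclass
class PartialModel:
    region: dict       # variable -> (low, high); one-sided bounds use +/-inf
    coeffs: dict       # variable -> linear coefficient a_i
    intercept: float   # constant term b

    def contains(self, point):
        """Is the decision vector inside the region R?"""
        return all(lo <= point[v] <= hi for v, (lo, hi) in self.region.items())

    def predict(self, point):
        """Evaluate f, i.e. y = sum(a_i * x_i) + b, at the decision vector."""
        return sum(a * point[v] for v, a in self.coeffs.items()) + self.intercept

INF = float("inf")

# Transcription of the paper's sample BSFC partial model.
bsfc_model = PartialModel(
    region={"TIM": (321.0, INF), "FMIN": (0.2482, 0.3941),
            "CR": (13.16, 16.8), "RPM": (1074.0, INF),
            "VOL": (0.0103, INF), "STBR": (0.813, INF)},
    coeffs={"FMIN": 1.3, "TIM": -0.003, "CR": -0.008,
            "RPM": 1.5e-5, "VOL": -27.0},
    intercept=1.5,
)
```

A piece-wise complete model is then just a collection of such objects with disjoint regions, and prediction dispatches on `contains`.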
A sample partial model is shown below.

[321.0 < TINJ] [0.2482 < FMIN < 0.3941] [13.16 < CR < 16.8] [1074 < RPM] [0.0103 < VOL] [0.813 < STBR]
  ::>  BSFC = 1.3 FMIN - 0.003 TIM - 0.008 CR + 1.5E-5 RPM - 27.0 VOL + 1.5

278 Rao

Model Selection Phase. KEDS is invoked repeatedly to generate a collection of overlapping partial models. KEDS-MDL [Rao and Lu, 1992] is a resource-bounded incremental algorithm that uses the minimum description length [Rissanen, 1986] principle to select partial models to build a piece-wise complete model. This is a collection of disjoint partial models that describes the entire decision space.

Model Utilization Phase. Each partial model (region-equation pair) is equivalent to a linear constraint that maps a region in D to an interval in yj. KEDS-MDL learns piece-wise linear models for ENBHP and BSFC. The constraints for ENBHP are intersected with the constraints for BSFC to produce a set of intersections. An intersection maps a region in D to a region in P, and also supports reasoning from P to D. No two intersections overlap within the decision space, but the regions in performance space do typically overlap (as several different designs can achieve the same performance). Unlike the traditional CAE paradigm, the designer works in the problem space, D ∪ P. The designer can refine any intersection by refining a variable, i.e., by shrinking the interval associated with that variable. Refining a decision variable leads to forward propagation along a constraint, and the new intervals for the other variables can be determined in a straightforward fashion. Refining a performance variable requires inverse propagation along constraints. One possibility is to solve the intersection to find the new feasible region in D (for example, by using Simplex). Instead, this is done by computing the projection of the feasible region onto the decision variables (i.e., the enclosing hyperrectangle) in a single-step computation [Rao, 1993].
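Forward and inverse propagation along one such linear constraint can be illustrated with interval arithmetic. The two functions below are our sketch of the idea, not the system's code; the inverse direction projects the feasible set onto a single decision variable, in the spirit of the enclosing-hyperrectangle projection.

```python
# Sketch (ours) of propagation along a linear constraint
# y = sum(a_i * x_i) + b over interval-valued variables.

def propagate_forward(coeffs, intercept, intervals):
    """Decision intervals -> induced interval for the performance variable."""
    lo = hi = intercept
    for var, a in coeffs.items():
        xlo, xhi = intervals[var]
        lo += min(a * xlo, a * xhi)   # the sign of a picks the bound
        hi += max(a * xlo, a * xhi)
    return lo, hi

def project_inverse(coeffs, intercept, intervals, var, y_bounds):
    """Tighten one decision variable so a refined performance interval
    stays reachable (an enclosing projection, one variable at a time)."""
    a = coeffs[var]
    rest_lo = rest_hi = intercept     # extreme contribution of the others
    for v, c in coeffs.items():
        if v == var:
            continue
        xlo, xhi = intervals[v]
        rest_lo += min(c * xlo, c * xhi)
        rest_hi += max(c * xlo, c * xhi)
    y_lo, y_hi = y_bounds
    b1, b2 = (y_lo - rest_hi) / a, (y_hi - rest_lo) / a
    xlo, xhi = intervals[var]
    return max(xlo, min(b1, b2)), min(xhi, max(b1, b2))
```

For example, with y = 2*x1 + 3*x2 + 1 and both decision variables in [0, 1], the induced performance interval is [1, 6]; demanding y in [5, 6] then tightens x1 to [0.5, 1].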
Inverse propagations can lead to forward propagations, and vice versa. The DESP domain has 15-30 intersections, depending upon the model formation parameters used in KEDS. While it would be a great strain, it is remotely possible that a designer would be able to work individually with each intersection. However, other domains can give rise to many more intersections (a process planning application for a turning machine has 1000+ intersections). Instead of working with each individual intersection, the designer refines a single composite region that consists of the union of the intervals for all intersections. A truth maintenance system keeps track of the effects of refinements on each intersection, and the designer only sees the composite interval for each variable. This occasionally leads to gaps in the problem space, when two or more disjoint decision regions have similar performance.

A number of CAD/CAM [Finger and Dixon, 1989; Vanderplaats, 1984] and AI [Dixon, 1986] techniques have been developed to support engineering design. A complementary approach to inverse engineering for breaking the analysis-synthesis barrier for early design is to develop representations and theories for multidirectional models [Herman, 1989] that could replace existing analysis simulators. However, this fails to take advantage of past research efforts in developing computer simulators. Another approach is to speed up the iterative optimization process by replacing slow computer simulators with faster models [Yerramareddy and Lu, 1993]. For a detailed review of related machine learning and design research, see [Rao, 1993].

Demonstration

The inverse engineering interface is shown in Figure 2. There are 6 function windows. The Control Panel is used primarily to initialize the domain by loading models created offline by KEDS-MDL, and to simulate the final design.
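The composite interval described above can be pictured as a merge of each variable's intervals, across all surviving intersections, into disjoint spans; any space left between two spans is a gap in the problem space. The function below is our illustration, not the system's truth-maintenance code.

```python
def composite_spans(intervals):
    """Merge (lo, hi) intervals into sorted, disjoint spans.

    The designer sees one composite range per variable; gaps between
    the returned spans are values no surviving intersection allows.
    """
    spans = []
    for lo, hi in sorted(intervals):
        if spans and lo <= spans[-1][1]:
            spans[-1] = (spans[-1][0], max(spans[-1][1], hi))  # overlap: extend
        else:
            spans.append((lo, hi))                             # gap: new span
    return spans
```

For instance, hypothetical STBRAT intervals of (0.80546, 0.969) and (0.988, 1.2) from two groups of intersections would remain two spans, directly exposing a ]0.969, 0.988[ gap.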
The original intervals of the multidirectional model are displayed in the Original Model Window. The Messages Window displays detailed domain information. The Lisp Listener is for development. The Decision Panel is the window in which the designer does virtually all his work. Clicking on the "Refine" button brings up a pop-up menu of the variable names. Clicking on a variable (e.g., ENBHP) brings up an Emacs window titled "Ranges for parameter: ENBHP" that displays the current ranges for that variable. After refining the values with Emacs commands, hitting the return key causes the refinement to be quickly propagated through all of the intersections, creating a new world. In Figure 2 the designer has just refined the ENBHP variable in the Decision Panel to demand that the engine deliver at least 600hp. The Worlds Display Panel (WDP) shows a world view reflecting the state of the world after the ENBHP refinement. The first three columns in the world view show the names of the variables and the current intervals. The last two columns, "Dmin" and "Dmax," in the world view represent the change (i.e., the delta) in the intervals relative to the previous world. Figure 2 shows that after the ENBHP refinement both boundaries of the compression ratio were moved inwards, the lower bound of the engine speed was increased, and there was no influence on the fuel consumption (see Table 1 for acronyms). Successive world views occlude previous views in the WDP. The Messages Window indicates that the designer cannot refine STBRAT to fall completely within the gap, ]0.969, 0.988[. Clicking on the "Retract" button in the Decision Panel retracts the last refinement, and uncovers the previous world view. This interface was built on the interfaces for the HIDER [Yerramareddy and Lu, 1993] and IDEEA [Herman, 1989] systems.

Forming an Early Design

The designer's task is to design a combustion chamber for a 6-cylinder diesel engine for a truck.
The engine should deliver at least 600hp, though this could be slightly relaxed based on the designer's judgment. In general, good designs have high ENBHP with a low BSFC.

[Figure 2: User Interface for Inverse Engineering Environment]

There are other cost concerns that may come into play as the designer applies his background knowledge. In this section we follow a designer step by step, as he uses the inverse engineering interface to come up with a complete early design. Each refinement step is indexed by a number indicating the level of refinement.

(1) Refine BSFC to a max of 0.33. The designer exploits the synthesis support to set fuel consumption to a low value. The screen bitmap of the corresponding world view is shown in Figure 3(a).

(2) Refine ENBHP to a min of 600. Figure 3(b) shows that this refinement influences many other variables (see "Dmax" and "Dmin" fields).

While the designer does not have to begin all designs by restricting P, the importance of being able to directly constrain the performance parameters is tremendous. From this point onwards the designer can make any changes in D, and is assured that the propagation mechanisms will constrain the remaining variables to meet the performance specifications.

(3) Refine ERPM to a max of 1400. Engines that run at lower speeds have higher manufacturing tolerances and thus lower costs associated with them.
Unfortunately, restricting the speed to a very low value adversely affects other decision variables, as shown in Figure 3(c). In order to deliver 600hp with BSFC < 0.33, the CR must be a minimum of 16.3. Higher CRs require thicker engine cylinder walls, increasing the cost of the engine.

Retract Refinement 3. The system returns to the state shown in Figure 3(b).

(3) Refine CR to a value of 15.0. See Figure 3(d). While the designer can restrict the CR to a range, the ability to set a variable to an exact value is very useful. Typically, the values of the variables are optimized by exploring the terrain in the problem space. Even though decision variables, such as STBRAT and CR, are continuous-valued, the engine is most easily manufactured if these variables are set to values that can be easily machined. These settings could also cut down on manufacturing costs and time by using existing inventory and machine setups, rather than retooling factories for every new design.

(4) Refine DVOL = 0.0145 (cylinder displacement volume is 14.5 liters).

(5) Refine STBRAT = 1.0. The designer notices a gap in the range ]0.969, 0.988[. He chooses 1.0 as an easily machined value of STBRAT.

(6) Refine TIM = 334.5. See Figure 3(e).

(7) Refine ERPM = 2060. The designer conservatively picks central values that meet manufacturing requirements for the last two unspecified variables.
[Figure 3: Engine Design Example: world views from the inverse interface. Panels: (a) Refining BSFC, (b) Refining ENBHP, (c) Refining ERPM, (d) Refining CR, (e) Refining TIM, (f) Refining FMIN.]

(8) Refine FMIN = 0.247. The initial design (henceforth, D1) is complete.
The world view in Figure 3(f) indicates that according to the model, D1 delivers 603hp at a fuel consumption of 32.3%. The designer uses the "Simulate Design" option in the Control Panel to run DESP on the design. The performance of D1 is computed to be 612.55hp at 32.9% fuel consumption, which meets the performance constraints of Refinements 1 and 2 above. Note that any optimization of D1 with DESP will almost certainly result in a superior design in the neighborhood of D1.

Exploring alternate designs

The designer chooses the "Retract Many" option to retract Refinement 1 limiting the BSFC to 0.33. The designer is willing to loosen up slightly on the BSFC requirement if improvements appear elsewhere, for instance in the form of increased horsepower. The designer now sets the minimum ENBHP to 650hp and proceeds in a similar fashion to that described in the previous section. The resulting engine, parameterized by D2 = (1.0 334.5 0.247 13.5 2380.0 0.0145), delivers 681hp at 34.2% consumption. The designer had earlier (while designing D1) unsuccessfully tried to lower the engine speed so as to reduce manufacturing costs (see Figure 3(c)). In a further attempt to achieve this, the designer relaxes the ENBHP constraint (Refinement 2 above) while imposing low BSFC and RPM constraints. The resulting design, D3 = (1.0 334.5 0.247 17.0 1800.0 0.0145), has 32.09% consumption but delivers only 555hp. Another design, D4 = (0.85 334.5 0.247 15.0 2000.0 0.0145), is created when the designer constrains STBRAT = 0.85, CR ≤ 15, and BSFC ≤ 0.33. This design delivers 592hp at 33.0% consumption. Of the 4 designs, D1-D4, D3 is discarded because the horsepower delivered by that engine is too low (555hp), and D4 is eliminated because its performance is worse than D1 for both horsepower (590hp versus 603hp) and fuel consumption (33.0% versus 32.9%).
The designer can make a choice between D1 and D2 at this point; for example, he can eliminate D2 if he deems that the extra 69hp (= 681 - 612) is not worth the 1.3% drop in fuel efficiency. Alternatively, he could choose to optimize both D1 and D2 using the traditional CAE paradigm and defer the decision. He could then decide to manufacture two lines of trucks or search for more designs with the user interface. Whichever option the designer chooses, his choice is likely to be more informed than would have been the case had he worked with the traditional CAE paradigm.

Conclusions

This research demonstrates that machine learning techniques can be used to provide vastly improved design support in parameterized domains. The designer is able to refine both decision and performance variables and can reuse the model for new performance specifications. The inverse engineering methodology has also been applied to process design as a model translator to convert a point-to-point simulator into a region-to-region model in a process planner for a turning machine. In a few design scenarios the design task is precisely defined and can be automated. This is the approach we are applying to support "worst-case" design of analog MOS circuits. The inverse engineering methodology opens up unexplored paradigms in knowledge processing by harvesting existing analysis-based simulators to ease the knowledge acquisition bottleneck. This methodology shows tremendous promise for solving a wide variety of problems in engineering decision making.

Acknowledgments

This work was begun while R. Bharat Rao was at the University of Illinois at Urbana-Champaign (UIUC) and was partially supported by the Department of Electrical Engineering. We are grateful to Sudhakar Yerramareddy, Allen Herman, and Prof. Dennis Assanis, all from the Department of Mechanical Engineering, UIUC.

References

Assanis, D. N. and Heywood, J. B. 1986. The Adiabatic Engine: Global Developments. 95-120.
Dixon, J. R. 1986. Artificial intelligence and design: A mechanical engineer's view. In AAAI-86. 872-877.
Finger, S. and Dixon, J. R. 1989. A review of research in mechanical engineering design. Part I: Descriptive, prescriptive, and computer-based models of design processes. Research in Engineering Design 1(1):51-67.
Friedman, J. H. 1991. Multivariate adaptive regression splines. Annals of Statistics.
Herman, A. E. 1989. An artificial intelligence based modeling environment for engineering problem solving. Master's thesis, M&IE, University of Illinois, Urbana, IL.
Langley, P.; Simon, H. A.; Bradshaw, G. L.; and Zytkow, J. M. 1987. Scientific Discovery: Computational Explorations of the Creative Processes. MIT Press.
Quinlan, J. R. 1986. Induction of decision trees. Machine Learning 1(1):81-106.
Rao, R. B. 1993. Inverse Engineering: A Machine Learning Approach to Support Engineering Synthesis. Ph.D. Dissertation, ECE, University of Illinois, Urbana.
Rao, R. B. and Lu, S. C-Y. 1992. Learning engineering models with the minimum description length principle. In AAAI-92. 717-722.
Rao, R. B. and Lu, S. C-Y. 1993. KEDS: A Knowledge-based Equation Discovery System for learning in engineering domains. IEEE Expert (to appear).
Rissanen, J. 1986. Stochastic complexity and modeling. Annals of Statistics 14(3):1080-1100.
Vanderplaats, G. N. 1984. Numerical Optimization Techniques for Engineering Design - With Applications. McGraw-Hill.
Yerramareddy, S. and Lu, S. C-Y. 1993. Hierarchical and interactive decision refinement methodology for engineering design. Research in Engineering Design (to appear).
Adelheit Stein, Ulrich Thiel
German National Research Center for Computer Science
Integrated Publication and Information Systems Institute (GMD-IPSI)
Dolivostrasse 15, 6100 Darmstadt, Germany
Email: {stein, thiel}@darmstadt.gmd.de

Abstract

We propose a comprehensive framework for modeling and specifying multimodal interactions. To this end, we employ an extended notion of 'dialogue acts' which can be realized by linguistic and non-linguistic means. First, a set of constraints is presented that describes the temporal structure and all patterns of exchange during a cooperative information-seeking dialogue. Second, we introduce a strategic level of description which allows the specification of the topical structure according to an information-seeking strategy. The model was used to design and implement the MERIT system, and led to a reduction in the complexity of the user interface while preserving most of the useful, but sometimes confusing, dialogue options of advanced direct manipulation interfaces.

Introduction

Multimodal user interfaces of contemporary information systems use various means of conveying information, e.g. text, graphics, forms, tables, and pictures. If the presented information items become complex, in most systems users are allowed to investigate the items directly. Thus, the distance between the users' intentions and the objects is reduced to a minimum (cf. Hutchins, Hollan & Norman 1986). However, this exploratory approach to information retrieval has to overcome the problems arising from browsing in a large information space. These difficulties, which are well known in hypertext applications, can be attacked by combining the direct manipulation of objects by the user with cooperative system responses. To date, cooperation has mostly been investigated in the context of natural language interfaces which regard user-machine interaction as a dialogue between two partners.
As this notion is often referred to as the conversation metaphor (cf. Reichman 1986, 1989), the proposed hybrid interaction style, which integrates graphical and natural language components in a multimedial environment, will be called multimodal conversation in the following. In this paper, we will introduce a comprehensive model of multimodal conversations. The next section outlines the notion of multimodal dialogue acts, whereas in the third section the constraints that govern the structure of a cooperative multimodal dialogue are discussed. In the fourth section, we present an overview of the prototypical information system MERIT, which was designed and implemented as an application of the conversational model. Some of the benefits of this approach are sketched in the concluding part of the paper.

In the conversational approach, user inputs such as mouse clicks, menu selections, etc. are not interpreted as invocations of methods that are executed independent of the dialogue context. Instead, the direct manipulation of an object is considered to be a dialogue act expressing a discourse goal of the user. Therefore, the system can respond in a more flexible way by taking into account the illocutionary and semantic aspects of the user's input. Additionally, this approach to human-computer interaction provides a basis for the integration of different interaction styles, such as natural language and graphics, in a multimodal information system (cf. e.g. Arens & Hovy 1990, Feiner & McKeown 1990, Maybury 1991, Oei et al. 1992). Based on a representation of the dialogue history, dialogue acts of the user, which are performed by directly manipulating the display structure, can be identified. For this reason, the user's manipulations (e.g. mouse clicks) are not only executed as methods sent to the graphical surface objects, but they affect transformations of the underlying internal representation of the ongoing dialogue.
The reactions of the system are instances of generic dialogue acts like 'inform', 'offer', or 'reject', which are modeled as frames. Since we use an object-oriented presentation style, a system's dialogue act is performed by creating or changing informational objects. An informational object represents a fragment extracted from the underlying database together with a specification of its graphical and/or textual presentation. In general, the system responds by visualization of new graphical objects on the screen in combination with the generation and presentation of a 'comment' in natural language.

While the notion of dialogue acts captures the essence of single contributions to a multimodal interaction, a conversational model also has to address the problem of combining these actions into coherent sequences. Our approach takes local as well as global patterns of dialogues into account. The local structures are governed by the interrelations of dialogue roles and tactics of the information seeker and the information provider, and are described by a complex network of interrelated generic dialogue acts. The global aspect relates the sequence of dialogue contributions to information-seeking strategies. Thus, a system is able to plan the content of subsequent dialogue steps according to principles of topical coherency.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.
Intelligent User Interfaces 283

The Conversational Roles Network

A formal schema of information-seeking dialogues allows dialogues to be modeled at the discourse act level or - in terms of Speech Act Theory - the illocutionary level (cf. Searle & Vanderveken 1985). In order to capture the temporal structure of the dialogue, we introduce the notion of dialogue states. During a dialogue, the dialogue acts performed by the dialogue partners change the dialogue state.
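This state-changing view can be sketched as a transition table over dialogue states. The state names and act inventory below are illustrative assumptions on our part; the actual COR schema described in the following paragraphs is recursive and considerably richer:

```python
# States "<1>".."<5>" loosely follow the paper's Figure 1; transition labels
# are dialogue acts with (speaker, addressee) role parameters, where A is
# the information seeker and B the information provider.
TRANSITIONS = {
    ("<1>", "request(A,B)"): "<2>",
    ("<2>", "promise(B,A)"): "<3>",
    ("<2>", "reject_request(B,A)"): "<1>",
    ("<2>", "withdraw_request(A,B)"): "<1>",
    ("<1>", "offer(B,A)"): "<2'>",
    ("<2'>", "accept(A,B)"): "<3>",
    ("<2'>", "withdraw_offer(B,A)"): "<1>",
    ("<3>", "inform(B,A)"): "<4>",
    ("<4>", "evaluate(A,B)"): "<5>",   # A closes the dialogue
}

def run_dialogue(acts, start="<1>"):
    """Replay a sequence of dialogue acts; each act must be a legal
    continuation in the current state."""
    state = start
    for act in acts:
        key = (state, act)
        if key not in TRANSITIONS:
            raise ValueError(f"{act} is not a legal continuation in state {state}")
        state = TRANSITIONS[key]
    return state

# One straightforward course: A requests, B promises and informs,
# A evaluates positively and finishes the dialogue.
final = run_dialogue(["request(A,B)", "promise(B,A)", "inform(B,A)", "evaluate(A,B)"])
print(final)  # -> <5>
```

The sketch deliberately omits what makes COR recursive: in the full model each transition is itself a network that may embed sub-dialogues of the same basic shape.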
Since there may be several possible continuations, our model for possible dialogues resembles a network, whereas each actual dialogue is represented as a sequence of singular acts. The network we developed for our problem domain of information-seeking dialogues is called COR (modeling "COnversational Roles"). For a detailed description of the formalism and the theoretical framework we refer to Sitter & Stein (1992) and Maier & Sitter (1992).

COR can be regarded as a recursive state-transition network like the "Conversation for Action Model" of Winograd and Flores (1986). In addition, our model for information-seeking processes adopts some concepts from Systemic Linguistic approaches to discourse modeling (cf. Fawcett et al. 1988, Halliday 1984) and Rhetorical Structure Theory (Mann & Thompson 1987). Basically, the COR network defines the generic dialogue acts available (e.g. asking, offering, promising, answering, evaluating), their possible interrelations, and the mutual role-changes of speaker and addressee.

Figure 1 shows the basic schema of COR: the circles represent states on the top level of the dialogue, the squares terminal states. Arrows represent transitions between two states, i.e. the dialogue contributions. Parameter A refers to the information seeker, B to the information provider. The order of the parameters indicates the speaker-hearer roles; the first parameter indicates the speaker, the second the addressee.

Note first that the dialogue contributions (transitions) are themselves transition networks which may contain sub-dialogues of the type of the basic schema. We distinguish between two types of networks of dialogue contributions (cf. fig. 2 and fig. 3), which are described below. Second, one should keep in mind that in COR all dialogue contributions - except the inform contribution - can be 'implicit', i.e.
they may be omitted (a 'jump') when the implicit intention can be inferred from the current context.

The bold arrows between the states <1> and <5> represent two 'idealized' straightforward courses of a dialogue:

- A utters a request for information, B promises to look it up (possibly skipping the promise) and presents the information, A is satisfied and finishes the dialogue.
- B offers to provide some information (anticipating an information need of A), A accepts the offer (or part of it), B provides the information, A finishes the dialogue.

However, such simple courses of action are very rare in more problematic situations. Participants often depart from such a straight course, and perceive their departure as quite natural. Information-seeking dialogues are also highly structured and normally contain a lot of corrections and clarification sequences. The interactions of the participants build a complex net of mutually related commitments and "role expectations" (cf. e.g. Halliday 1984). Simple question-answer dialogue models (like those applied in most of the classical interfaces to information systems) cannot cover this complexity.

Instead, we need a description of a flexible interaction which allows both dialogue partners to correct, withdraw, or confirm their intentions, and to insert clarifying sub-dialogues. To this end, we invented several transitions for withdrawing or rejecting a contribution, e.g. withdraw request (A,B), reject request (B,A), and withdraw offer (B,A).

Figure 1: The basic COR 'dialogue' schema. Directives: request, accept; commissives: offer, promise.

In every dialogue state from <2> to <4>, A and B may return either to state <1>, thus preparing a new dialogue cycle, or they may quit the current dialogue (states <5>-<11>). The embedding of clarifying sub-dialogues is described by the two networks below: Figure 2 displays the schema for the 'inform' contribution, i.e.
the transition between <3> and <4> in the basic dialogue network. A more general term is 'assert', indicating an assertion or a statement, but in our context of information-seeking dialogues we use the more specific term.

Figure 2: Schema of an 'inform' ('assert') contribution, including a sub-dialogue to solicit context information.

A starts with an atomic inform (atomic acts are expressed by the notation A: ...). This inform act (its locution) could be quite a long monologue, i.e. a text, or a graphical presentation comprising several propositions. Of course, in that case it would have a semantic sub-structure and consist of several elements, such as sentences. But the illocutionary point would not change, i.e. no new commitments or expectations are expressed or imposed on the addressee. The atomic inform act can either lead directly to state <c> (a jump to <c> and, at the same time in the dialogue network, to <4>), or B may decide to initiate a sub-dialogue to solicit more context information about A's inform act, e.g. asking a question related to the inform act. This transition between <b> and <c> is a traversal of a basic dialogue network.

The network in figure 3 is more complex. All dialogue contributions, except 'inform', follow this pattern. When A intends, for instance, to make a request, she can follow one of the two possible paths: <a-b-c> or <a-b'-c>. On the first path A formulates the request and either 'jumps' to <c>, or appends an 'assert' (network of fig. 2), voluntarily supplying some context information related to the request. If this context information is not given by A, B might decide to start a sub-dialogue to solicit such context information, e.g. asking for details, or the background of the request. The other path is similar, but with a reversed order.

Figure 3: Schema of a 'request' contribution. Transitions include request (A,B), dialogue (B,A, solicit context information), jump, and dialogue (B,A, identify request).
After an assert of A supplying some context information about the intended request (or a jump), she can either formulate the request now explicitly, or also skip it (jump). The latter would create the situation of an indirect request, which is reminiscent of the term "indirect speech act" coined in Speech Act Theory (cf. Searle 1975). A simple example is: A: "I don't know much about this RACE funding program." Even if A does not then ask directly "What is it about?", B might infer that A has expressed an indirect request by the first statement.

We distinguish between two main types of components within these dialogue contribution networks (cf. in detail Sitter & Stein 1992). The first expresses the function (illocutionary point) of the whole dialogue contribution. This is normally an atomic dialogue act, e.g. the request or inform act in our example. The second serves for exchanging additional context information which is either supplied voluntarily (assert contributions) or requested in a sub-dialogue.

Using the terminology of Rhetorical Structure Theory - RST (cf. Mann & Thompson 1987) - we call a component of the first type a "nucleus" and a component of the second type a "satellite". Both nucleus and satellite are related to one another by rhetorical or semantic relations. The set of relations described by RST was developed for the analysis of written texts, i.e. monologues, and has been extended in other approaches in the field of Computational Linguistics (overview in: Maier & Hovy 1993). However, there exist some recent attempts to combine research done in the text-linguistic field with dialogue modeling approaches (cf. Maier & Sitter 1992, Fawcett & Davies 1992). Maier and Sitter, for instance, extended the set of necessary relations, especially "interpersonal relations", for the dialogue situation and combined it with the COR approach.
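The nucleus-satellite structure of a contribution can be written down as a small data structure. This is purely illustrative, using our own names rather than the representation actually used in COR or MERIT, and it borrows the indirect-request example from above:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Satellite:
    """Optional context information, linked to the nucleus by a rhetorical
    or semantic relation (e.g. 'background')."""
    relation: str
    content: str

@dataclass
class Contribution:
    """A dialogue contribution in RST terms: one nucleus carrying the
    illocutionary point, plus zero or more satellites."""
    nucleus_act: str       # e.g. "request" or "inform"
    nucleus_content: str
    satellites: List[Satellite] = field(default_factory=list)

req = Contribution(
    "request",
    "What is the RACE funding program about?",
    [Satellite("background",
               "I don't know much about this RACE funding program.")],
)
print(req.nucleus_act, len(req.satellites))  # -> request 1
```

In the indirect-request case the nucleus itself would be skipped (a jump), leaving only the satellite utterance from which the addressee infers the illocutionary point.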
This extended relation set proved to be very useful in our context of human-computer retrieval dialogues, because its specifications can be directly integrated in our application, the MERIT system.

The COR network covers the illocutionary structure of dialogues, but does not supply means for a specification on the thematic level. Since the thematic level governs the selection of the contents communicated in the dialogue acts, it plays an essential role in dialogue planning. If we want the system to engage in a meaningful cooperative interaction with the user, we have to address this question by supplying a prescriptive addition to the - thus far descriptive - dialogue model. We perceive actual dialogues as instantiations of more abstract entities, each of which represents a class of concrete dialogues. This is similar to approaches to the generation of multimodal utterances using plan operators (e.g. Maybury 1991, Rist & Andre 1992). However, the abstract representations of dialogue classes are more complex.

The classes comprise dialogues that show a similar basic pattern. Usually these patterns are closely related to certain strategies which are pursued by the user during her interaction. In the case of information systems, Belkin, Marchetti, and Cool (1993) suggested a classification of dialogues with respect to information-seeking strategies. Based on this approach, we developed a set of typical dialogue plans or "scripts" (cf. Schank & Abelson 1977) for a given domain and task (cf. Belkin, Cool, Stein & Thiel 1993). A collection of dialogue plans is the basis for selecting an appropriate plan for a given information need. Once a plan has been chosen, it provides suggestions to the user on how to continue in a given dialogue situation, and specifies cooperative system reactions. On the implementation level, we represent dialogues as sequences of dialogue steps.
The internal structure of a dialogue step is given by two parameters: the perspective of the step, and its implementation. The perspective determines the topical spectrum that can be addressed in this step without destroying the thematic coherence of the dialogue in general. Similar notions have been proposed by McCoy (1986) in the area of natural language interfaces and by Reichman (1986), who takes a discourse analysis approach to multimodal dialogues. The second component of a step describes the possible and actual ways to implement the corresponding dialogue step. It may be implemented by a single dialogue act. In this case the variety is provided by the different forms the utterance may have. For instance, the presentation of a certain set of data may take the form of a list of the data records, a table, or a graphical presentation. However, the step may also be a certain sequence of dialogue acts which then builds a sub-dialogue that may - in accordance with the COR model - replace the single act. Thus, we have a means to prescribe a certain act as appropriate in the given situation, while allowing the user to perform it in a way she prefers, e.g. by requesting context or help information.

An approach to problem solving based on past experiences is pursued in the area of case-based reasoning (CBR). In our experimental work, we adapted the ideas of CBR to the requirements of a user-guidance component (for details cf. Tißen 1991, 1993; Stein, Thiel & Tißen 1992) which was developed as part of a prototypical information system.

An Application - the MERIT System

MERIT (Multimedia Extensions of Retrieval Interaction Tools) is a prototypical knowledge-based user interface to a relational database on European research projects and their funding programs (cf. Stein, Thiel & Tißen 1992). The database contains textual and factual information (a subset of the CORDIS databases which are offered online by the ECHO host).
These data were extended by interactive maps and scanned-in documents and pictures the user may request as additional context information in certain situations (cf. for example fig. 5 below). The system features form-based query formulation (various form sheets for different 'perspectives' on the data), and the visualization of retrieval results in different situation-dependent graphical presentation forms. One major system component is a case-based dialogue manager (CADI, cf. Tißen 1991) which controls the retrieval dialogue and provides a library of "cases" stored after previous sessions with MERIT. The cases are used to guide the user through the current session, proposing thematic progression basically by suggesting a specific order of query and result steps, focusing on a specific perspective in each step.

Figure 4: A system's 'offer'. (Screen content: "MERIT offers - The following 'cases' of typical dialogues are available. Please select the case that fits your current information need most.", with options such as "A person's profile".)

The example in figure 4 shows an offer made at the beginning of a dialogue session. Here the user is asked to choose the case that suits her information need best.
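A toy sketch of this case-based guidance is given below. The class names, the library contents, and the naive overlap scoring are hypothetical stand-ins; CADI's actual case retrieval is considerably more elaborate:

```python
from dataclasses import dataclass

@dataclass
class DialogueStep:
    """One step of a dialogue plan: a topical perspective plus the
    admissible realizations (a single act or a COR sub-dialogue)."""
    perspective: str          # topical spectrum, e.g. "persons"
    implementations: list     # e.g. ["table", "graph", "sub-dialogue"]

@dataclass
class Case:
    """A stored dialogue plan ('script') for one information-seeking strategy."""
    name: str
    steps: list

def select_case(library, information_need):
    """Pick the case whose step perspectives overlap most with the stated
    information need (a crude stand-in for case-based retrieval)."""
    def score(case):
        return sum(1 for step in case.steps if step.perspective in information_need)
    return max(library, key=score)

library = [
    Case("a person's profile",
         [DialogueStep("persons", ["query-form", "sub-dialogue"]),
          DialogueStep("projects", ["table", "graph"])]),
    Case("funding overview",
         [DialogueStep("programmes", ["map", "table"])]),
]

best = select_case(library, {"persons", "projects"})
print(best.name)  # -> a person's profile
```

Once a case is selected, its step sequence plays the prescriptive role described above: each step constrains the topic (perspective) while leaving the user free to choose among the step's implementations.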
The graphical presentation of complex dialogue contributions - like offers, requests, inform acts - is composed of several distinct elements: the label in the upper left corner identifies the contributor and dialogue act type (e.g. "MERIT offers", "user requests", "MERIT presents"). A short text, mostly elliptic, summarizes the content of the dialogue act or gives some meta-information ("Please select ..."). Further, the concrete proposition is displayed below (here in the form of several alternatives among which the user may choose). The right bar is reserved for icons representing possible (local) user actions that refer to the current system's contribution. The user may, for instance, reject the offer, request help, or pose clarifying questions (check-back icon).

A Coherent Interaction

Like most advanced graphical interfaces, MERIT offers a wide variety of dialogue control options to the user (cf. icons in fig. 4 and fig. 5). In a given situation, the user may proceed in the current case, start sub-dialogues within the current step, switch to another case, etc. Usually, in conventional interfaces dialogue options are presented to the user as additional components of the interface. However, such additions are disturbing in a direct manipulation interface, since they require context-dependent method evaluation. In the following, we outline how our conversational model allows us to integrate even complex meta-dialogic options into a coherent interpretation of the multimodal interaction.

Local, i.e. case- or situation-dependent, options are:

help, check back: From the conversational perspective the user engages in a sub-dialogue related to the system's current dialogue contribution (soliciting context information). Thus, the parameter setting of the meta-function is determined in a natural way.

reject offer, withdraw ..., continue: The meaning of these icons is intuitively grasped. They comply with transitions in the basic COR schema (cf. fig. 1).
For instance, 'continue' would lead to state <1>, whereupon the user either formulates a new request (a query), or the system comes up with a new offer and/or information (presentation of data).

Figure 5: Example screen of MERIT with dialogue control objects.

change content/presentation: Here the user can enter a sub-dialogue related to the system's inform act and request a paraphrase and/or solicit context information (cf. fig. 2). The available options in a given dialogue state are, for instance, to ask for more detailed information about the currently presented items, to restrict the presentation to a subset of objects and attributes, or simply to replace a given presentation form by another one (e.g. a table by a graph).

The following actions are interpreted as user requests that initiate inserted meta-dialogues:

info on next step: Before the user decides whether to continue she may click on this icon and start a meta-dialogue about the system's strategy. MERIT generates situation-dependent information (a text), describing the next and subsequent steps proposed in this situation by the current case.

history: By clicking one of the history icons the user starts a short meta-dialogue referring to inform or query states of previous dialogue cycles. The respective information (her query or the retrieved data) will be displayed. The user can then compare it to the current state and decide whether to return to the current state or to go back in the history.

query on ...: At any time the user has the opportunity to insert a short retrieval dialogue. She can pose a query and inspect the retrieved data, then return to the current path/case.

change case: The user finishes her current path, returns to the top-level dialogue (state <1>) and initiates a new dialogue cycle to choose a new case.

Conclusions

The outlined comprehensive model of multimodal interaction is based on an extended notion of dialogue acts which can be realized by linguistic or non-linguistic means. The structure of multimodal interaction is considered under local as well as global aspects. A comprehensive set of constraints in terms of a recursive transition network describes all (local) patterns of exchange which can occur during interaction. The (global) topical structure of the dialogue is defined according to a selected information-seeking strategy. The model was applied to design and implement the MERIT system, and led to a reduction of the complexity of the user interface while preserving most of the useful, but sometimes confusing, dialogue options of advanced direct manipulation interfaces. The conversational approach permits dialogue features to be handled in an integrative manner, rather than as separate extensions such as undo, history, and help functions.

References

Arens, Y. & Hovy, E. 1990. How to Describe What? Towards a Theory of Modality Utilization. In: Proc. of the 12th Annual Conference of the Cognitive Science Society, 487-494. Hillsdale, NJ: Erlbaum.

Belkin, N.J., Cool, C., Stein, A. & Thiel, U. 1993. Scripts for Information Seeking Strategies. Paper presented at: AAAI Spring Symposium '93 on Case-Based Reasoning and Information Retrieval, Stanford University, CA, March 23-25.

Belkin, N.J., Marchetti, P.G. & Cool, C. 1993. BRAQUE: Design of an Interface to Support User Interaction in Information Retrieval. Information Processing & Management, Special Issue on Hypertext 29(4) (in press).

Fawcett, R.P., van der Mije, A. & van Wissen, C. 1988. Towards a Systemic Flowchart Model for Discourse. In: Fawcett, R.P. & Young, D. (eds.): New Developments in Systemic Linguistics. Vol. 2, 116-143. London: Pinter.

Fawcett, R.P. & Davies, B. 1992. Monologue as Turn in Interactive Discourse: Towards an Integration of Exchange Structure and Rhetorical Structure Theory. In: Proc.
of the 6th International Workshop on Natural Language Generation, Trento, Italy, 151-166. Berlin: Springer.

Feiner, S.K. & McKeown, K.R. 1990. Coordinating Text and Graphics in Explanation Generation. In: Proc. of the 8th National Conference on Artificial Intelligence, Vol. 1, 442-449. Menlo Park, CA: AAAI Press / MIT Press.

Halliday, M.A.K. 1984. Language as Code and Language as Behaviour: A Systemic-Functional Interpretation of the Nature and Ontogenesis of Dialogue. In: Fawcett, R.P. et al. (eds.): The Semiotics of Culture and Language. Vol. 1, 3-35. London: Pinter.

Hutchins, E.L., Hollan, J.D. & Norman, D. 1986. Direct Manipulation Interfaces. In: Norman, D.A. & Draper, S.A. (eds.): User Centered System Design: New Perspectives on Human-Computer Interaction, 87-124. Hillsdale, NJ: Erlbaum.

Maier, E. & Hovy, E. 1993. Organising Discourse Structure Relations Using Metafunctions. In: Horacek, H. & Zock, M. (eds.): New Concepts in Natural Language Processing, 69-86. London: Pinter.

Maier, E. & Sitter, S. 1992. An Extension of Rhetorical Structure Theory for the Treatment of Retrieval Dialogues. In: Proc. of the 14th Annual Conference of the Cognitive Science Society, Bloomington, Indiana, 968-973. Hillsdale, NJ: Erlbaum.

Mann, W.C. & Thompson, S.A. 1987. Rhetorical Structure Theory: A Theory of Text Organization. In: Polanyi, L. (ed.): Discourse Structure. Norwood, NJ: Ablex.

Maybury, M. 1991. Planning Multimedia Explanations Using Communicative Acts. In: Proc. of the 9th National Conference on Artificial Intelligence, Anaheim, CA.

McCoy, K.F. 1986. The ROMPER System: Responding to Object-Related Misconceptions Using Perspective. In: Proc. of the 24th Annual Meeting of the Association for Computational Linguistics, New York.

Oei, S., Smit, R., Schreinemakers, J., Marinos, L. & Sirks, J. 1992. The Presentation Manager, A Method for Task-Driven Concept Presentation. In: Neumann, B. (ed.): Proc.
of the European Conference on Artificial Intelligence, 774-775. Chichester: John Wiley.

Reichman, R. 1986. Communication Paradigms for a Window System. In: Norman, D.A. & Draper, S.A. (eds.): User Centered System Design: New Perspectives on Human-Computer Interaction, 285-313. Hillsdale, NJ: Erlbaum.

Reichman, R. 1989. Integrated Interfaces Based on a Theory of Context and Goal Tracking. In: Taylor, M.M., Neel, F. & Bouwhuis, D.G. (eds.): The Structure of Multimodal Dialogue, 209-228. Amsterdam: North-Holland.

Rist, T. & Andre, E. 1992. From Presentation Tasks to Pictures: Towards a Computational Approach to Graphics Design. In: Neumann, B. (ed.): Proc. of the European Conference on Artificial Intelligence, 765-768. Chichester: John Wiley.

Schank, R. & Abelson, R. 1977. Scripts, Plans, Goals and Understanding. Hillsdale, NJ: Erlbaum.

Searle, J.R. 1975. Indirect Speech Acts. In: Davidson, D. & Harman, G. (eds.): The Logic of Grammar, 59-82. Encino, CA: Dickinson Publishing Co.

Searle, J.R. & Vanderveken, D. 1985. Foundations of Illocutionary Logic. Cambridge, GB: Cambridge University Press.

Sitter, S. & Stein, A. 1992. Modeling the Illocutionary Aspects of Information-Seeking Dialogues. Information Processing & Management 28(2): 165-180.

Stein, A., Thiel, U. & Tißen, A. 1992. Knowledge-Based Control of Visual Dialogues in Information Systems. In: Catarci, T., Costabile, M.F. & Levialdi, S. (eds.): Proc. of the 1st International Workshop on Advanced Visual Interfaces, Rome, Italy, 138-155. Singapore: World Scientific Press.

Tißen, A. 1991. A Case-Based Architecture for a Dialogue Manager for Information-Seeking Processes. In: Bookstein, A. et al. (eds.): Proc. of the 14th Annual International Conference on Research and Development in Information Retrieval, Chicago, 152-161. New York: ACM Press.

Tißen, A. 1993. Knowledge Bases for User Guidance in Information Seeking Dialogues. In: Gray, W.D. et al. (eds.): Proc.
of the 1993 International Workshop on Intelligent User Interfaces, Orlando, FL, 149-156. New York: ACM Press.

Winograd, T. & Flores, F. 1986. Understanding Computers and Cognition. Norwood, NJ: Ablex.
Matching 100,000 Learned Rules

Robert B. Doorenbos
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213-3891
Robert.Doorenbos@CS.CMU.EDU

Abstract

This paper examines several systems which learn a large number of rules (productions), including one which learns 113,938 rules - the largest number ever learned by an AI system, and the largest number in any production system in existence. It is important to match these rules efficiently, in order to avoid the machine learning utility problem. Moreover, examination of such large systems reveals new phenomena and calls into question some common assumptions based on previous observations of smaller systems. We first show that the Rete and Treat match algorithms do not scale well with the number of rules in our systems, in part because the number of rules affected by a change to working memory increases with the total number of rules in these systems. We also show that the sharing of nodes in the beta part of the Rete network becomes more and more important as the number of rules increases. Finally, we describe and evaluate a new optimization for Rete which improves its scalability and allows two of our systems to learn over 100,000 rules without significant performance degradation.

1. Introduction

The goal of this research is to support large learned production systems; i.e., systems that learn a large number of rules. Examination of such systems reveals new phenomena and calls into question some common assumptions based on previous observations of smaller systems. In large systems it is crucial that we match the rules efficiently; otherwise the systems will be very slow. In particular, we don't want the match cost to increase significantly as new rules are learned. Such an increase is one cause of the utility problem in machine learning (Minton, 1988) - if the learned rules slow down the matcher, the net effect of learning can be to slow down the whole system, rather than speed it up.
For example, learned rules may reduce the number of basic steps a system takes to solve problems (e.g., by pruning the search space), but the slowdown in the matcher increases the time per step, and this can outweigh the reduction in the number of steps. This has been observed in several machine learning systems (Minton, 1988; Etzioni, 1990; Tambe, Newell, & Rosenbloom, 1990; Cohen, 1990; Gratch & DeJong, 1992). This paper examines several systems which learn a large number of rules, including one which learns 113,938 rules - the largest number ever learned by an AI system, and the largest number in any production system in existence.

Recent work on large production systems has investigated their integration with databases (Sellis, Lin, & Raschid, 1988; Acharya & Tambe, 1992; Miranker et al., 1990). Much of this work is aimed at scaling up production systems to have large working memories, but only a relatively small number of rules. Scalability along the dimension of the number of rules, on the other hand, has been largely neglected. This dimension is of interest for machine learning systems, as noted above; it is also of interest for another class of production systems: those used for cognitive models. In such systems, the productions model human long-term memory.

The research was sponsored by the Avionics Laboratory, Wright Research and Development Center, Aeronautical Systems Division (AFSC), U.S. Air Force, Wright-Patterson AFB, OH 45433-6543 under Contract F33615-90-C-1465, ARPA Order No. 7597, and by the National Science Foundation under a graduate fellowship award. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. government.
Since the capacity of human long-term memory is vast, production systems used for cognitive models may require a very large number of rules. (Doorenbos, Tambe, & Newell, 1992) examined various aspects of a single system, Dispatcher-Soar, which learned 10,000 rules; no increase in match cost was observed. The current paper focuses entirely on matching, and studies three other systems in addition to Dispatcher-Soar; the four systems learn 35,000-100,000 rules.

We begin by showing that the best currently available match algorithms, Rete (Forgy, 1982) and Treat (Miranker, 1990), do not scale well with the number of rules. Section 2 shows that using the standard Rete algorithm, all these systems suffer a substantial increase in match cost. This was previously overlooked in Dispatcher-Soar due to the smaller number of rules learned and the way match cost was measured. Previous studies of smaller production systems suggest that Rete and Treat should scale well with the number of rules, because only a few rules are affected by changes to working memory, no matter how many rules are in the system. Section 3 shows that this does not hold in any of the four systems studied here. Since the work done by the Treat algorithm is at least linear in the number of affected productions, we conclude that Treat would not scale well with the number of rules in these systems. We suggest reasons for this difference in the number of affected productions, and note its possible implications for parallel production systems.

In Section 4, we examine the sharing of nodes in the beta part of the Rete network and show that it becomes increasingly important as the number of rules increases. This sharing has previously been considered relatively unimportant, and some match algorithms have been designed without incorporating it.
Our results here demonstrate that for certain classes of large learning systems, efficient match algorithms must incorporate sharing. Finally, we show that the scalability of the Rete algorithm can be improved by applying a new optimization, right unlinking, which eliminates one source of slowdown in the match as the number of rules increases. Section 5 describes and evaluates this optimization, which can improve the scalability of Rete and reduce the match cost by a factor of ~40. In Section 6, we suggest ways to reduce or eliminate other sources as well.

The results in this paper concern several large learning systems, each implemented using Soar (Laird, Newell, & Rosenbloom, 1987; Rosenbloom et al., 1991). Soar provides a useful vehicle for this research because in addition to providing a mechanism for learning new rules (chunking), it incorporates one of the best existing match algorithms (Rete). Soar is an integrated problem-solving and learning system based on formulating every task as search in problem spaces. Each step in this search - the selection of problem spaces, states and operators, plus the immediate application of operators to states to generate new states - is a decision cycle. The knowledge necessary to execute a decision cycle is obtained from Soar's knowledge base, implemented as a production system. If this knowledge is insufficient to reach a decision, Soar makes use of recursive problem-space search in subgoals to obtain more knowledge. Soar learns by converting this subgoal-based search into chunks, productions that immediately produce comparable results under analogous conditions (Laird, Rosenbloom, & Newell, 1986).

Four large learning systems are examined in this paper. The first system, Dispatcher-Soar (Doorenbos, Tambe, & Newell, 1992), is a message dispatcher for a large organization; it makes queries to an external database containing information about the people in the organization.
It uses twenty problem spaces, begins with ~1,800 rules, and learns ~114,000 more rules. The second system, Assembler-Soar (Mertz, 1992), is a cognitive model of a person assembling printed circuit boards. It is smaller than Dispatcher-Soar, using six problem spaces, starting with ~300 rules and learning ~35,000 more. Neither of these systems was designed with these experiments in mind. In addition to these two "natural" systems, two very simple "artificial" systems, Memory1 and Memory2, were built for these experiments. They each use two problem spaces (the minimum necessary for learning in Soar) and are based on the memorization technique described in (Rosenbloom & Aasman, 1990); they differ in the structure of the objects being memorized, with the learned rules in Memory1 more closely resembling some of those learned in Dispatcher-Soar, and those in Memory2 resembling Assembler-Soar. The two systems learned ~102,000 rules and ~60,000 rules, respectively. For each system, a problem-generator was used to create a set of problem instances in the system's domain. The system was then allowed to solve the sequence of problems, learning new rules as it went along.

2. Rete

Before presenting performance results, we briefly review the Rete algorithm. As illustrated in Figure 1, Rete uses a dataflow network to represent the conditions of the productions. The network has two parts. The alpha part performs the constant tests on working memory elements; its output is stored in alpha memories (AM), each of which holds the current set of working memory elements passing all the constant tests of an individual condition. The beta part of the network contains join nodes, which perform the tests for consistent variable bindings between conditions, and beta memories, which store partial instantiations of productions (sometimes called tokens).
When working memory changes, the network is updated as follows: the changes are sent through the alpha network and the appropriate alpha memories are updated. These updates are then propagated over to the attached join nodes (activating those nodes). If any new partial instantiations are created, they are propagated down the beta part of the network (activating other nodes). Whenever the propagation reaches the bottom of the network, it indicates that a production’s conditions are completely matched. The figure is drawn so as to emphasize the sharing of nodes between productions. This is because our focus here is on matching a large number of productions, not just a few, and in this case the sharing becomes important, as we demonstrate in Section 4. When two or more productions have a common prefix, they both use the same network nodes. Due to sharing, the beta part of the network shown in Figure 1 forms a tree. (In general, it would form a forest, but by adding a dummy node to the top of a forest, we obtain a tree.) The figure also omits the details of the alpha part of the network (except for the alpha memories). In this paper we focus only on the beta part of the network. Previous studies have shown that the beta part accounts for most of the match cost (Gupta, 1987), and this holds for the systems examined here as well. Figure 2 shows the performance of the basic Rete algorithm on our systems. 
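The update procedure just described can be sketched in miniature. This is a hedged illustration with invented names, not the Soar 6 implementation: constant tests are folded into the alpha-memory patterns, and WME removals, negated conditions, and hashed memories are all omitted.

```python
def is_var(x):
    return isinstance(x, str) and x.startswith("?")

def extend(bindings, pattern, wme):
    """Try to extend a token's bindings by matching one WME; None on conflict."""
    out = dict(bindings)
    for p, w in zip(pattern, wme):
        if is_var(p):
            if out.setdefault(p, w) != w:
                return None
        elif p != w:
            return None
    return out

class AlphaMemory:
    """Holds WMEs passing one condition's constant tests."""
    def __init__(self, pattern):
        self.pattern, self.wmes, self.successors = pattern, [], []
    def activate(self, wme):
        if all(is_var(p) or p == w for p, w in zip(self.pattern, wme)):
            self.wmes.append(wme)
            for join in self.successors:   # right activations
                join.right_activate(wme)

class JoinNode:
    """Joins tokens from above with WMEs from its alpha memory."""
    def __init__(self, parent_tokens, alpha_mem, name=None):
        self.parent_tokens = parent_tokens  # beta memory of the node above
        self.alpha_mem = alpha_mem
        self.beta_memory = []               # tokens = variable-binding dicts
        self.children = []
        self.name = name                    # set on bottom ("production") nodes
        self.matches = []
        alpha_mem.successors.append(self)
    def right_activate(self, wme):          # a new WME reached the alpha memory
        for tok in self.parent_tokens:
            self._join(tok, wme)
    def left_activate(self, tok):           # a new token arrived from above
        for wme in self.alpha_mem.wmes:
            self._join(tok, wme)
    def _join(self, tok, wme):
        new = extend(tok, self.alpha_mem.pattern, wme)
        if new is not None:
            self.beta_memory.append(new)
            if self.name:                   # bottom of network: complete match
                self.matches.append(new)
            for child in self.children:
                child.left_activate(new)

# Two productions sharing their first condition, hence one shared join node:
am_on   = AlphaMemory(("?x", "on", "?y"))
am_red  = AlphaMemory(("?y", "color", "red"))
am_left = AlphaMemory(("?y", "left-of", "?z"))
top = [{}]                                  # dummy token above the first join
j1 = JoinNode(top, am_on)                   # shared prefix for P1 and P2
p1 = JoinNode(j1.beta_memory, am_red,  name="P1")
p2 = JoinNode(j1.beta_memory, am_left, name="P2")
j1.children = [p1, p2]

for wme in [("b1", "on", "b2"), ("b2", "color", "red"), ("b2", "left-of", "b3")]:
    for am in (am_on, am_red, am_left):
        am.activate(wme)

print(p1.matches)   # [{'?x': 'b1', '?y': 'b2'}]
print(p2.matches)   # [{'?x': 'b1', '?y': 'b2', '?z': 'b3'}]
```

Note that the shared join node j1 does its work once, and both productions reuse the resulting tokens, which is the sharing effect quantified in Section 4.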
For each system, it plots the average match cost per decision cycle as a function of the number of rules in the system.[2]

Large Scale Knowledge Bases 291

[Figure 1: Rete network for several productions (left: P1 with conditions C1∧C2∧C3, P2 with conditions C1∧C2∧C4∧C5, P3 with conditions C1∧C2∧C4∧C3) and instantiated network for one production (right: conditions C1: (block <x> ^on <y>), C2: (block <y> ^left-of <z>), C3: (block <z> ^color red)).]

It clearly demonstrates that as more and more rules are learned, the match cost increases significantly in all four systems. Thus, the standard Rete algorithm does not scale well with the number of learned rules.

3. Affect Set Size

The data shown in Figure 2 may come as a surprise to readers familiar with research on parallelism in production systems. This research has suggested that the match cost of a production system is limited, independent of the number of rules. This stems from several studies of OPS5 (Forgy, 1981) systems in which it was observed that only a few productions (20-30 on the average) were affected by a change to working memory (Oflazer, 1984; Gupta, 1987). A production is affected if the new working memory element matches (the constant tests of) one of its conditions. This small affect set size was observed in systems ranging from ~100 to ~1000 productions. Thus, the match effort is limited to those few productions, regardless of the number of productions in the whole system.
[2] The decision cycle is the natural unit of measurement in Soar; each decision cycle is a sequence of a few match cycles (recognize-act cycles). The numbers reported here are based on Soar version 6.0, implemented in C, running on a DECstation 5000/200. The Rete implementation uses hashed alpha and beta memories (Gupta, 1987); the alpha part of the network is implemented using extensive hashing so that it runs in approximately constant time per working memory change.

292 Doorenbos

[Figure 2: Match time with standard Rete.]

This result does not hold for the systems we have studied. Figure 3 shows a lower bound[3] on the number of productions affected per match cycle (recognize-act cycle) for the four systems, plotted as a function of the number of productions. Each point on the figure is the mean taken over 100,000 match cycles. (Recall that the number of productions is gradually increasing as the system learns. The exact affect set size varies greatly from one cycle to the next.) In each system, the average affect set size increases fairly linearly with the number of productions. Moreover, each system grows to where the average affect set size is ~10,000 productions - considerably more than the total number of productions any of them started with.

[3] The graph shows that the lower bound is linearly increasing. An upper bound on the affect set size is the total number of productions, which of course is also linearly increasing. Measuring the exact affect set size is computationally expensive, but since both the lower bound and upper bound are linearly increasing, one can assume the actual mean affect set size is linearly increasing also - if it weren't, it would eventually either drop below the (linearly increasing) lower bound, or else rise above the (linearly increasing) upper bound.

[Figure 3: Affect set size.]
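The definition of an affected production in Section 3 admits a direct sketch. The toy productions and working-memory elements below are invented for illustration, not taken from the paper's systems:

```python
def is_var(x):
    return isinstance(x, str) and x.startswith("?")

def passes_constant_tests(condition, wme):
    """A WME passes a condition's constant tests if every non-variable
    field of the condition equals the corresponding WME field."""
    return all(is_var(c) or c == w for c, w in zip(condition, wme))

def affect_set(productions, wme):
    """All productions with at least one condition whose constant tests
    accept the new working memory element."""
    return {name for name, conds in productions.items()
            if any(passes_constant_tests(c, wme) for c in conds)}

productions = {
    "P1": [("?x", "on", "?y"), ("?y", "color", "red")],
    "P2": [("?x", "on", "?y"), ("?y", "left-of", "?z")],
    "P3": [("?x", "color", "blue")],
}
print(sorted(affect_set(productions, ("b1", "on", "b2"))))     # ['P1', 'P2']
print(sorted(affect_set(productions, ("b2", "color", "red"))))  # ['P1']
```

When many rules share a condition testing a single context element (a goal or state WME, say), one change to that element places all of them in the affect set at once, which is the growth effect the next section discusses.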
Why does the affect set size increase in these systems but not in the aforementioned OPS5 systems? (Gupta, 1987) speculates: “The number of rules associated with any specific object-type or any situation is expected to be small (McDermott, 1983). Since most working-memory elements describe aspects of only a single object or situation, then clearly most working-memory elements cannot be of interest to more than a few of the rules.” While this reasoning may hold for many OPS5 systems, it does not hold for systems that extensively use problem space search. In such systems, a few working memory elements indicate the active goal, state (or search node), and operator; these working memory elements are likely to be of interest to many rules. In the Soar systems studied here, many rules have the same first few conditions, which test aspects of the current goal, problem space, state, or operator. This same property appears to hold in Prodigy, another search-oriented problem-solving and learning system, for the rules learned by Prodigy/EBL (Minton, 1988). Whenever a working memory element matching one of these first few conditions changes, a large number of rules are affected. More generally, if a system uses a few general working memory elements to indicate the current problem-solving context, then they may be of interest to a large number of rules, so the system may have large affect sets. (Gupta, 1987) also speculates that the limited affect set size observed in OPS5 programs may be due to particular characteristics of human programmers; but these need not be shared by machine learning programs. For example, people often hierarchically decompose problems into smaller subproblems, then write a few rules to solve each lowest-level subproblem. Consequently, the few working memory elements relevant to a given subproblem affect only those few rules used to solve it. 
However, if we add a knowledge compilation mechanism to such a program, it may generate rules that act as "macros," solving many subproblems at once. This would tend to increase the number of rules in the system affected by those working memory elements: they would now affect both the original rules and the new macro-rules. More work needs to be done to better understand the causes of a large affect set size. However, the above arguments suggest that this phenomenon is likely to arise in a large class of systems, not just these particular Soar systems. Note that a limited affect set size is considered one of the main reasons that parallel implementations of production systems yield only a limited speedup (Gupta, 1987). The results here suggest that parallelism might give greater speedups for these systems. However, if sequential algorithms can be optimized to perform well in spite of the large number of affected productions, then speedup from parallelism will remain limited. So the implications of these results for parallelism are unclear.

4. The Importance of Sharing

Given the increase in the number of productions affected by changes to working memory, how can we avoid a slowdown in the matcher? One partial solution can already be found in the existing Rete algorithm. When two or more productions have the same first few conditions, the same parts of the Rete network are used to match those conditions. By sharing network nodes among productions in this way, Rete avoids the duplication of match effort across those productions. In our systems, sharing becomes increasingly important as more rules are learned. Figure 4 shows the factor by which sharing reduces the number of tokens (partial instantiations) generated by the matcher. The y-axis is obtained by taking the number of tokens that would have been generated if sharing were disabled, and dividing it by the number that actually were generated with sharing.
The figure displays this ratio as a function of the number of rules in each of the systems. (As mentioned before, we focus on the beta portion of the match - the figure shows the result of sharing in the beta memories and join nodes only, not the alpha part of the network. CPU time measurements would be preferable to token counts, but it would have taken months to run the systems with sharing disabled.) The figure shows that in the Dispatcher and Memory1 systems, sharing accounts for a tremendous reduction in the number of tokens; at the end of their runs, sharing reduces the number of tokens by two and three orders of magnitude, respectively. In the Assembler and Memory2 systems, sharing is not as effective, reducing the number of tokens by factors of 6 and 8, respectively. (In Figure 4, their points are very close to the horizontal axis.)

[Figure 4: Reduction in tokens due to sharing.]

Why is sharing so important in Dispatcher and Memory1? As mentioned above, in each of these systems, many of the learned rules have their first few conditions in common. Thus, newly learned rules often share existing parts of the Rete network. In particular, nodes near the top of the network tend to become shared by more and more productions, while nodes near the bottom of the network tend to be unshared (used by only one production) - recall that the beta portion of the network forms a tree. The sharing near the top is crucial in Dispatcher and Memory1 because nodes near the top are activated more often (on average) than nodes near the bottom. Thus, the token counts are dominated by the tokens generated near the top of the network, where sharing increases with the number of rules. In the Assembler and Memory2 systems, however, there is a significant amount of activity in lower parts of the network where nodes are unshared.
(The difference is because the lower part of the network forms an effective discrimination tree in one case but not the other.) So sharing is not as effective in Assembler and Memory2. In Section 6, we give ideas for reducing the activity in lower parts of the network. If they prove effective, the activity near the top of the network would again dominate, and sharing would be increasingly important in the Assembler and Memory2 systems as well.

Interestingly, work in production systems has often ignored sharing. Previous measurements on smaller production systems have found sharing of beta nodes to produce only very limited speedup (Gupta, 1987; Miranker, 1990). This is probably due to the limited affect set size in those systems. An important example of the consequences of ignoring sharing can be found in the Treat algorithm. Treat does not incorporate sharing in the beta part of the match, so on each working memory change, it must iterate over all the affected rules. Thus, Treat requires time at least linear in the affect set size. So it would not scale well for the systems examined here - like Rete as shown in Figure 2, it too would slow down at least linearly in the number of rules. Moreover, work on machine learning systems has also often failed to incorporate sharing into the match process. For example, the match algorithm used by Prodigy (Minton, 1988) treats each rule independently of all the others. As more and more rules are learned, the match cost increases, leading to the utility problem in Prodigy. Prodigy's approach to this problem is to discard many of the learned rules in order to avoid the match cost. The results above suggest another approach: incorporate sharing (and perhaps other optimizations) into the matcher so as to alleviate the increase in match cost.
5. Right Unlinking

While sharing is effective near the top of the Rete network, it is, of course, uncommon near the bottom, since new productions' conditions match those of existing productions only up to a certain point. (In these systems, it is rare for two productions to have exactly the same conditions.) Consequently, as more and more rules are learned, the amount of work done in lower parts of the network increases. This causes an overall slowdown in Rete, as shown in Figure 2. What can be done to alleviate this increase? The match cost (in the beta part of the network) can be divided into three components (see Figure 1): (1) activations of join nodes from their associated alpha memories (henceforth called right activations), (2) activations of join nodes from their associated beta memories (henceforth called left activations), and (3) activations of beta memory nodes. This section presents an optimization for the Rete algorithm that reduces (1), which turns out to account for almost all of the Rete slowdown in the Dispatcher and Memory1 systems. Some ways to reduce (2) and (3) are discussed in Section 6.

Our optimization is based on the following observation: on a right activation of a join node (due to the addition of a working memory element to its associated alpha memory), if its beta memory is empty then no work need be done. The new working memory element cannot match any items in the beta memory, because there aren't any items there. So if we know in advance that the beta memory is empty, we can skip the right activation of that join node. We refer to right activations of join nodes with empty beta memories as null right activations. We incorporate right unlinking into the Rete algorithm as follows: add a counter to every beta memory to indicate how many items it contains; update this counter whenever an item is added to or deleted from the memory. If the counter goes from 1 to 0, unlink each child join node from its associated alpha memory.
If the counter goes from 0 to 1, relink each child join node to its associated alpha memory. On each alpha memory there is a list of associated join nodes; the unlinking is done by splicing entries into and out of this list. So while a join node is unlinked from its alpha memory, it never gets activated by the alpha memory. Note that since the only activations we are skipping are null activations - which would not yield a match anyway - this optimization does not affect the set of complete production matches that will be found.

Figure 5 shows the results of adding right unlinking to Rete. Like Figure 2, it plots the average match time per decision cycle for each of the systems as a function of the number of rules. Note that the scale on the vertical axis is different; all four systems run faster with right unlinking. Figure 6 shows the speedup factors obtained from right unlinking in the systems. (This is the ratio of Figure 2 and Figure 5.)

[Figure 5: Match cost with right unlinking.]

[Figure 6: Speedup factors from right unlinking.]

Why is this optimization effective? At first glance it seems like a triviality, since it merely avoids null right activations, and a null right activation takes only a handful of CPU cycles. What makes right unlinking so important is that the number of null right activations per working memory change can grow linearly in the number of rules - and as the number of rules becomes very large, this can become the dominant factor in the overall match cost in Rete. We first explain why the number of right activations can grow linearly, then explain why almost all of these are null right activations. Recall from Section 3 that the average affect set size increases with the number of rules. For each affected production, there is some join node that gets right activated.
Of course it may be that all the affected productions share this same join node, so that there is only one right activation. But as the beta part of the network forms a tree, sharing is only effective near the top; whenever a new working memory element matches the later conditions in many productions, many different join nodes are activated. As an extreme example, consider the case where the last condition is the same in every one of n productions. A single alpha memory will be used for this last condition; but if earlier conditions of the productions differ, the productions cannot share the same join node for the last condition. Thus, the alpha memory will have n associated join nodes, and a new match for this last condition will result in n right activations. Right unlinking is essentially a way to reduce this potentially large fan-out from alpha memories. Although reordering the conditions would avoid the problem in this particular example, in many cases no such reordering exists. Now, as more and more rules are learned, the number of join nodes in the lower part of the network increases greatly, since sharing is not very common there. However, at any given time, almost all of them have empty beta memories. This is because usually there is some join node higher up that has no matches - some earlier condition in the rule fails. To see why, consider a rule with conditions C1, C2, ..., Ck, and suppose each Ci has probability pi of having a match. For the very last join node to have a non-empty beta memory, all the earlier conditions must match; this happens with the relatively low probability p1p2...pk-1. Since most of the lower nodes have empty beta memories at a given time, most right activations of them are in fact null right activations. In the Dispatcher and Memory1 systems, right unlinking yields a speedup factor of 40-50 when the number of rules is ~100,000. The factor increases with the number of rules.
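The counter-based bookkeeping described in this section can be sketched as follows. The class and method names are invented, and only the unlink/relink logic is shown; token creation and the joins themselves are stubbed out:

```python
class AlphaMemory:
    def __init__(self):
        self.successors = []          # join nodes currently linked to us
    def right_activate_all(self, wme):
        # Only linked join nodes pay any cost for this WME; unlinked join
        # nodes (whose beta memories are empty) are skipped entirely.
        return [join.right_activate(wme) for join in self.successors]

class BetaMemory:
    def __init__(self):
        self.count = 0                # number of tokens currently stored
        self.child_joins = []
    def add_token(self):
        self.count += 1
        if self.count == 1:           # 0 -> 1: relink child join nodes
            for j in self.child_joins:
                j.relink()
    def remove_token(self):
        self.count -= 1
        if self.count == 0:           # 1 -> 0: unlink child join nodes
            for j in self.child_joins:
                j.unlink()

class JoinNode:
    def __init__(self, beta_mem, alpha_mem):
        self.beta_mem, self.alpha_mem = beta_mem, alpha_mem
        self.linked = False
        beta_mem.child_joins.append(self)
        if beta_mem.count > 0:
            self.relink()
    def relink(self):
        if not self.linked:
            self.alpha_mem.successors.append(self)
            self.linked = True
    def unlink(self):                 # splice ourselves out of the list
        if self.linked:
            self.alpha_mem.successors.remove(self)
            self.linked = False
    def right_activate(self, wme):
        # Never a null activation: our beta memory is non-empty whenever
        # we are linked, by construction.
        return (self, wme)

am = AlphaMemory()
bm = BetaMemory()
join = JoinNode(bm, am)
print(len(am.successors))   # 0: beta memory empty, join starts unlinked
bm.add_token()
print(len(am.successors))   # 1: relinked once an earlier condition matched
bm.remove_token()
print(len(am.successors))   # 0: unlinked again; right activations skipped
```

Since the skipped activations could never have produced a match, the set of complete production matches is unchanged, exactly as argued above.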
Comparing Figures 2 and 5, we see that right unlinking eliminates almost all of the slowdown in the Dispatcher and Memory1 systems. Thus, the slowdown in the standard Rete for those systems is almost entirely due to increasing null right activations. In the Assembler and Memory2 systems, however, there are additional sources of slowdown in the standard Rete: left and beta memory activations are also increasing. Right unlinking eliminates just one of the sources, so the speedup factors it yields are lower (2-3). Note that the match cost measurements here are done directly in terms of CPU time. The token or comparison counts commonly used in studies of match algorithms would not reflect the increasing match cost in Rete or Treat, since a null right activation requires no comparisons and generates no new tokens. This prevented (Doorenbos, Tambe, & Newell, 1992) from noticing any increase in match cost in an earlier, shorter run of Dispatcher-Soar. The machine- and implementation-dependent nature of CPU time comparisons is avoided here by running the same implementation (modulo the right unlinking optimization) on the same machine.

6. Conclusions and Future Work

We have shown that the Rete and Treat match algorithms do not scale well with the number of rules in our systems, at least in part because the affect set size increases with the total number of rules. Thus, it is important to reduce the amount of work done by the matcher when the affect set size is large. Both the right unlinking optimization and the sharing of beta nodes can be viewed as members of a family of optimizations that do this. Right unlinking avoids the work for right activations associated with many of the affected rules which cannot match. The sharing of beta nodes allows the matcher to do work once for large groups of affected rules, rather than do work repeatedly for each affected rule.
Another possible optimization in this family is left unlinking: whenever a join node's associated alpha memory is empty, the node is unlinked from its associated beta memory. This is just the opposite of right unlinking; while right unlinking reduces the number of right activations by reducing the fan-out from alpha memories, left unlinking would reduce the number of left activations by reducing the fan-out from beta memories. Unfortunately, left and right unlinking cannot simply be combined in the same system: if a node were ever unlinked from both its alpha and beta memories, it would be completely cut off from the rest of the network and would never be activated again, even when it should be. Another optimization in this family is incorporated in Treat. Treat maintains a rule-active flag on each production, indicating whether all of its alpha memories are non-empty. If any alpha memory is empty, the rule cannot match, so Treat does not perform any of the joins for that rule. This essentially reduces the number of left activations and beta memory activations. But since Treat must at least check this flag for each affected rule, the number of right activations remains essentially the same. Right and left activations and beta memory activations together account for the entire beta phase of the match, so this suggests that the slowdown observed in the Assembler and Memory2 systems might be eliminated by some hybrid of Rete (with right unlinking) and Treat, or by adding to Treat an optimization analogous to right unlinking. Of course, much further work is needed to develop and evaluate such match optimizations.

7. Acknowledgements

Thanks to Anurag Acharya, Jill Fain Lehman, Dave McKeown, Paul Rosenbloom, Milind Tambe, and Manuela Veloso for many helpful discussions and comments on drafts of this paper, to the anonymous reviewers for helpful suggestions, and to Joe Mertz for providing the Assembler-Soar system.

8. References

Acharya, A., and Tambe, M. 1992.
Collection-oriented Match: Scaling Up the Data in Production Systems. Technical Report CMU-CS-92-218, School of Computer Science, Carnegie Mellon University.
Cohen, W. W. 1990. Learning approximate control rules of high utility. Proceedings of the Sixth International Conference on Machine Learning, 268-276.
Doorenbos, R., Tambe, M., and Newell, A. 1992. Learning 10,000 chunks: What's it like out there? Proceedings of the Tenth National Conference on Artificial Intelligence, 830-836.
Etzioni, O. 1990. A structural theory of search control. Ph.D. diss., School of Computer Science, Carnegie Mellon University.
Forgy, C. L. 1981. OPS5 User's Manual. Technical Report CMU-CS-81-135, Computer Science Department, Carnegie Mellon University.
Forgy, C. L. 1982. Rete: A fast algorithm for the many pattern/many object pattern match problem. Artificial Intelligence 19(1), 17-37.
Gratch, J. and DeJong, G. 1992. COMPOSER: A probabilistic solution to the utility problem in speed-up learning. Proceedings of the Tenth National Conference on Artificial Intelligence, 235-240.
Gupta, A. 1987. Parallelism in Production Systems. Los Altos, California: Morgan Kaufmann.
Laird, J. E., Newell, A., and Rosenbloom, P. S. 1987. Soar: An architecture for general intelligence. Artificial Intelligence 33(1), 1-64.
Laird, J. E., Rosenbloom, P. S., and Newell, A. 1986. Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning 1(1), 11-46.
McDermott, J. 1983. Extracting knowledge from expert systems. Proceedings of the Eighth International Joint Conference on Artificial Intelligence, 100-107.
Mertz, J. 1992. Deliberate learning from instruction in Assembler-Soar. Proceedings of the Eleventh Soar Workshop.
Minton, S. 1988. Learning Effective Search Control Knowledge: An Explanation-Based Approach. Ph.D. diss., Computer Science Department, Carnegie Mellon University.
Miranker, D. P. 1990. TREAT: A New and Efficient Match Algorithm for AI Production Systems.
San Mateo, California: Morgan Kaufmann.
Miranker, D. P., Brant, D. A., Lofaso, B., and Gadbois, D. 1990. On the performance of lazy matching in production systems. Proceedings of the Eighth National Conference on Artificial Intelligence, 685-692.
Oflazer, K. 1984. Partitioning in parallel processing of production systems. Proceedings of the IEEE International Conference on Parallel Processing, 92-100.
Rosenbloom, P. S., and Aasman, J. 1990. Knowledge level and inductive uses of chunking (EBL). Proceedings of the National Conference on Artificial Intelligence, 821-827.
Rosenbloom, P. S., Laird, J. E., Newell, A., and McCarl, R. 1991. A preliminary analysis of the Soar architecture as a basis for general intelligence. Artificial Intelligence 47(1-3), 289-325.
Sellis, T., Lin, C-C., and Raschid, L. 1988. Implementing large production systems in a DBMS environment: Concepts and algorithms. Proceedings of the ACM-SIGMOD International Conference on the Management of Data, 404-412.
Tambe, M., Newell, A., and Rosenbloom, P. S. 1990. The problem of expensive chunks and its solution by restricting expressiveness. Machine Learning 5(3), 299-348.
Department of Computer Science
University of Maryland
College Park, MD 20742

Abstract

PARKA, a frame-based knowledge representation system implemented on the Connection Machine, provides a representation language consisting of concept descriptions (frames) and binary relations on those descriptions (slots). The system is designed explicitly to provide extremely fast property inheritance inference capabilities. PARKA performs fast "recognition" queries of the form "find all frames satisfying p property constraints" in O(d+p) time, proportional only to the depth, d, of the knowledge base (KB), and independent of its size. For conjunctive queries of this type, PARKA's performance is measured in tenths of a second, even for KBs with 100,000+ frames, with similar results for timings on the Cyc KB. Because PARKA's run-time performance is independent of KB size, it promises to scale up to arbitrarily larger domains. With such run-time performance, we believe PARKA is a contender for the title of "fastest knowledge representation system in the world".

1. Introduction

Currently, AI is experiencing a period of soul-searching. Critics contend that the promise of the AI techniques of the 80's evaporated because those techniques did not deliver. It wasn't that their formalisms and theory were unacceptable (in fact, they worked fine for relatively small, contrived domains). Real-life domains, however, are orders of magnitude larger, and the run-time performance of these earlier AI techniques is often completely unacceptable for such domains (and even much smaller ones); that is, these techniques, while quite useful, are computationally ineffective (Shastri 1986). The field of knowledge representation (KR) offers many problems for which these classic AI techniques have yielded unacceptably slow performance. One example is recognition, the problem of identifying those frames that satisfy a given set of property constraints.
For example, "Find all x such that x is yellow, alive, and flies." Existing KR systems efficiently solve the converse problem - retrieving the properties of any given frame - but cannot do the same for recognition. In general, they answer recognition queries by traversing the entire knowledge base (KB), collecting the set of frames satisfying the given constraints. Their run-time for recognition queries is no better than linear in the size of the KB, i.e., O(n). It has been our goal to design a KR system fast enough to provide computationally effective recognition queries (and other types of queries) on KBs large enough to support real world commonsense reasoning. Such a system will serve as a foothold for the development of realistic AI applications requiring rapid response time. Our system, PARKA, is a symbolic, frame-based KR system that takes advantage of the Connection Machine's (CM) massive parallelism to deliver high run-time performance. It effects recognition queries in time virtually independent of the size of the KB, and dependent only on the KB's depth, d. (For large KB's, usually d = lg(n).) In the remainder of this paper, we discuss the design and implementation of PARKA, and present a short analysis of the expected and observed run-time performance of those operations used in answering recognition queries. To validate PARKA's inference mechanisms on a KB of realistic size and topology, we test PARKA's performance on the Cyc KB and argue that the performance shows that PARKA offers computationally effective recognition queries on realistically large KBs.

[1] Email: evett@cs.umd.edu
[2] Email: hendler@cs.umd.edu
[3] Email: waander@cs.umd.edu

2. PARKA

PARKA was designed as a general-purpose KB, for use by other AI systems.
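The linear-scan strategy criticized earlier - traverse every frame and keep those satisfying all constraints - looks like this in miniature. The toy KB below is invented for illustration, echoing the yellow/alive/flies example:

```python
# A tiny knowledge base: frame name -> property values. (Hypothetical data.)
kb = {
    "Tweety":   {"color": "yellow", "alive": True,  "flies": True},
    "Clyde":    {"color": "grey",   "alive": True,  "flies": False},
    "Big Bird": {"color": "yellow", "alive": True,  "flies": False},
}

def recognize(kb, constraints):
    """Find all x such that x satisfies every (property, value) constraint.
    Scans the whole KB, so it is O(n) in the number of frames - the cost
    PARKA's parallel scheme is designed to avoid."""
    return sorted(name for name, props in kb.items()
                  if all(props.get(p) == v for p, v in constraints.items()))

print(recognize(kb, {"color": "yellow", "alive": True, "flies": True}))
# ['Tweety']
```

Each added frame adds another iteration to the scan, regardless of how selective the constraints are; this is the scaling bottleneck the rest of the paper addresses.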
We chose a frame-based symbolic representation paradigm for two main reasons: first, frame systems have been used in KR for years and remain a common paradigm in contemporary AI research (Sowa 1991; J.CMA 1992, e.g.); second, semantic nets readily lend themselves to the data-level parallelism design so important in efficient parallel implementation. Most KR systems have two primary goals: expressiveness and formality. The authors of these systems want to express as many semantic concepts as possible with an unambiguous, rigorous formalism. For them, run-time efficiency is of secondary importance. They emphasize classic search and rule-based approaches, consequently suffering run-time that is at best linear, and at worst exponential, in the size of the problem. While these methodologies have a place in AI, their application to large KB's leads to unacceptable run-time performance, and it is expected that realistic KB's will be very large, indeed - on the order of 10's or 100's of millions of frames (Lenat & Guha 1990; Stanfill & Waltz 1986, etc.) Such KB's would be orders of magnitude larger than any existing today. To achieve computational effectiveness on large KBs, we designed PARKA with run-time performance as a primary goal of the system. Thus, we somewhat constrained PARKA's expressiveness - this was unavoidable because many operations are, in general, NP or even undecidable for term-subsumption languages that are sound and complete. Recognition, in particular, is NP, though it is a special case of classification, which can be undecidable (Nebel 1990). So, although PARKA's semantics are roughly based upon those of NETL (Fahlman 1979) and KL-ONE (Brachman & Schmolze 1985), we avoided semantic constructs lacking a computationally effective implementation.
We believe PARKA's run-time performance on even very large KBs more than compensates for its slightly restricted expressiveness when compared to that of other, serial, KR systems.

2.1 Design specifics

PARKA is a basic frame system: each frame corresponds to a concept represented in the KB and the collection of relations to which it belongs. Relations among concepts are represented as directed graphs whose arcs (or links) are stored as frame pointers in the slots of frames. Properties of frames are represented as relations for which the domain is the frames having the property, and the range is the corresponding property values. The KB can be viewed as the network formed by the frames and the links that connect them, similar to a semantic network (Fahlman 1979, e.g.). Ontological relations among frames are encoded via the IS-A ("is a") relation, which is intimately involved in the calculation of property inheritance inferences (see below). As such, it has special status in PARKA. The subgraph consisting of all IS-A links and frames is referred to as the IS-A hierarchy of the net. All PARKA IS-A hierarchies are rooted and acyclic. A small subset of a frame network is shown in Figure 1. The frames are shown as ovals. The properties of each frame are represented by the arcs emanating from them. Unlabelled arcs are IS-A links.

Figure 1: Subset of a Frame Network

2.2 Inheritance

PARKA employs a property inheritance mechanism on its IS-A hierarchy. A frame is said to be explicitly-valued for a given property if it is incident on a property link of that type, that is, if the frame contains a slot by that name. Any frame not explicitly-valued for a given property inherits the value of its nearest ancestor(s) (using a metric based on Touretzky's inferential distance ordering (IDO)) that is explicitly-valued for that property. Because PARKA supports multiple inheritance, it is possible to have more than one such node. In that case, PARKA disambiguates among viable ancestors.
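As a concrete (hypothetical) illustration of the structures just described, the following toy frame system inherits a property value from the nearest explicitly-valued IS-A ancestor. All frame and property names are invented, and the multiple-inheritance disambiguation discussed next is omitted:

```python
# Toy sketch of frames with IS-A parents and slots; a frame missing a
# property inherits from its nearest explicitly-valued IS-A ancestor.
from collections import deque

class Frame:
    def __init__(self, name, parents=(), **slots):
        self.name, self.parents, self.slots = name, list(parents), slots

def inherit(frame, prop):
    """Breadth-first search up the IS-A links: nearest explicit value wins."""
    queue = deque([frame])
    while queue:
        f = queue.popleft()
        if prop in f.slots:   # frame is explicitly valued for prop
            return f.slots[prop]
        queue.extend(f.parents)
    return None               # no ancestor is explicitly valued

animal = Frame("Animal", color="unknown")
bird   = Frame("Bird", [animal], flies=True)
tweety = Frame("Tweety", [bird])
print(inherit(tweety, "flies"))  # -> True, inherited from Bird
```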
The first version of PARKA (Evett, Spector & Hendler 1993; Evett & Hendler 1992) handled multiple inheritance, as have other frame-based KR systems (Brachman & Schmolze 1985, e.g.), by using inheritance path length to disambiguate multiple inheritance paths. The system chooses the ancestor having the shortest IS-A path from the inheritor. This inheritance paradigm suffers from redundancy, ambiguity, and other problems, and has been soundly criticized in the philosophical community as being epistemologically inadequate. Many of these criticisms are detailed in (Touretzky 1986) and (Brachman 1985). The current implementation uses a top-down, path-based, credulous inheritance mechanism based on Touretzky's IDO metric to disambiguate multiple inherited values that works like this: assume the frame in question, X, is not explicitly valued for the given property, P. Let B be the set of ancestors (B1, B2, ...) of X that are explicitly valued for P. X takes Bi's value for P as its own, provided Bi is an element of B such that there is no Bj (j ≠ i) such that Bj is an IS-A descendant of Bi. If more than one element of B meets this criterion, X is said to be ambiguously valued for property P. Unfortunately, many retrieval operations involving top-down, path-based inheritance mechanisms, including IDO, have been shown to be NP-hard (Selman & Levesque 1989). To calculate these operations in a timely manner, we adopted a slightly weaker ordering scheme for inheritance disambiguation. Again, let B be the set of ancestors of X that are explicitly valued for property P. X takes Bi's value for P as its own, where Bi is that element of B with the largest topological number. The topological number, topo(Z), of a frame Z, is defined inductively: topo(rootNode) = 0 and, for all other frames Z, topo(Z) = 1 + max{topo(Y) : Y ∈ C}, where C is the set of frames that are parents of Z.
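The topological-number scheme can be sketched as follows (our own toy code, with invented frame names): topo(rootNode) = 0, every other frame is one more than its deepest parent, and an ambiguous inheritance is resolved in favor of the explicitly-valued ancestor with the largest topological number.

```python
# Sketch of the topological numbering defined above:
# topo(root) = 0; topo(Z) = 1 + max over Z's parents of topo(parent).
def topo_numbers(parents_of):
    topo = {}
    def topo_of(z):
        if z not in topo:
            ps = parents_of.get(z, [])
            topo[z] = 0 if not ps else 1 + max(topo_of(p) for p in ps)
        return topo[z]
    for z in parents_of:
        topo_of(z)
    return topo

def disambiguate(candidates, topo):
    """candidates: explicitly-valued ancestors; deepest (largest topo) wins."""
    return max(candidates, key=lambda z: topo[z])

parents = {"root": [], "Gray": ["root"], "RoyalElephant": ["Gray"],
           "Clyde": ["RoyalElephant", "Gray"]}
topo = topo_numbers(parents)
print(topo["RoyalElephant"], disambiguate(["Gray", "RoyalElephant"], topo))
# -> 2 RoyalElephant   (topo 2 beats Gray's topo 1)
```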
Though PARKA's disambiguation mechanism is not quite as powerful as complete IDO, it enjoys many of the same advantages and is considerably stronger than a simple path-length based scheme. It does not suffer from the problems of redundancy noted in (Touretzky 1986), and in only one case does our inheritance scheme differ from IDO: in full IDO, if there are two explicitly valued ancestors, X and Y, but X is also an ancestor of Y, then Y is "more specific" than X, and so its property value is chosen. Our topological disambiguation scheme, however, may arbitrarily disambiguate among the two ancestors, even when neither is an IS-A ancestor of the other. In the vast majority of cases, though, PARKA's mechanism is equivalent to IDO. PARKA's encoding of the Cyc commonsense KB (see Section 3.4) revealed no cases in which PARKA disambiguates an inheritance relation that is ambiguous via IDO. If Cyc turns out to be typical of future KBs, this shortcoming of topological disambiguation will have little impact on most inferences.

2.3 Implementation

PARKA's internal representation of a frame consists of a block of processors, one for each of that frame's IS-A parents. These processors are contiguous across the CM's processor address space. One processor of each block is distinguished as a referent for all other frames pointing to the represented frame. The slots of each frame are encoded in a slot table, stored in one of the processors of that frame's block. The table is a list of pairs (<property>, <property-value>), each component being a processor address. Figure 2 illustrates the internal representation on the CM of part of the net shown in Figure 1. The large rectangles represent the distinguished processor of each block of processors corresponding to a particular frame. The smaller rectangles represent the remaining processors of those blocks. The square-bracketed values represent the address of the processor representing the property of that corresponding name.
Figure 2: Internal CM representation of a small subset of a frame network (processor segments, with name, IS-A, and slot-table fields, shown for frames such as "Barney")

To determine if a frame, Y, is explicitly valued for a given property, P, PARKA determines the (explicit) value of P for every frame in the KB by scanning through every frame's slot table in parallel, seeking an entry corresponding to P. If a matching entry is found in frame Y's slot table, the corresponding value stored there is the address of the processor representing the frame that is the value of property P for frame Y. In general, slot retrieval is proportional to the size of the largest slot table in the KB. Because property values tend to be scattered across inheritance paths, these tables are typically quite small. The largest slot table in our implementation of the Cyc KB had only 43 entries. Even so, PARKA maintains a cache of the most recently accessed properties to accelerate explicit property look-up.

3. Performance

To demonstrate that PARKA provides computationally effective KR, we implemented and timed several retrieval operations on very large KBs, including inheritance queries on very large, pseudo-random KBs and recognition queries on the Cyc KB.

3.1 Inheritance

Almost all KB queries involve some inferencing along the IS-A hierarchy because almost all involve calculating the inherited value of a set of properties for a set of frames. PARKA uses several data structures to make such inheritance inferencing very fast, including using multiple processors to represent frames having multiple parents. PARKA uses an activation wave propagation algorithm to calculate the value for a given property of every frame in the KB in time independent of the size of the network. First, the IS-A root frame is "activated", forming a nascent activation wave. At each iteration, this wave is passed downward along IS-A links.
The activation wave propagates synchronously: all nodes in the current "wave front" simultaneously activate the incident nodes that have not yet been activated. Each such iteration is a propagation step. Because PARKA is implemented on the CM, each propagation step is accomplished with a single parallel operation. In detail, at propagation step i:

1. Frames with a topological value of i (i.e., those at topological level i) that are explicitly valued for the property set their wave value to k, where k's high order bits are i, and k's low order bits are the frame's property value.
2. Every frame (processor) not explicitly valued for the property in question that has parent nodes at topological level i-1 "pulls" down the value of the activation wave from those parents.
3. Non-explicitly valued frames at topological level i choose as their own activation wave value the largest of those pulled down from the parent frames.

Because the high order bits of each wave value are the topological level of the origin node, the selected value conforms to the inheritance scheme outlined in section 2.2.

Figure 3: The Basic Inheritance Algorithm

The number of propagation steps required to calculate a property value is equivalent to the depth, d, of the network's IS-A hierarchy. Consequently, PARKA's run-time for queries such as "what things are black?" is O(d), and is independent of the size of the KB. We refer to such queries as "top-down", and they are the bane of most serial KR systems, requiring O(n) time to effect, where n is the size of the KB. Serial systems use indexing schemes to mitigate this computational morass, but indexing can be unsatisfactory for a variety of reasons (as we discuss in (Kettler, Hendler & Andersen 1993b)), including that it is typically infeasible to explicitly index all properties. The comparison between serial and parallel run-times is more striking when realizing that for realistic networks d = lg(n).
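A serial simulation of the wave algorithm of Figure 3 might look like this (our own sketch, not PARKA's CM code): one loop iteration stands in for one parallel propagation step, and a (level, value) tuple stands in for the high-order/low-order bit encoding, so that taking the maximum implements the topological disambiguation of section 2.2. All frame names are invented.

```python
# Level-synchronous simulation of the inheritance wave: at step i,
# explicitly-valued frames at level i emit (i, value); all other frames
# at level i take the max wave value pulled down from their parents.
def inherit_all(parents_of, explicit, topo):
    depth = max(topo.values())
    wave = {}                                  # frame -> (origin_level, value)
    for i in range(depth + 1):                 # one "parallel" step per level
        for f, lvl in topo.items():
            if lvl != i:
                continue
            if f in explicit:                  # step 1: explicitly valued
                wave[f] = (i, explicit[f])
            else:                              # steps 2-3: pull max from parents
                pulled = [wave[p] for p in parents_of.get(f, []) if p in wave]
                if pulled:
                    wave[f] = max(pulled)      # deepest origin level wins
    return {f: v for f, (_, v) in wave.items()}

parents = {"Thing": [], "Bird": ["Thing"], "Penguin": ["Bird"], "Opus": ["Penguin"]}
topo = {"Thing": 0, "Bird": 1, "Penguin": 2, "Opus": 3}
print(inherit_all(parents, {"Bird": "flies", "Penguin": "walks"}, topo))
# -> {'Bird': 'flies', 'Penguin': 'walks', 'Opus': 'walks'}
```

Note that the loop runs d+1 times regardless of how many frames exist, mirroring the O(d) claim above.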
It is commonly believed that such network shallowness will persist and probably be accentuated as net size increases. Our PARKA implementation of the Cyc commonsense KB (see section 3.4) enjoys a similarly shallow IS-A topology. To compare PARKA against a serial representation system, we created a serial version of PARKA, called SPARKA ("serial-PARKA"). To make the comparison as fair as possible, we implemented SPARKA as a severely stripped-down version of a more complete serial implementation (detailed in (Spector, Evett & Hendler 1990)). It has very little functionality other than for simple property inheritance calculations, but is optimized to effect those calculations as quickly as possible. We tested our analytical predictions of PARKA's run-time performance of simple property inheritance queries by timing PARKA's response to example queries on topologies of varying size and depth. Then, we timed SPARKA on the same queries and networks. The networks were quite large: up to 128K nodes. Because encoding such large networks by hand was not possible, we developed algorithms for generating pseudo-random networks with certain topological characteristics. These techniques are described in (Evett, Spector & Hendler 1993; Evett & Hendler 1992). Our experience with the Cyc KB (see section 3.4) has affirmed our belief that the topologies used to measure PARKA's performance reflect those of realistic KBs.

Figure 4: Run-time performance of inheritance queries on 8-level networks of varying sizes (run-time vs. KB size in processors, for 1-parent and 4-parent topologies, serial and PARKA)

Figure 4 shows PARKA's run-time for frame networks of depth 8, and of varying size. This timing suite isolates the effect of network size on run-time. These timings support our supposition that PARKA's computation of inheritance queries is independent of network size.
(Footnote: Timings were made on a "quarter" CM-2, consisting of only 16K processors.)

(Footnote: Actually, we observed a correlation between PARKA's run-time and the size of the networks. We examined this degradation away from our theoretical performance predictions (the results of this study are detailed in (Evett & Hendler 1993)). The degradation is completely accounted for by the performance of the CM's interprocessor communication operations, whose run-time degrades proportionally with router network load.)

Thus, PARKA's performance should scale up to arbitrarily larger KBs. The serial system's run-time, on the other hand, was linear with respect to network size. The figure contains best-fit curves to highlight this linear relation. The networks used in the timings were of two topological types: trees (each frame with exactly one parent) and directed graphs (each frame with between one and four parents). We used different topologies to demonstrate that PARKA's run-time is independent of upward IS-A fan-out. A comparison between the performance figures of SPARKA and PARKA in Figure 4 demonstrates that the latter remains computationally effective even for very large KBs, while the former's performance is unacceptable for large networks. We anticipate that this contrast will become increasingly stark for much larger KBs of applications in real-world domains.

3.2 Recognition Queries

The ability to solve recognition queries has driven much of PARKA's design. The problem of recognition is well-known in the field of KR (Wilensky 1986, e.g.) and is the problem of answering KB queries of the form: "find all frames x such that P1(x,c1) ∧ P2(x,c2) ∧ ... ∧ Pp(x,cp)", where each Pi(x,ci) is a unary predicate true for all frames x that have value ci for property Pi. E.g.: "What object is most characterized by this list of property values? ..." and "what things have trunks and tusks, and are big and gray?" Such queries are extremely time-consuming for serial-based systems, often running in time no better than O(pn), even on systems employing a highly constrained description language.
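The structure of such a conjunctive query can be sketched as follows (an illustrative toy, not PARKA's implementation; all names and data are invented): each property constraint is answered for all frames, and the per-property answer sets are intersected. Answering each constraint with one O(d) inheritance wave gives the O(dp) bound discussed next.

```python
# Recognition as intersection of per-property answer sets. Each entry of
# inherited_per_prop stands in for the result of one inheritance wave
# (the inherited value of that property for every frame).
def recognize(inherited_per_prop, constraints):
    """inherited_per_prop: property -> {frame: inherited value}."""
    sets = [{f for f, v in inherited_per_prop[p].items() if v == want}
            for p, want in constraints.items()]
    return set.intersection(*sets) if sets else set()

inherited = {
    "color": {"Elephant": "gray", "Clyde": "gray", "Canary": "yellow"},
    "big":   {"Elephant": True, "Clyde": True, "Canary": False},
}
print(recognize(inherited, {"color": "gray", "big": True}))
# members: Elephant and Clyde
```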
PARKA's ability to execute inheritance inferences quickly makes it particularly suitable to recognition. Because PARKA determines which objects have a given value for a given property in O(d) time, PARKA determines which objects satisfy a set of p property constraints in no more than O(dp) time, where d is the depth of the net. But PARKA does even better, using a pipelining technique to evaluate recognition queries in time O(d+p). Because each wave propagation step of an inheritance inference occurs "in lockstep", with all frames at the same topological level calculating their property value simultaneously, pipelining can be added to the basic process outlined in Figure 3. At each propagation step i, all frames at topological level j, such that i ≥ j > i-p, retrieve from their parent(s) the wave activation value having to do with the (i-j)-th element of the set of properties being inferred. Thus, the complete propagation requires d+p-1 propagation steps.

3.3 Using Cyc for Validation of PARKA

We tested our run-time predictions by timing PARKA's performance for recognition queries on an implementation of the Cyc KB (Lenat & Guha 1990). Our motivation for using Cyc to evaluate PARKA's performance is twofold. First, we want to validate PARKA's inference mechanisms on a KB of large size and realistic topology. Because Cyc is the largest and most comprehensive commonsense KB in existence, it is an obvious choice. A second and more exciting motivation is that we envision some future version of Cyc being built on top of a massively parallel substrate, like PARKA, to make its reasoning services fast enough to be used by an intelligent agent operating in the world in real time.
On the lowest level, Cyc consists of a frame system (frames are called "units" in Cyc), representing assertions (in the form of binary relations, or slots) about entities in the world. Above that level is the CycL "constraint language", which allows the specification of inferences to be made about units in the KB. The inference mechanisms provided by CycL range from the very simple, such as the slot inverse mechanism (e.g. father(John,Mary) → fatherOf(Mary,John)), to theorem proving using general "wffs", and to unsound inference methods such as analogy. PARKA implements only some of the inference capabilities provided by CycL, particularly those having to do with property inheritance. For our tests we represented only that subset of Cyc that involved IS-A based property inheritance and ontologies. This subset contained a total of 26,214 units, 8,591 (33%) of which were collections, and 17,623 (67%) instances. Of the instances, 4,031 (15% of the total) were slots (slots are explicitly represented in the Cyc ontology). To accommodate a KB of this size, we used a 16,384 processor CM-2 with a virtual processor ratio of 4:1. The maximum depth of the KB along the IS-A relation was 23 (i.e., shallow relative to KB size, as expected).

3.4 Performance of Recognition Queries

To test recognition query performance, we timed queries similar to those used by CycL to find units "similar" to a given unit. Units are considered similar if they share the same values for a number of properties exceeding some threshold. First, we selected a Cyc unit (#%Burma-1986) with a relatively large number (22) of local assertions (i.e., explicitly-valued properties), assigning an arbitrary ordering to those properties. We then ran recognition queries in PARKA to identify those frames that matched at least 50% of the first n slots (1 ≤ n ≤ 22) of #%Burma-1986. The recognition queries themselves, then, involved between 1 and 22 conjuncts.
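The similarity criterion used in these tests can be sketched as follows (our own toy; the property names and data are invented for illustration, not taken from Cyc):

```python
# A unit "matches" the probe if it shares values for at least the
# threshold fraction (here 50%) of the probe's first n properties.
def similar(probe_slots, unit_slots, n, threshold=0.5):
    """probe_slots: ordered list of (property, value); unit_slots: dict."""
    considered = probe_slots[:n]
    hits = sum(1 for p, v in considered if unit_slots.get(p) == v)
    return hits >= threshold * len(considered)

probe = [("continent", "Asia"), ("government", "military"), ("coastal", True)]
unit = {"continent": "Asia", "coastal": True, "government": "republic"}
print(similar(probe, unit, 3))  # 2 of 3 match -> True
```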
The run-time performance of these queries is plotted in Figure 5. As Figure 5 clearly shows, the time required to perform recognition queries grows only linearly in the number of conjuncts, p, and overall performance, even for a query of 22 conjuncts, is excellent. The run-time matches the O(d+p) performance predicted by analysis. This performance compares very favorably with recognition queries on serial systems, which require O(pn) time for the same queries, where n is the size of the KB. Recognition queries in PARKA are independent of KB size, and should scale up to arbitrarily larger domains. Indeed, in (Kettler et al 1993a) we report sub-second run-time performance of recognition queries in a case-based planning system using KBs of over 100,000 frames. This is a speed-up of more than 10,000 over the highly optimized serial version of PARKA.

Figure 5: Run-time of recognition queries of various sizes on the PARKA implementation of the Cyc KB (run-time vs. number of conjuncts, 0 to 25)

This experiment was designed not only to demonstrate the O(d+p) complexity of conjunctive recognition queries in PARKA, but also to show that PARKA can supply fast matching for analogy-related functions, a task that traditionally has been difficult for serial systems. For example, the CycL query that most nearly corresponds to those of Figure 5 finds only an arbitrary subset of the matching units. By exhaustively matching the probe against the entire KB simultaneously, PARKA finds all appropriate matches.

4. Related Work

PARKA is intended as a basis for large AI systems. The implementation of the Cyc KB in PARKA is the first of a series of uses of PARKA in other large AI systems. Potential uses for PARKA include case-based AI systems, as the basis of a massively parallel knowledge server, and as part of a knowledge-mining system. We plan to examine how PARKA might be more fully integrated into the Cyc representation system, proper.
Also, we implemented a simple version of PARKA on the MIMD CM-5. Preliminary results show that we should be able to represent KBs of over 1M frames on a 1K-processor CM-5 and obtain run-time performance nearly an order of magnitude better than the results in this paper. Because the CM-5 is a MIMD machine (though we use it as a SPMD machine), we can use several inferencing techniques that aren't possible on the SIMD CM-2. In particular, we plan to use an active messaging scheme (Von Eicken et al 1992) to increase the flexibility of PARKA's memory association schemes, and to increase the use of pipelining in inferencing. There are a few other parallel KR systems (Geller 1991; Moldovan, Lee & Lin 1989, to name two), and these are discussed more fully in (Evett 1993).

5. Conclusion

Using KBs with over 100,000 frames, we have shown that PARKA computes property inheritance and recognition queries in time independent of the size of the KB and dependent only on network depth. The run-time of these operations is in the tenths of seconds. This performance compares very favorably to serial representation systems. Because empirical evidence to date supports our analytical claims, we believe that PARKA's performance will scale up to larger KBs, even to those necessitated by memory-based reasoning technology. Thus, we argue that PARKA can supply computationally effective recognition queries for realistic KBs.

Acknowledgments

The authors wish to thank the Systems Research Center at the University of Maryland for their early support in this research, and for the continuing use of their hardware in the development of PARKA. The authors also wish to thank the University of Maryland Institute for Advanced Computer Studies (UMIACS) and Thinking Machines Corp. for the use of their Connection Machines. The support staff at both institutions was very helpful during the development of PARKA.
This work has been supported by AFOSR grant 01-5-28180, ONR grant N00014-88-K-0560, and NSF grant IRI-8907890.

(Footnote: We use the term "MBR" in a broader sense than in (Stanfill & Waltz 1986) to include such paradigms as case-based reasoning (Hammond 1989; Kettler et al 1993; Kitano & Higuchi 1991).)

References

Brachman, R.J. I Lied about the Trees. AI Magazine 6(3) (Fall 1985).

Brachman, R.J. and Schmolze, J.G. An Overview of the KL-ONE Knowledge Representation System. Cognitive Science 9(2) (April-June 1985).

JCMA. Computers and Mathematics with Applications, special issue on semantic networks, 23(2-5), 1992.

Evett, M.P. PARKA: A System for Massively Parallel Knowledge Representation. Ph.D. diss., Dept. of Computer Science, Univ. Maryland, College Park, 1993. Forthcoming.

Evett, M.P., Spector, L. and Hendler, J.A. Massively Parallel Frame-Based Property Inheritance in PARKA. To appear in Journal of Parallel and Distributed Computing.

Evett, M.P. and Hendler, J.A. Degradation of Interprocessor Communication Operations on the Connection Machine. Tech. Rep., Department of Computer Science, Univ. Maryland, College Park, March 1993.

Evett, M.P. and Hendler, J.A. An Update of PARKA, a Massively Parallel Knowledge Representation System. Tech. Rep. CS-TR-2850, Department of Computer Science, Univ. Maryland, College Park, February 1992.

Fahlman, S.E. NETL: A System for Representing and Using Real World Knowledge. MIT Press, Cambridge, MA, 1979.

Geller, J. ... Operations in Massively Parallel Knowledge Representation. Tech. Rep. CIS-91-28, Dept. Computer and Information Science, New Jersey Institute of Technology, Newark, NJ, 1991.

Hammond, K. Case-Based Planning: Viewing Planning as a Memory Task. Academic Press, 1989.

Kettler, B.P., Hendler, J.A., Andersen, W.A., and Evett, M.P. (1993a) "Massively Parallel Support for a Case-based Planning System". In Proceedings of the Ninth IEEE Conference on AI Applications, IEEE, 1993.

Kettler, B.P., Hendler, J.A. and Andersen, W.A. (1993b) Why Explicit Indexing Can't Work. Tech.
Rep., Department of Computer Science, Univ. Maryland, College Park, April 1993.

Kitano, H. and Higuchi, T. Massively Parallel Memory-Based Parsing. Proceedings of IJCAI-91, 1991.

Lenat, D.B. and Guha, R.V. Building Large Knowledge-Based Systems. Addison Wesley, Reading, Mass., 1990.

Moldovan, D., Lee, W., and Lin, C. SNAP: A Marker-Propagation Architecture for Knowledge Processing. Tech. Rep. CENG 89-10, Dept. Electrical Engineering-Systems, Univ. of Southern California, Los Angeles, CA, 1989.

Nebel, B. Terminological Reasoning Is Inherently Intractable. Artificial Intelligence 43(2) (May 1990).

Selman, B. and Levesque, H. The Tractability of Path-Based Inheritance. Proceedings of IJCAI-89, Morgan Kaufmann, San Mateo, CA, 1989.

Shastri, L. Massive Parallelism in Artificial Intelligence. Tech. Rep. MS-CIS-86-77 (LINC LAB 43), Dept. of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, 1986.

Sowa, J. (ed.) Principles of Semantic Networks. Morgan Kaufmann, San Mateo, CA, 1991.

Spector, L., Evett, M. and Hendler, J. Knowledge Representation in PARKA. Tech. Rep. TR-2409, Department of Computer Science, University of Maryland, College Park, MD, February 1990.

Stanfill, C. and Waltz, D. Toward Memory-Based Reasoning. Communications of the ACM 29(12), December 1986, pp. 1213-1228.

Touretzky, D.S. The Mathematics of Inheritance Systems. Morgan Kaufmann, Los Altos, CA, 1986.

Von Eicken, T., Culler, D., Goldstein, S. and Schauser, K. Active Messages: a Mechanism for Integrated Communication and Computation. Tech. Rep. UCB/CSD 92/675, Computer Science Division, EECS, University of California, Berkeley, CA, 1992.

Wilensky, R. Some Problems and Proposals for Knowledge Representation. Tech. Rep. UCB/CSD 86/294, University of California, Berkeley, May 1986.
Case-Method: ...
Hiroaki Kitano, Akihiro Shibata
Case Systems Laboratory, NEC Corporation
2-11-5 Shibaura, Minato, Tokyo 108, Japan
kitano@ccs.mt.nec.co.jp

Abstract

Developing large-scale systems is a major effort which requires careful planning and solid methodological foundations. This paper describes CASE-METHOD, the methodology for building large-scale case-based systems. CASE-METHOD defines the procedure which managers, engineers, and domain experts should follow in developing case-based systems, and provides a set of supporting tools. An empirical study shows that the use of CASE-METHOD attains significant workload reduction in system development and maintenance (more than 1/12) as well as qualitative change in corporate activities.

1 Introduction

This paper describes CASE-METHOD, a methodology for building and maintaining case-based reasoning (CBR [Riesbeck and Schank, 1989]) systems. CASE-METHOD has been inductively defined through a corporate-wide case-based system actually deployed at NEC Corporation. It is now being applied to several case-based systems, ranging from division-wide systems to corporate-wide and nation-wide case-based systems. Despite increasing expectations for using case-based reasoning as a practical approach to building cost-effective problem-solvers and corporate information access systems, no research has been made on how to develop and maintain case-based systems. In the software engineering community, particularly among practitioners, methodology development or selection is regarded as one of the most important development decisions to make. In general, software development methodologies define how to organize a project, how each development procedure should be carried out, and how to describe interfaces between development processes. Often, the methodology provides automated tools, which support some of the development processes involved [Downs et al., 1988; Wasserman et al., 1983].
A number of methodologies have been formulated by mainframe manufacturers and by consulting firms. Some of these are AD/Cycle by IBM, Method/1 by Andersen Consulting, SUMMIT by Coopers & Lybrand, and NAVIGATOR by Ernst and Young.

NEC Corporation
shimazu,shibata@joke.cl.nec.co.jp

If CBR systems are to be integrated into mainstream information systems, solid methodological support is essential. Unfortunately, however, no methodological support has been provided by the CBR community ([Acorn and Walden, 1992] is the possible exception, but only a part of the entire process has been defined). Although there are a few methodologies for building expert systems, such as HSTDEK by NASA [Freeman, 1987], KEMRAS by the Alvey project, KADS by ESPRIT, and EX/METHOD by NEC, these methodologies are not applicable for CBR system development, because the underlying principles are so different between expert systems and CBR. Thus, the authors had to develop their own methodology, optimized for case-based systems. CASE-METHOD was designed to be consistent with methodologies for non-CBR systems, so that the corporate information systems division would be able to use the methodology without major trouble. Also, CASE-METHOD was inductively defined based on several CBR projects actually carried out. Thus, CASE-METHOD is already a field-tested methodology. While it is not possible to describe all the details of the full CASE-METHOD due to space limitations, this paper describes the basic components of CASE-METHOD: vision, process definition, and tools.

2 Experience Sharing

Although CASE-METHOD can be applied to various ranges of projects, from task-specific problem-solvers to nation-wide case-based systems, the main target for CASE-METHOD is corporate-wide information systems. It is acknowledged that the mainstream corporate information system has been designed based on the idea of the Strategic Information System (SIS) [Wiseman, 1988].
However, how SIS should be designed and how the system should be operated has only been vaguely defined. In addition, SIS confined itself within the traditional information processing paradigm of corporate behavior [Simon, 1976]. Thus, how knowledge can be transferred and reformulated in the organization has not been an issue for traditional systems. The CBR community, on the other hand, has viewed CBR mainly as a new problem-solving mechanism, and has not discussed how the CBR idea impacts corporate information systems. In order to bridge these gaps, the authors propose the Experience Sharing Architecture (ESA) concept. ESA facilitates sharing of experiences corporate-wide, thereby promoting organizational knowledge creation and improving core skills in the corporation. This will be attained through the use of case-based systems integrated with mainstream information systems, such as existing SIS. While the authors agree on the importance of SIS and the effectiveness of the information processing paradigm of corporate behavior, the authors argue that a new dimension should be added, in order to further enhance the power of corporate information systems. Since people and organizations learn from experiences (see [Badaracco, 1991; Ishikura, 1992; Meen and Keough, 1992; Nonaka, 1991; Nonaka, 1990; Senge, 1990] for discussion on corporate knowledge creation and organizational learning), collecting, sharing, and mining experiences collected in the form of cases is the best approach to improving the knowledge level and skills of the organization. In [Kitano, et al., 1992], the effectiveness of case-based systems in supporting organizational knowledge creation and learning, particularly for Nonaka's theory, has been discussed. CASE-METHOD provides how to build and maintain the system to support organizational knowledge creation.
In addition, CASE-METHOD defines how organizational knowledge creation can be carried out, in the light of modern information technology. It is a methodology to develop case-based systems, as well as a methodology to implement the knowledge creation cycle.

3 Case-Method Cycle

The methodology employs an iterative approach, which allows the system to evolve as the process iterates. Figure 1 shows the system evolution cycle in CASE-METHOD. CASE-METHOD defines the system development process, the case-base development process, the system operation process, the database mining process, the management process, and the knowledge transfer process.

Figure 1: Case-Method Cycle

System Development Process  This process employs a standard software engineering approach, such as the waterfall model or flower model [Humphrey, 1989]. As a development methodology, the goal is to design and develop a CBR system which can store and retrieve a case-base created in the case-base development process.

Case-Base Development Process  The goal of this process is to develop and maintain a large-scale case-base. Details will be described in the next section.

System Operation Process  This process defines installation, deployment, and user support for the CBR system. This follows standard software engineering and RDB management procedures.

Data-Base Mining Process  Data-base mining will be carried out using the case-base. Statistical analysis, rule-extraction and other appropriate techniques will be applied. The current model defines how to analyze the case-base, using standard statistical procedures, and rule extraction, using decision trees [Quinlan, 1992]. This process is a subject for further research.

Management Process  This process defines how the project task force should be formed, what kind of organizational support should be provided to the project,
The authors have defined a mixed scheme involving bottom-up/top-down control, incentive systems, central/local controls, and a case-filtering committee. This process should be rearranged for each organization.

Knowledge Transfer Process This process defines methods to transfer knowledge (cases and extracted rules) to related divisions. Network-based system deployment, incentive systems, management control, and newsletter publications have been defined. In addition, how to create a case report format, which is one of the major feedback means, is defined.

These processes form one cycle in the knowledge creation and system evolution. Because of case structure reformulation in the case-base development process and rule extraction in the data-base mining process, the quality of knowledge to be stored and transferred improves as the cycle iterates. When the system reaches the maturity stage, it should hold a case-base consisting of appropriately represented and indexed high quality cases and a set of extracted rules specific to the application domain.

4 Major Process Definitions

Case Collection The first phase in the development requires collecting seed cases. The seed cases provide an initial concept regarding the application domain landscape. In the case of the SQUAD system [Kitano, et al., 1992], the authors started with 100 seed cases to define a crude case format and data structure. In the case of the nation-wide case retrieval system, the authors are working with several hundred cases from the beginning. As a start-up phase in the project, cases are generally collected in unstructured and non-uniform styles, such as full-text and other domain-specific forms.

From the second cycle, this phase involves (1) collection of cases which are consistent with the pre-defined case report format, and (2) filtering of cases so that only cases with a minimum acceptable quality will be sent to the next phase.
Cases are reported in a structured style, using the pre-defined case report form and full-text with a specified writing style. Products of this phase are (1) a set of case report forms, and (2) a set of case reports in full-text.

Attribute-Value Extraction The goal of the attribute-value extraction phase is to extract all possible elements of case representation and indexing. In the initial cycle, this phase consists of three processes: (1) keyword listing, (2) attribute identification, and (3) value grouping. The process can be semi-automatic, but a certain amount of human monitoring is necessary, as new keywords and compound nouns need to be identified by human experts. Each attribute and value is examined to determine whether or not it is independent of other attributes and values. Ideally, a set of attributes is expected to be a linearly independent set. In reality, however, this is not always possible, so some dependency is allowed; an excessive degree of dependency, though, makes case representation and indexing less transparent. Products of this phase are (1) a list of attributes, (2) a list of possible values for each attribute, (3) a thesaurus of keywords to be the value of each attribute, and (4) a set of normalized units for problem description and evaluation.

Hierarchy Formation The hierarchy formation phase defines relationships among keywords and attributes. For each attribute identified in the previous phase, a set of keywords has already been grouped. In this phase, relationships between keywords are defined, mostly using the IS-A relation. The process of defining the relationships is carried out in both bottom-up and top-down manner. Generally, it starts as a bottom-up process involving sub-grouping a set of keywords, and creates a super-class of one or more keywords. Then, the IS-A relation is defined between the created super-class and the keywords.
One or more super-classes are grouped and a super-class for them is defined. Then, the IS-A relation is defined between them. This iterative process builds up an IS-A hierarchy in a bottom-up manner. The bottom-up process creates a minimally sufficient hierarchy to cover values for the set of existing cases. However, it does not guarantee that the defined hierarchy can cope with unknown, but possible, cases. Thus, a top-down process is carried out to incorporate a set of classes and values to cover possible unknown cases. In the top-down process, the domain expert checks whether or not all possible subclasses are assigned for each class. If possible subclasses are missing, the missing classes are added.

After a set of hierarchies is defined, the relative importance of each attribute and the distance between individual values are assigned. Ideally, this weight and distance assignment process should be carried out on a sound statistical and empirical basis. However, in many cases, obtaining such statistical data is unfeasible. In fact, none of the in-house projects is capable of obtaining such statistics. This is mainly due to the nature of the domains and to constraints on development and deployment schedules. Moreover, in some systems, assigning pre-fixed weights works against the goals of the system: particularly when users' goals for using the system differ greatly, pre-fixed attribute weights undesirably bias the search space in an unintended fashion. Thus, decisions on how to assign weights and value distance measures must reflect the characteristics of the domain and the actual deployment plan. A product of this phase is a set of concept hierarchies created for each attribute. The hierarchies are annotated with similarities between values.

Database Definition and Data Entry Next, a database definition is created using the set of hierarchies just defined.
There are several methods to map the hierarchy into a relational database (RDB). Which method is to be used is up to the system designer. However, the CARET case-base retrieval shell supports a flat-record-style database definition, as opposed to structured indexing [Hammond, 1986; Kolodner, 1984], due to its ease of maintenance [Kitano, et al., 1992]. Using an RDBMS is an important factor in bringing CBR into the mainstream information system environment. At the end of these processes, the defined RDB contains a set of cases. The system should be operational after this phase.

Feedback The goal of the feedback phase is to provide explicit knowledge to case reporters, so that the quality of reported cases can be improved. In addition, it is expected that, by providing the explicit knowledge after extensive knowledge engineering, tacit and explicit knowledge regarding each case reporter may be reformulated in a more consistent manner. This is an important phase in the proposed methodology.

One way of providing feedback is to create a case report format. The case report format should be created from the hierarchy used to index the cases. There are three major benefits of distributing the case report format. First, by looking at items in the case report format, case reporters may be able to understand an overall picture of the domain in which they are involved. In the corporate-wide system, the level of expertise of case reporters may vary significantly, and some reporters are not aware of the correct classification applicable to problems and counter-measures. Distribution of the case report format is expected to improve the quality of reported cases. In fact, improvement in quality has been confirmed in the SQUAD system applied to the software quality control (SWQC; [Mizuno, 1990]) domain. Second, using the case report format reduces data entry cost.
Since all attribute-values are covered in the case report format, a simple bulk data entry strategy can be applied to register reported cases. This leads to substantial cost savings, as will be reported later. Third, by allowing free-form description of items which cannot be represented using the pre-defined attribute-values, new attributes and values can be identified easily and efficiently. These new attributes and values are added to the indexing hierarchies, and the case report format in the next cycle will include the new attribute-values. The product of this phase is a new case report format to be used for reporting cases in the next cycle.

5 Supporting Tools

A set of tools to support the process has been developed. Some of them are: CARET, an RDB-based CBR shell [Shimazu, et al., 1993]; Canae/Yuzu, a GUI construction kit; Hierarchy Editor, which helps develop concept hierarchies using a graphical interface; CAPIT, a case-based natural language interface [Shimazu, et al., 1992] which generates seed SQL specifications to be used by CARET; and database mining tools for trend and statistical analysis. Figure 2 shows a list of tools for each part of the system development. The key supporting tool is CARET, the RDB-based CBR shell.

6 Empirical Results

Although several on-going projects employ the methodology and tools described in this paper, an empirical result is reported on the effectiveness of the approach using a corporate-wide case-based system applied to software quality control (SWQC). The project was initiated in 1981 as a corporate-wide quality control project, and the case-based system was introduced recently. The authors have accumulated over 25,000 cases as of December 1992, and 3,000 cases are now being reported every year by over 15,000 active participants. The case-based system was called SQUAD, and its motivation and system architecture have been described in [Kitano, et al., 1992].
Figure 2: Tools and Process Definitions in Case-Method

System Development The authors have observed a significant reduction in system development cost. For this kind of system, the expected workload for developing the entire system (but excluding a knowledge-base) is about 10 man-months. However, the system was completed with less than 4 man-months of workload. Since this workload includes successive upgrading of the CARET CBR shell itself, the real workload for SQUAD itself is estimated to be about 1.5 man-months. Since CARET has reached a well-defined state, the authors expect the next system can be built within a 1 man-month workload. There are two major contributing factors for this workload reduction.

CARET is a CBR shell which operates on commercial relational database management systems (RDBMSs), such as ORACLE. When the user specifies attributes and values representing the problem, CARET produces a set of SQL (Structured Query Language) specifications, which are dispatched to the RDBMS. Since SQL cannot be used to achieve similarity-based retrieval, CARET produces several SQL specifications, ranked by a similarity measure calculated using the indexing hierarchies. For example, let us assume Computer is defined to have subclasses Parallel and Serial, Parallel has instances CM-2 and PARAGON, and Serial has an instance SparcStation. If the user specified CM-2 as a value for one of the attributes, such as Run-Time-Machine, the SQL specification with the highest similarity should contain Run-Time-Machine = CM-2. However, other SQL specifications with slightly lower similarity values may contain Run-Time-Machine = PARAGON. Even lower similarity SQL specifications may contain Run-Time-Machine = SparcStation.
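The similarity ranking in the CM-2/PARAGON/SparcStation example can be sketched as a distance computed over the indexing hierarchy. This is a minimal reconstruction for illustration, not NEC's actual implementation; the hierarchy encoding and the path-length distance function are our own assumptions.

```python
# Toy encoding of the example hierarchy: each value/class maps to its parent.
parent = {
    "CM-2": "Parallel", "PARAGON": "Parallel",
    "SparcStation": "Serial",
    "Parallel": "Computer", "Serial": "Computer",
    "Computer": None,
}

def ancestors(node):
    """Chain from node up to the root, inclusive."""
    chain = []
    while node is not None:
        chain.append(node)
        node = parent[node]
    return chain

def distance(a, b):
    """Path length between two values through their lowest common ancestor."""
    up_a, up_b = ancestors(a), ancestors(b)
    lca = next(x for x in up_a if x in up_b)
    return up_a.index(lca) + up_b.index(lca)

# Siblings under Parallel are closer than values under a different subclass.
print(distance("CM-2", "PARAGON"))       # 2
print(distance("CM-2", "SparcStation"))  # 4
```

Ranking candidate attribute values by this distance reproduces the ordering described in the example: CM-2 first, then PARAGON, then SparcStation.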
Such a relaxation strategy has been incorporated in CARET so that nearest-neighbour similarity retrieval can be carried out using a commercial RDBMS. One question which may be raised is whether or not such an approach can attain reasonable response time in large-scale case-based systems. As will be described later in Figure 5, a practically acceptable response time has been obtained using real data.

Users may describe the problem by choosing values for each attribute from menus provided by the graphical user interface, or by sending a seed SQL specification. The seed SQL specification may be generated by the CAPIT natural language user interface [Shimazu, et al., 1992], so that users may interact with the system in natural language.

Figure 3: Case-Base Building Workload

Figure 4: Total Workload

First, use of an RDBMS in CARET offered a significant workload saving in building a specific case storage mechanism. All necessary functionality and performance tuning facilities are provided by the commercial RDBMS. Second, Canae/Yuzu, a GUI construction environment, dramatically reduced the workload needed for user interface development. Since SQUAD extensively uses menus and tables for its user interface, pre-defined parts for the user interface eliminated requirements for coding these parts of the software. The authors' assessment indicates that the user interface development workload was reduced to 1/10.

Case-Base Building and Maintenance Application of the methodology resulted in qualitative and quantitative changes in case-base building and maintenance. On the quantitative side, the authors have observed a reduction in the workload for building the case-base from cases reported by various divisions.
Before the methodology was introduced, the case report format was free-form with about 20 items. One domain expert had been working on case-base building as her full-time job. Yet, it took almost 6 months to add the 1,500 cases reported twice a year. There are two activity cycles in a year; thus, processing the over 3,000 cases reported each year took a whole year. This is almost a 6 man-month workload for 1,500 cases, the number of cases that needs to be processed in one cycle. By introducing the methodology described in this paper, the workload began to decrease. After the fourth cycle, the total workload was reduced to 0.5 man-months for 1,500 cases, 1/12 of the initial workload. At this cycle, the number of attributes used reached 130. Figure 3 shows the history of workload reduction. Thus, total system development cost and maintenance cost have been reduced dramatically (Figure 4).

There are qualitative effects as well. As the number of attributes and possible values increases, more cases are covered by the set of values which are already on the case report form. Current coverage is over 95%. Thus, the case-base building process became less expertise-demanding. It turned into a simple data entry task, which can be automated in the next step. However, there are cases which still need special handling. These are cases which cannot be covered by the values and attributes defined in the case reports. A knowledge engineer on the development team analyzes and registers these cases. At the same time, new values or new attributes are added to the case report form, so that the coverage can be increased in the next cycle.

In addition, quick turnaround for data entry enabled the SWQC division to carry out detailed analysis of the new cases. This also enabled the SWQC division to inspect the quality of cases, using the extra time created as a result of the workload reduction.
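The flat-record storage and similarity-ranked SQL retrieval described earlier can be sketched end to end with an in-memory SQLite database. This is our own illustrative reconstruction: the table name, column names, and case data are invented, and CARET itself runs on commercial RDBMSs such as ORACLE rather than SQLite.

```python
import sqlite3

# Flat-record case table: one column per indexing attribute (names invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE case_base (case_id INTEGER, Problem TEXT, RunTimeMachine TEXT)")
conn.executemany("INSERT INTO case_base VALUES (?, ?, ?)", [
    (1, "memory leak", "CM-2"),
    (2, "deadlock", "PARAGON"),
    (3, "slow retrieval", "SparcStation"),
])

# Indexing hierarchy used to relax the query (the paper's Computer example).
parent = {"CM-2": "Parallel", "PARAGON": "Parallel", "SparcStation": "Serial",
          "Parallel": "Computer", "Serial": "Computer", "Computer": None}
children = {}
for child, p in parent.items():
    children.setdefault(p, []).append(child)

def leaves_under(cls):
    """All concrete values below a class in the hierarchy."""
    if cls not in children:
        return [cls]
    return [leaf for c in children[cls] for leaf in leaves_under(c)]

def relaxed_queries(attr, value):
    """SQL specifications ranked by similarity: exact match first, then
    siblings, then values under increasingly general classes."""
    seen, queries, cls, rank = set(), [], value, 0
    while cls is not None:
        for leaf in leaves_under(cls):
            if leaf not in seen:
                seen.add(leaf)
                queries.append((rank, f"SELECT case_id FROM case_base WHERE {attr} = '{leaf}'"))
        rank, cls = rank + 1, parent[cls]
    return queries

for rank, sql in relaxed_queries("RunTimeMachine", "CM-2"):
    print(rank, conn.execute(sql).fetchall())
```

Dispatching the queries in rank order yields nearest-neighbour behavior on top of plain SQL: the exact-match query runs first, and progressively relaxed queries retrieve cases about increasingly distant machines.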
Run Time Performance The CARET performance on a commercial RDBMS has attained a practically acceptable speed. Using the Oracle RDBMS on a SparcStation 2, the average response time for a query to a case-base of up to 1,500 cases (with 130 indexing attributes) is about 2.0 seconds. Figure 5 shows response times for various queries on various case-base sizes. Queries 2 and 3 are normal queries, and query 1 is the worst case performance.

Figure 5: Case-Base Retrieval Time

7 Conclusion

This paper has described CASE-METHOD, a methodology for building large-scale case-based systems. It is the first methodology defined for case-based systems development. The methodology was defined inductively through actual development projects, ranging from task-specific systems to corporate-wide and nation-wide systems. It is a battle-proven methodology.

CASE-METHOD provides a set of process definitions and supporting tools. Empirical study demonstrates that CASE-METHOD effectively reduced system development and maintenance cost, as well as offering qualitative changes in corporate activities. However, the authors have yet to define an effective means to validate and verify the system behavior, for the same reason pointed out in [Hennessy and Hinkle, 1992]. In order for the proposed methodology to be accepted as a mainstream system development methodology, these issues and consistency with ISO-9000-3 need to be addressed. However, CASE-METHOD has been successfully deployed, and is a first step toward a methodology for building large-scale case-based systems.

References

[Acorn and Walden, 1992] Acorn, T.
and Walden, S., “SMART: Support Management Automated Reasoning Technology for Compaq Customer Service,” Innovative Applications of Artificial Intelligence 4, AAAI Press, 1992.

[Hammond, 1986] Hammond, K., Case-Based Planning: An Integrated Theory of Planning, Learning, and Memory, Ph.D. Thesis, Yale University, 1986.

[Hennessy and Hinkle, 1992] Hennessy, D. and Hinkle, D., “Applying Case-Based Reasoning to Autoclave Loading,” IEEE Expert, Oct. 1992.

[Humphrey, 1989] Humphrey, W., Managing the Software Process, Addison-Wesley, 1989.

[Ishikura, 1992] Ishikura, Y., Building Core Skills of the Organization, NTT Publishing, 1992 (in Japanese).

[Kitano, et al., 1992] Kitano, H., Shibata, A., Shimazu, H., Kajihara, J., and Sato, A., “Building Large-Scale and Corporate-Wide Case-Based Systems,” Proc. of AAAI-92, San Jose, 1992.

[Kolodner, 1984] Kolodner, J., Retrieval and Organizational Strategies in Conceptual Memory: A Computer Model, Lawrence Erlbaum Assoc., 1984.

[Meen and Keough, 1992] Meen, D. and Keough, M., “Creating the learning organization,” The McKinsey Quarterly, No. 1, 1992.

[Mizuno, 1990] Mizuno, Y., Total Quality Control for Software, Nikka-giren, 1990 (in Japanese).

[Nonaka, 1991] Nonaka, I., “The Knowledge Creating Company,” Harvard Business Review, Nov.-Dec., 1991.

[Nonaka, 1990] Nonaka, I., A Theory of Organizational Knowledge Creation, Nikkei, 1990 (in Japanese).

[Quinlan, 1992] Quinlan, R., C4.5: Programs for Machine Learning, Morgan-Kaufmann, 1992.

[Riesbeck and Schank, 1989] Riesbeck, C. and Schank, R., Inside Case-Based Reasoning, Lawrence Erlbaum Associates, 1989.

[Senge, 1990] Senge, P., The Fifth Discipline: The Art & Practice of The Learning Organization, Doubleday, 1990.

[Shimazu, et al., 1992] Shimazu, H., Arita, S., and Takashima, Y., “Design Tool Combining Keyword Analyzer and Case-Based Parser for Developing Natural Language Database Interfaces,” Proc. of COLING-92, Nantes, 1992.

[Shimazu, et
al., 1993] Shimazu, H., Kitano, H., and Shibata, A., “Retrieving Cases from Relational Data-Base: Another Stride Towards Corporate-Wide Case-Based Systems,” Proc. of IJCAI-93, 1993.

[Simmon, 1976] Simmon, H., Administrative Behavior, 3rd edition, Free Press, 1976.

[Badaracco, 1991] Badaracco, J., The Knowledge Link, Harvard Business School Press, 1991.

[Downs et al., 1988] Downs, E., Clare, P., and Coe, I., Structured Systems Analysis and Design Method, Prentice Hall International, 1988.

[Freeman, 1987] Freeman, M., “HSTDEK: Developing A Methodology for Construction of Large-scale, Multi-use Knowledge Bases,” NASA Conference Publication 2492, NASA/Marshall Space Flight Center, 1987.

[Wasserman et al., 1983] Wasserman, A., Freeman, P., and Pacella, M., “Characteristics of Software Development Methodologies,” in Olle, T., Sol, H., and Tully, C. (Eds.), Information Systems Design Methodologies, North Holland, 1983.

[Wiseman, 1988] Wiseman, C., Strategic Information Systems, Irwin, 1988.
Automated Index Generation for Constructing Large-scale Conversational Hypermedia Systems

Richard Osgood and Ray Bareiss
The Institute for the Learning Sciences
Northwestern University
Evanston, Illinois 60201
osgood@ils.nwu.edu and bareiss@ils.nwu.edu

Abstract

At the Institute for the Learning Sciences we have been developing large scale hypermedia systems, called ASK systems, that are designed to simulate aspects of conversations with experts. They provide access to manually indexed, multimedia databases of story units. We are particularly concerned with finding a practical solution to the problem of finding indices for these units when the database grows too large for manual techniques. Our solution is to provide automated assistance that proposes relative links between units, eliminating the need for manual unit-to-unit comparison. In this paper we describe eight classes of links, and show a representation and inference procedure to assist in locating instances of each.

Introduction

Interaction with a knowledge-based system typically provides a user with only limited information. For example, a diagnostic system typically returns a classification in response to a sequence of situational features. If an explanation is provided, it is usually a trace of the system's inference process. In contrast, consultation with a human expert typically provides a wealth of information. An expert knows which questions to ask in a problem solving situation, why those questions are important, which questions not to ask, how to interpret and justify the actual results, alternative methods of data collection, et cetera.
*This research was supported, in part, by the Defense Advanced Research Projects Agency, monitored by the Air Force Office of Scientific Research under contract F49620-88-C-0058 and the Office of Naval Research under contract N00014-90-J-4117, by the Office of Naval Research under contract N00014-89-J-1987, and by the Air Force Office of Scientific Research under contract AFOSR-89-0493. The Institute for the Learning Sciences was established in 1989 with the support of Andersen Consulting, part of The Arthur Andersen Worldwide Organization. The Institute receives additional support from Ameritech and North West Water, which are Institute partners, and from IBM.

Unfortunately, these aspects of expertise have proven difficult to represent with current AI formalisms. As a practical alternative, builders of knowledge-based systems have turned to hypermedia to capture such knowledge in a partially represented form [Spiro and Jehng, 1990].

For the last three years, we have been developing a class of large-scale hypermedia systems, called ASK systems [Ferguson et al., 1992], that are designed to capture important aspects of a conversation with an expert. An ASK system provides access to a multimedia database containing short video clips of interviews with experts, archival video material, and text passages. Currently, these systems are indexed in two ways. ASK systems can be built by human “indexers” (our term for knowledge engineers) who use a question-based methodology and some supporting tools to create relative links between pieces of the material [Osgood and Bareiss, 1992]. Our experience shows that as the size of the system's database grows beyond about 100 stories (depending on the degree of interrelatedness), the process of identifying relevant connections between stories becomes prohibitively difficult for indexers. (The term story refers to an individual content unit in the database and is not limited to the traditional narrative sense.)
We call this phenomenon the indexer saturation problem: an indexer cannot remember enough about the contents of the database to make all appropriate connections, and the prospect of exhaustive search for all connections is onerous [Conklin, 1987].

The second way in which the problem arises is when authors must index their own stories. School Stories is a collaborative hypermedia authoring environment for telling and interconnecting stories about grade K-12 experiences in US public schools. There is no separate indexer role in the system. Authors notice a connection between a story in the system and one they know. The new story is entered into the system and linked directly by its author to the eliciting story at the point it is told. Unfortunately, no easy way exists for an author to find links between a new story and the rest of the database.

We are beginning to provide automated assistance to achieve more complete interconnectivity in all our ASK systems than is possible with our current manual indexing methods. The contents of each story are represented as input to a computerized search process which compares simple representations of the input story to those of other stories in the story base and proposes connections between them to an indexer or author.

Although fully automated indexing of stories would be ideal, we do not believe it to be practical, given the current state of the art of knowledge representation. It would require a more complete representation of story content, as well as large amounts of commonsense knowledge, to infer automatically the same set of connections typically made by human indexers. Given our desire to build a practical tool today, we have decided to employ a partial representation of story contents and very limited auxiliary knowledge.
The cost of this decision is the requirement to keep a skilled human “in the loop” to determine the relevance of proposed links, and to maintain a story representation that can be easily processed by both machines and humans (see, e.g., semiformal knowledge structures [Lemke and Fischer, 1990]). This decision balances the strengths of humans (e.g., feature extraction and categorization) and computers (e.g., rapid search and record keeping), enabling us to build a useful tool and solve a problem intractable to either machine or human alone. The remainder of this paper discusses the ASK model of hypermedia, our representation of stories, the specific procedures for inferring links between stories, and our ongoing research.

The ASK Model of Hypermedia

ASK systems are based on a simple theory of the memory organization that might underlie conversation about problem solving [Schank, 1977; Ferguson et al., 1992]. This general theory argues that coherence in a conversation comes from the connectivity of human memory, i.e., there is alignment between expression and thought (see, e.g., [Chafe, 1979]). We hypothesize that after hearing a piece of information in such a conversation, there are only a few general categories of follow-up information that represent a natural continuation of the thread of the conversation rather than a major topic shift. The categories can be thought of as the poles of four axes or dimensions. These eight poles represent the most general kinds of questions that a user is likely to have in a conversation about problem solving. The browsing interface of an ASK system reifies this model of conversation by placing each relative link between stories in one of these eight general categories [Ferguson et al., 1992]. Users can find their specific questions in the category that best describes the question. The four dimensions are Refocusing, Causality, Comparison, and Advice.
The Refocusing dimension concerns both adjustments to the specificity of the topic under consideration and relevant digressions, like clarifying the meanings of terms or describing situations in which the topic arises. One pole, Context, points to the big picture within which a piece of information fits. The other, Specifics, points to examples of a general principle, further details of a situation, definitions of terms, descriptions of parts of the whole, et cetera.

The Causality dimension arises directly out of the human desire to understand a situation in terms of its antecedents and consequences. We group temporal order and the causal chain together because people typically collapse the distinction. The Causes (or earlier events) pole points to how a situation developed. The Results (or later events) pole points to the outcome of a situation.

The Comparison dimension concerns questions of similarity and difference, analogy and alternative, at the same level of abstraction as the reference story. The Analogies pole points to similar situations from other contexts or from the experiences of other experts. The Alternatives pole points to different approaches that might have been taken in a situation or differences of opinion between experts.

Finally, the Advice dimension captures the idea of carrying away a lesson, either negative or positive, for use in the problem solver's situation. The Opportunities pole points to advice about things a problem solver should capitalize upon in a situation. The Warnings pole points to advice about things that can go wrong in a problem solving situation.

The Partial Representation of Stories

Our approach to devising a representation for stories has been to provide a domain-independent representational frame that is instantiated with domain-specific fillers (Figure 1). A primary purpose of the frame is to enforce consistency of feature selection by an indexer.
The representation is simple, indexical, and natural for human indexers to employ. It is just detailed enough to support the types of inference needed to recognize relationships between stories. In this and subsequent sections, we will describe a model of naive intentionality expressed in this frame structure, and inference procedures specific to the conversational categories. We will offer examples of each from the School Stories application.

Because all of the stories of interest in the School Stories domain (K-12 school experiences) concern human intentional behavior, our representation is based upon the intentional chain [Schank and Abelson, 1975]. This is the simple model implicit in the design of the upper section of the frame shown in Figure 1. First, agents play roles and have beliefs that influence their selection of a course of action. Second, to play out those roles, agents establish goals and plans to achieve them. Finally, actions based on those plans and goals yield both intended and unintended results.

AgentRole: athlete (the role the agent plays in the story)
BeliefType: strong doesn't mean dumb (the agent's belief inducing the goal)
IntentionLevel: actually did (the level of intentionality: goal, plan, or act)
IntentionType: get good grades (the goal, plan, or action of an agent)
OutcomeTypes: positive emotional (the results of the IntentionType)
SituationType: conflict with others (a name linking multiple interacting frames)
TimeOfOccurrence: after reference (sequencing information for frames)
StoryType: literal example (story application information)

Figure 1: A representational frame for describing one scene of a story

Figure 2: IntentionType Slot Fillers Near Get Good Grades

When representing a story, an indexer must instantiate the slots of this domain-independent frame with
To achieve representational consistency, fillers are chosen from pre-enumerated taxonomies-one for each slot. Each filler exists in a domain specific hier- archy. The semantics of the hierarchies are intentional for the IntentionType slot, for example, getting good grades is a way to grdu& (Figure 2) and categorical for the rest, e.g., for the AgentRole slot, a teacher without leverage is a kind of teacher. Figure 1 also shows examples of fillers drawn from the School Sto- ries domain. A priori enumeration of all slot fillers is not in- tended. Rather our idea is to provide an indexer- extensible set for which the initial enumeration serves as an example. Indexers can enter a new term in the hierarchy by determining its similarity to pre-existing fillers. Assessment of the similarity of fillers during representation works because it is conducted by index- ers in the target system’s task context-the same one in which they would have judged the appropriateness of hand-crafted relative links between stories. In effect, the similarity of concepts is represented in equivalence classes, not computed from features [Porter, 19891, i.e., similar concepts have a common parent in the hier- archy. To infer links, these hierarchies of equivalence classes are processed by inference procedures described in the next section. The representational frame or scene captures the in- tentionality of a single agent. The upper portion the Figure 1 frame says: an athlete actually did get good grades by believing that being strong doesn’t mean be- ing dumb and this had a positive emotional impact on him/her. In the frame’s lower part in Figure 1 we include three additional slots. The SituationType slot func- tions both to group frames together and to describe the kind of agent interaction in those frames enabling the reprepresentation of interactions among multiple agents, sometimes with conflicting goals [Schank and Osgood, 19911. 
Indexers employ multiple frames, one or more for each agent, filling just the slots in each that they feel apply, as in Figure 3. For example, a situation about how to handle student boredom is captured by selecting Being Bored to fill the SituationType slots of two frames of the same story, one about a Student who Shows Lack of Interest and the other about a Teacher who Assigns An Independent Activity.

The frame representation deliberately overspecifies situations. This makes feasible inferences of the same type at two different levels of abstraction. For example, similarity between stories can be assessed at the level of an entire situation through the fillers of the SituationType slot. Similarity can also be assessed between stories at the level of agent activity through fillers of the top section of the frame in Figure 1.

The TimeOfOccurrence slot supports sequencing of scenes in stories to establish intrastory causal/temporal relationships. For example, the term at reference indicates the relative point in time of the main action of the story, while drawing a lesson from the story happens after reference, another time designation.

The StoryType slot allows the indexer to advise the inferencing mechanism to identify what the story might be useful for and what the level of abstraction of the story content is. For example, if a story contains useful cautionary advice, this slot will contain the value Warnings. If a story is a good explicit example of something, Literal Example would fill this slot.

Inference Procedures

We have implemented inference procedures for all of the link types specified by the ASK model of hypermedia. In concept, inference procedures compare one of the representation frames of a reference story with all other frames in the story base. Operationally, inference is implemented as path finding, not exhaustive search and test.
Links from slot fillers in the reference story frames are traversed in the concept hierarchy to identify sibling fillers which instantiate slots of other stories. Inference procedures are implemented as deductive retrieval rules which exploit the relationships between slot fillers. Each rule can create one or more links depending on whether or not the link type is symmetric, e.g., analogies, or complementary, e.g., context/specifics. There are many senses of each link type. A particular rule finds only one. Summaries of each rule we have implemented are listed below. Each is described as a process which indexes a new story, the reference story, with respect to existing stories in the database which are potential follow-up stories.

Large Scale Knowledge Bases 311

Figure 3: Two scenes for the story Entertaining the Troublemaker
  Scene 1: AgentRole: Student; IntentionLevel: Actually Did; IntentionType: Show Lack of Interest; OutcomeTypes: Successful; SituationType: Being Bored; TimeOfOccurrence: At Reference; StoryType: Opportunity
  Scene 2: AgentRole: Teacher; IntentionLevel: Actually Did; IntentionType: Assign Independent Activity; OutcomeTypes: Successful Positive; SituationType: Being Bored; TimeOfOccurrence: At Reference; StoryType: Opportunity

Context, Specifics, and Examples are the implemented Refocusing rules. In a reference story scene, if the parent concept of the situation or the agent's activity (e.g., in the concept hierarchy for situations, interpersonal struggles is the parent of being bored) occurs in a potential follow-up story scene, the context link is proposed. If on the other hand it is a child concept that is present in the follow-up story scene, then the specifics link is proposed. When a specifics link has been proposed and the follow-up story scene also has the story type of literal example, then an examples link is also proposed.

Earlier Events, Later Events, Causes and Results are the Causality rules.
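The Refocusing rules just described reduce to parent/child lookups in the filler hierarchies. The sketch below is a hypothetical illustration (the encoding and function name are ours, not the system's): context when the follow-up uses the parent concept, specifics when it uses a child, and specifics promoted to examples for literal-example follow-ups.

```python
# Hypothetical sketch of the Refocusing rules over a parent-pointer hierarchy.

PARENT = {
    "being bored": "interpersonal struggles",
    "disrupt class": "show lack of interest",
}

def refocus_links(ref_filler, follow_filler, follow_story_type=None):
    links = []
    if PARENT.get(ref_filler) == follow_filler:
        links.append("context")        # follow-up scene uses the parent concept
    if PARENT.get(follow_filler) == ref_filler:
        links.append("specifics")      # follow-up scene uses a child concept
        if follow_story_type == "literal example":
            links.append("examples")   # specifics promoted to examples
    return links

print(refocus_links("being bored", "interpersonal struggles"))
# ['context']
print(refocus_links("show lack of interest", "disrupt class", "literal example"))
# ['specifics', 'examples']
```

Operationally this is the "path finding, not exhaustive search" point: each proposal is a single hierarchy traversal from a reference filler, not a scan over all story pairs.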
When absolute temporal information is available in a reference story scene, and a potential follow-up story scene describes the same situation or similar agent activity and has an earlier absolute time designation, the earlier events link is proposed. The later events link is proposed analogously. When absolute temporal information is not available in a reference story scene, and a potential follow-up story scene has the same agent activity but an earlier position in the intentional chain (e.g., in Figure 2, graduate is earlier in the intentional chain than get good grades), a causes link is proposed. A results link is proposed if the follow-up is later than the reference story scene in the intentional chain. Also, when a reference story scene is missing a belief to explain an agent's activity or situation, causes links are proposed to all follow-up story scenes that can supply one. A results link is proposed if the reference and follow-up story scenes are about similar situations or have similar agent activity and the follow-up story scene can provide the reference scene with missing outcome information.

Analogies and Alternatives are the Comparison rules. If a reference and follow-up story scene have agents with similar beliefs, situations, or activities (as determined taxonomically, e.g., in Figure 2, pass exams is a peer of get good grades), then an analogies link is proposed between them. However, if in otherwise similar story scenes a dissimilar value is found in exactly one of the slots used above to compute similarity, then an alternatives link is proposed instead.

Warnings and Opportunities are the Advice rules. In similar reference and follow-up story scenes, if one has a story type of one of the advice link types and the other does not, then a link of that type is proposed from the former to the latter. The indexer provides these story type values when representing the story.
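The Comparison rules hinge on counting mismatched slots. A hypothetical sketch follows; the slot names come from Figure 1, while the similarity predicate, filler strings, and function name are illustrative assumptions (in the system, similarity would come from the filler hierarchies rather than string equality):

```python
# Hypothetical sketch of the Comparison rules: analogies when all similarity
# slots match, alternatives when exactly one slot disagrees.

SLOTS = ("BeliefType", "SituationType", "IntentionType")

def compare_link(ref, follow, similar):
    """ref/follow: dicts mapping slot name -> filler; similar: a predicate."""
    mismatches = [s for s in SLOTS if not similar(ref.get(s), follow.get(s))]
    if not mismatches:
        return "analogies"
    if len(mismatches) == 1:
        return "alternatives"
    return None                      # too dissimilar: no comparison link

same = lambda a, b: a == b           # stand-in for taxonomic similarity
ref = {"BeliefType": "bored kids act out", "SituationType": "being bored",
       "IntentionType": "show lack of interest"}
print(compare_link(ref, dict(ref), same))                                # analogies
print(compare_link(ref, {**ref, "IntentionType": "leave class"}, same))  # alternatives
```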
When we first defined the system, the information needs of these inference procedures determined the definition of the frame as well as the parts of the domain concept hierarchy vocabulary that are explicitly mentioned in the rules, e.g., a story type of literal example, used in the examples link inference. Likewise, these rules operate in conjunction with the representations of similarity built into the equivalence classes of the hierarchy. The effectiveness of machine-assisted relative indexing is dependent upon the tuning of this relationship between rules and representation. Experience with tuning the School Stories system indicates that this task is within the capabilities of our indexers.

An Example: Indexing School Stories

Automated inference helps the authors working on School Stories find appropriate links between stories. While our work in this area is ongoing, the examples below illustrate the kinds of links between stories that can be inferred from the simple representation of stories described above. One story entitled Entertaining the Troublemaker begins:

One problem for smart kids is to keep from boring them in school. Each year that I was in school, my teachers had to find some way to keep me out of trouble since I was both bored and rambunctious. In the second grade I ran messages for the teacher. In the third I built baseball parks out of oak tag. In the fourth I wrote songs. These events turn out to be most of what I remember from those years. School for me was one long attempt to avoid boredom and trouble.[1]

The author has represented it in two scenes in Figure 3.

[1] This story was written by Roger Schank for GroupWrite: School Stories.

As part of its search for links, the system runs
its inference procedure for finding examples links, listed above, against the frames in Figure 3 and Figure 4. The rule for examples first specializes the fillers for each of the slots of the frame for the reference story (Figure 3). For instance, one causal specialization of the IntentionType slot with filler Show Lack of Interest is Disrupt Class, i.e., one way to show lack of interest is to disrupt class (Figure 5). Because the candidate follow-up story frame in Figure 4 has the StoryType slot filler Literal Example, the system proposes an examples link to the story A Different Bag of Tools, the story represented partially by the frame in Figure 4, which reads:

I had learned to do integrals by various methods shown in a book that my high school physics teacher, Mr. Bader, had given me. One day he told me to stay after class. "Feynman," he said, "you talk too much and you make too much noise. I know why. You're bored. So I'm going to give you a book. You go up there in the back, in the corner, and study this book, and when you know everything that's in this book, you can talk again."[2]

Figure 4: A Scene from A Different Bag of Tools
  AgentRole: Student; IntentionLevel: Actually Did; IntentionType: Disrupt Class; OutcomeTypes: Successful; SituationType: Being Bored; TimeOfOccurrence: At Reference; StoryType: Literal Example

Figure 5: IntentionType Fillers Near Disrupt Class

Figure 6: A Scene from the story A Deal's a Deal
  AgentRole: Student; IntentionLevel: Actually Did; IntentionType: Leave Class; OutcomeTypes: Successful; SituationType: Getting What You Want; TimeOfOccurrence: At Reference; StoryType: Literal Example

In this simple case, our representation was sufficient to infer a possible examples link.[3] The system continues its search and finds additional ways in which to

[2] This story was extracted by Ian Underwood for GroupWrite: School Stories from Feynman, R. (1985), Surely You're Joking, Mr. Feynman: Adventures of a Curious Character. New York: W. W.
Norton.

[3] In a group story-telling environment, authors do not maintain strong causal/temporal threads by telling a sequence of related stories. Therefore the conversational categories have analogical semantics. In the case of an examples link (one sense of specifics), one story is an example of the kind of thing discussed in general terms by a story which is probably about another situation, written by another author.

connect these same two stories. It finds a similarity link (one sense of analogies) as well, through SituationType: Being Bored. The human indexer can accept one or both of these links for inclusion in School Stories. The system goes on to propose as many other links as the story representations and rules will permit. The author accepts or rejects them as appropriate.

The representation-rule combination excludes some close yet still inappropriate links as well. The frame for the story A Deal's a Deal in Figure 6 does not qualify as an examples link for our original story because, while it has the StoryType slot filler Literal Example, the IntentionType filler Leave Class is not a specialization of Show Lack of Interest (Figure 5). In the story A Deal's a Deal the students were upset because a teacher had broken a promise. It was not that they were bored.

How well the approach excludes near misses depends on the assignment of filler terms to equivalence classes in the concept hierarchies. This assumes that agents do similar things for the same reasons. This kind of similarity limits inadvertent feature matching, because similarities are derived within the context of a specific unambiguous hierarchy locale. In the above example, one construal of Leave Class could conceivably be to Show Lack of Interest, but that is not the reason in A Deal's a Deal. In that story the agents Leave Class as a way to Refuse to Cooperate with a Teacher. Showing Lack of Interest is a weaker reason and is not represented as similar, i.e., not placed in the same local context of the intentional hierarchy (Figure 5). These simple examples illustrate how richly connected the stories in our test domain are and how, with a simple representation and processes, these links can be inferred.

Given the human commitment to fill out frames for stories and to verify each system-proposed link, such a method significantly reduces the cognitive load human indexers face.

Ongoing Research

This work raises a number of research issues: balancing a fine-grained representation against the ability to do simple syntactic feature matching, extending domain concept hierarchies consistently, and testing the effectiveness of the inference rules for machine-assisted indexing.

It is difficult to determine just how much detailed domain knowledge should be represented in the content hierarchies to support the kinds of inferencing we have envisioned. There is a trade-off between the coarseness of the representation and its usefulness for inferring links by feature matching. At one extreme we could have used fine-grained representations that enrich expressiveness but make overall determination of similarity between stories very difficult, because the representations must be processed deeply to compensate for acceptable variation in representation. At the other extreme we could have reified complex relationships into flat propositional features, which reduces inferencing to simple feature matching. For example, we rejected the use of complex role relations as a way to represent multiple interacting agents in the AgentRole slot, e.g., student who is infatuated with the teacher but the teacher does not respond favorably.
Use of such unduly extended filler names flattens the representation, lessening the ability to infer links, because the internal structure of the filler is not accessible to inference [Domeshek, 1992]. We have tried to find an acceptable balance in our representation between flat and deep representation. Our principle is to provide just the amount of representation needed by the inference rules we have defined.

It is the indexer's job to define the domain concept hierarchies and to use these as fillers in frames for stories. These fillers establish equivalence classes for inferencing. Also, where they are placed in the hierarchy represents a prediction about where future indexers will find fillers to describe their stories. Therefore, consistency and economy in the selection of the hierarchy vocabulary is required by both machine and human. We do not yet know how consistent the human extension of domain hierarchies will be. Our experience to date suggests that indexers sometimes overlook or misinterpret the semantics of existing fillers. In many domains, different vocabularies tend to be used in different situations. The result is the creation of synonymous categories. Indexers may also misuse the hierarchy by placing elements of widely divergent levels of abstraction at the same level in the hierarchy. Our current solution is to use the simplest partial concept hierarchy that will support the desired inferences, a corollary of the principle governing representation for rules stated above.

Finally, we have not yet subjected the conversational category-based inference rules for machine-assisted linking to a systematic comparison with the link sets derived by human indexers independently. We have, however, conducted some informal checks on the system's performance in one domain (School Stories). The automated approach found a superset of the links human indexers found in a sample of 16 stories selected at random from the database.
We are beginning to apply our technique in a very different domain, i.e., military transportation planning.

These open issues have not prevented us from seeing some significant benefits to indexers already from machine-assisted knowledge acquisition as described herein. Ideally, as our inference procedures are improved and as our confidence grows that the indexes generated converge with those humans would produce, we may be able to grant autonomy to some of them, enabling our ASK hypermedia systems to generate some classes of relative links dynamically. Whether or not that proves possible, we are creating an optimal partnership between human and tool, enabling large-scale relative indexing which neither human nor machine can do alone.

Acknowledgments: The dynamic indexing tool was written by Paul Brown and Paul Rowland.

References

Chafe, W. 1979. The flow of thought and the flow of language. In Givon, T., editor, Discourse and Syntax. Academic Press, New York. 159-181.

Conklin, E. 1987. Hypertext: An introduction and survey. IEEE Computer 20(9):17-41.

Domeshek, E. 1992. Do the Right Thing: Component Theory for Indexing Stories as Social Advice. Ph.D. Dissertation, Yale University, New Haven, CT.

Ferguson, W.; Bareiss, R.; Birnbaum, L.; and Osgood, R. 1992. ASK systems: An approach to the realization of story-based teachers. The Journal of the Learning Sciences 2:95-134.

Lemke, A. and Fischer, G. 1990. A cooperative problem solving system for user interface design. In Proceedings of the Eighth National Conference on Artificial Intelligence, Menlo Park, CA. AAAI Press/The MIT Press.

Osgood, R. and Bareiss, R. 1992. Index generation in the construction of large-scale conversational hypermedia systems. AAAI-93 Spring Symposium on Case-Based Reasoning and Information Retrieval.

Porter, B. 1989. Similarity assessment: Computation vs. representation. In Proceedings: Case-Based Reasoning Workshop, San Mateo, CA. Morgan Kaufmann Publishers.

Schank, R.
and Abelson, R. 1975. Scripts, Plans, Goals and Understanding. Lawrence Erlbaum Associates, Hillsdale, NJ.

Schank, R. and Osgood, R. 1991. A content theory of memory indexing. Technical Report 2, The Institute for the Learning Sciences, Northwestern University, Evanston, IL.

Schank, R. 1977. Rules and topics in conversation. Cognitive Science 1:421-441.

Spiro, R. and Jehng, J. 1990. Cognitive flexibility and hypertext: Theory and technology for the nonlinear traversal of complex subject matter. In Nix, D. and Spiro, R., editors, Cognition, Education, and Multimedia: Exploring Ideas in High Technology. Lawrence Erlbaum Associates, Hillsdale. 163-205.
Probabilistic Prediction of Protein Secondary Structure Using Causal Networks (Extended Abstract)

Arthur L. Delcher*, Computer Science Dept., Loyola College, Baltimore, MD 21210
Simon Kasif*, Dept. of Computer Science, Johns Hopkins University, Baltimore, MD 21218

Abstract

In this paper we present a probabilistic approach to analysis and prediction of protein structure. We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology, predicting the secondary structure of proteins, and obtain experimental results comparable to several other methods. The causal networks that we use provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights about the problem being studied.

Introduction

Scientific analysis of data is an important potential application of Artificial Intelligence (AI) research. We believe that the ultimate data analysis system using AI techniques will have a wide range of tools at its disposal and will adaptively choose various methods. It will be able to generate simulations automatically and verify the model it constructed with the data generated during these simulations. When the model does not fit the observed results, the system will try to explain the source of error, conduct additional experiments, and choose a different model by modifying system parameters. If it needs user assistance, it will produce a simple low-dimensional view of the constructed model and the data. This will allow the user to guide the system toward constructing a new model and/or generating the next set of experiments. We believe that flexibility, efficiency and direct representation of causality are key issues in the choice of representation in such a system. As a first step, in this paper we present a probabilistic approach to analysis and prediction of protein structure.
Harry R. Goldberg, Mind-Brain Institute, Johns Hopkins University, Baltimore, MD 21218
William Hsu, Dept. of Computer Science, Johns Hopkins University, Baltimore, MD 21218

* Supported by NSF DARPA Grant CCR-8908092 and AFOSR Grant AFOSR-89-1151

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

316 Delcher

We argue that this approach provides a flexible and convenient mechanism to perform general scientific data analysis in molecular biology. We apply our approach to an important problem in molecular biology: predicting the secondary structure of proteins [Chou and Fasman, 1978; Garnier et al., 1978]. A number of methods have been applied to this problem with various degrees of success [Holley and Karplus, 1989; Cost and Salzberg, 1993; Qian and Sejnowski, 1988; Maclin and Shavlik, 1992; Zhang et al., 1993; Muggleton and King, 1991]. In addition to obtaining experimental results comparable to other methods, there are several theoretically and practically important observations that we have made in experimenting with our system.

It has been claimed in several papers that probabilistic (statistical) approaches have been outperformed by neural network methods and memory-based methods by a wide margin. We show that probabilistic methods are comparable to other methods in prediction quality. In addition, the predictions generated by our methods have a precise quantitative semantics which is not shared by other classification methods. Specifically, all the causal and statistical independence assumptions are made explicit in our networks, thereby allowing biologists to study causal links in a convenient manner. This generalizes correlation studies that are normally used in statistical analysis of data.

Our method provides a very flexible tool to experiment with a variety of modelling strategies. This flexibility allows a biologist to perform many practically important statistical queries which can yield important insight into a problem.

From the theoretical point of view we found that different ways to model the domain produce practically different results. This is an experience that AI researchers encounter repeatedly in many knowledge-representation schemes: different coding of the problem in the architecture results in dramatic differences in performance. This has been observed in production systems, neural networks, constraint networks and other representations. Our experience reinforces the thesis that while knowledge representation is a key issue in AI, a knowledge-representation system typically provides merely the programming language in which a problem must be expressed. The coding, analogous to an algorithm in procedural languages, is perhaps of equally great importance. However, the importance of this issue is grossly underestimated and not studied as systematically and rigorously as knowledge representation languages.

Previous methods for protein folding were based on the window approach. That is, the learning algorithm attempted to predict the structure of the central amino acid in a "window" of k amino acid residues. It is well recognized that in the context of protein folding, very minimal mutations (amino acid substitutions) often cause significant changes in the secondary structure located far from the mutation site. Our method is aimed at capturing this behavior.

Protein Folding

Proteins have a central role in essentially all biological processes. They control cellular growth and development, they are responsible for cellular defense, they control reaction rates, they are responsible for propagating nerve impulses, and they serve as the conduit for cellular communication. The ability of proteins to perform these tasks, i.e., the function of a protein, is directly related to its structure. The results of Christian Anfinsen's work in the late 1950's indicated that a protein's unique structure is specified by its amino-acid sequence. This work suggested that a protein's conformation could be specified if its amino acid sequence was known, thus defining the protein folding problem. Unfortunately, nobody has been able to put this theory into practice.

The biomedical importance of solving the protein folding problem cannot be overstressed. Our ability to design genes, the molecular blueprints for specifying a protein's amino acid sequence, has been refined. These genes can be implanted into a cell and this cell can serve as the vector for the production of large quantities of the protein. The protein, once isolated, potentially can be used in any one of a multitude of applications, ranging from supplementing the human defense system to serving as a biological switch for controlling abnormal cell growth and development. A critical aspect of this process is the ability to specify the amino acid sequence which defines the required conformation of the protein.

Traditionally, protein structure has been described at three levels. The first level defines the protein's amino acid sequence; the second considers local conformations of this sequence, i.e., the formation of rod-like structures called α-helices, planar structures called β-sheets, and intervening sequences often categorized as coil. The third level of protein structure specifies the global conformation of the protein. Due to limits on our understanding of solutions to the protein folding problem, most of the emphasis on structure prediction has been at the level of secondary structure prediction.

There are fundamentally two approaches that have been taken to predict the secondary structure of proteins. The first approach is based on theoretical methods and the second is based on data derived empirically. Theoretical methods rely on our understanding of the rules governing amino acid interactions; they are mathematically sophisticated and computationally time-intensive. Conversely, empirically based techniques combine a heuristic with a probabilistic schema in determining structure. Empirical approaches have reached prediction rates approaching 70%, the apparent limit given our current base of knowledge.

The most obvious weakness of empirically based prediction schemes is their reliance on exclusively local influences. Typically, a window that can be occupied by 9-13 amino acids is passed along the protein's amino acid sequence. Based on the context of the central amino acid's sequence neighbors, it is classified as belonging to a particular structure. The window is then shifted and the amino acid which now occupies the central position of the window is classified. This is an iterative process which continues until the end of the protein is reached. In reality, the structure of an amino acid is determined by its local environment. Due to the coiled nature of a protein, this environment may be influenced by amino acids which are far from the central amino acid in sequence but not in space. Thus, a prediction scheme which considers the influence of amino acids which are, in sequence, far removed from the central amino acid of the window may improve our ability to successfully predict a protein's conformation.

Notation

For the purpose of this paper, the set of proteins is assumed to be a set of sequences (strings) over an alphabet of twenty characters (different capital letters) that correspond to different amino acids. With each protein sequence of length n we associate a sequence of secondary structure descriptors of the same length. The structure descriptors take three values, h, e, c, that correspond to α-helix, β-sheet and coil. That is, if we have a subsequence hh...h in positions i, i+1, ..., i+L, it is assumed that the protein sequence in those positions folded as a helix. The classification problem is typically stated as follows. Given a protein sequence of length n, generate a sequence of structure predictions of length n which describes the secondary structure of the protein sequence. Almost without exception, all previous approaches to the problem have used the following approach. The classifier receives a window of length 2K + 1 (typically K < 12) of amino acids. The classifier then predicts the secondary structure of the central amino acid (i.e., the amino acid in position K) in the window.

Machine Learning 317

Figure 1: Causal tree model (a chain of structure-segment nodes, each with an attached evidence-segment node).

A Probabilistic Framework for Protein Analysis

When making decisions in the presence of uncertainty, it is well-known that Bayes rule provides an optimal decision procedure, assuming we are given all prior and conditional probabilities. There are two major difficulties with using the approach in practice. The problem of reasoning in general Bayes networks is NP-complete, and we often do not have accurate estimates of the probabilities. However, it is known that when the structure of the network has a special form it is possible to perform a complete probabilistic analysis efficiently. In this section we show how to model probabilistic analysis of the structure of protein sequences as belief propagation in causal trees. In the full version of the paper we also describe how we dealt with problems such as undersampling and regularization. The general schema we advocate has the following form. The set of nodes in the network are either protein-structure nodes (PS-nodes) or evidence nodes (E-nodes). Each PS-node in the network is a discrete random variable X_i that can take values which correspond to descriptors of secondary structure, i.e., segments of h's, e's and c's.
With each such node we associate an evidence node that again can assume any of a set of discrete values. Typically, an evidence node would correspond to an occurrence of a particular subsequence of amino acids at a particular location in the protein. With each edge in the network we will associate a matrix of conditional probabilities. The simplest possible example of a network is given in Figure 1. We assume that all conditional dependencies are represented by a causal tree. This assumption violates some of our knowledge of the real-world problem, but provides an approximation that allows us to perform an efficient computation. For an exact definition of a causal tree see Pearl [Pearl, 1988].

Protein Modeling Using Causal Networks

As mentioned above, the network is comprised of a set of protein-structure nodes and a set of evidence nodes. Protein-structure nodes are finite strings over the alphabet {h, e, c}. For example, the string hhhhhh is a string of six residues in an α-helical conformation, while eecc is a string of two residues in a β-sheet conformation followed by two residues folded as a coil. Evidence nodes are nodes that contain information about a particular region of the protein. Thus, the main idea is to represent physical and statistical rules in the form of a probabilistic network. We note that the main point of this paper is advocating the framework of causal networks as an experimental tool for molecular biology applications rather than focusing on a particular network. The framework allows us flexibility to test causal theories by orienting edges in the causal network.

Figure 2: Example of causal tree model using pairs, showing protein segment GSAT with corresponding secondary structure cchh.

For our initial experiments we have chosen the simplest possible models.
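The pair encoding of Figure 2 is easy to reproduce. The function below is an illustrative sketch (the name and data are ours), breaking a sequence and its structure string into aligned overlapping pairs:

```python
# Illustrative sketch of the pair encoding in Figure 2: overlapping
# amino-acid pairs serve as evidence nodes, and the aligned pairs of
# structure labels (h/e/c) are the corresponding PS-node values.

def pair_nodes(sequence, structure):
    assert len(sequence) == len(structure)
    return [(sequence[i:i + 2], structure[i:i + 2])
            for i in range(len(sequence) - 1)]

print(pair_nodes("GSAT", "cchh"))
# [('GS', 'cc'), ('SA', 'ch'), ('AT', 'hh')]
```

This reproduces the example in the text: evidence nodes GS, SA, AT with PS-node values cc, ch, hh.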
In this paper we describe two that we feel are particularly important: a classical Hid- den Markov Model using the Viterbi algorithm and causal trees using Pearl’s belief updating. We shall show that the second approach is better and matches in accuracy other methods that have a less explicitly quantitative semantics. In our first set of experiments we converged on the following model that seems to match in performance many existing approaches. The network looks like a set of PS-nodes connected as a chain. To each such node we connect a single evidence node. In our experiments the PS-nodes are strings of length two or three over the alphabet {h, e, c} and the evidence nodes are strings of the same length over the set of amino acids. The following example clarifies our representation. Assume we have a string of amino acids GSAT. We model the string as a network comprised of three evidence nodes GS, SA, AT and three PS-nodes. The network is shown in Figure 2. A correct prediction will assign the values cc, ch, and hh to the PS-nodes as shown in the figure. Let X0,X1,... , X, be a set of PS-nodes connected as in Figure 1. Generally, speaking the distribution for the variable X; in the causal network as below can be computed using the following formulae. Let ex = ei, ei+l, - . . , e, denote the set of evidence nodes to ihe 318 Delcher M * . . . . . . . . . . . k=O L=l k=2 . . . k=n-I k=n State Time Figure 3: Modelling the Viterbi algorithm as a shortest path problem. right of Xi, and let ef;, = el,ea,...,ei- 1 be the set of evidence nodes to the left of Xi. By the assumption of independence explicit in the network we have Thus, P(Xi /L-I, e&> = P(Xi 1%I) P(Xi lef;, , e;,) = aP(e;, IX)P(Xi I&) where CY is some normalizing constant. For length con- sideration we will not describe the algorithm to com- pute the probabilities. The reader is referred to Pearl for a detailed description [Pearl, 1988]. 
Pearl gives an efficient procedure to compute the belief distribution of every node in such a tree. Most importantly, this procedure operates by a simple propagation mechanism that runs in linear time.

Protein Modeling Using the Viterbi Algorithm

In this section we describe an alternative model for prediction. This model has been heavily used in speech understanding systems, and indeed was suggested to us by Kai-Fu Lee, whose system using similar ideas achieves remarkable performance on speaker-independent continuous speech understanding. We implemented the Viterbi algorithm and compared its performance to the method outlined above. We briefly describe the method here, following the discussion by Forney [Forney, 1973].

We assume a Markov process which is characterized by a finite set of state transitions. That is, we assume the process at time k can be described by a random variable X_k that assumes a discrete number of values (states) 1, ..., M. The process is Markov, i.e., P(X_{k+1} | X_0, ..., X_k) = P(X_{k+1} | X_k). We denote the process by the sequence X = X_0, ..., X_n. We are given a set of observations Z = Z_0, ..., Z_{n-1} such that Z_i depends only on the transition T_i = (X_{i+1}, X_i). Specifically, P(Z | X) = ∏_{i=0}^{n-1} P(Z_i | T_i). The Viterbi algorithm is a solution to the maximum a posteriori estimation of X given Z; in other words, we are seeking a sequence of states X for which P(X | Z) is maximized.

An intuitive way to understand the problem is in graph-theoretic terms. We build an n-level graph that contains nM nodes (see Figure 3). With each transition we associate an edge. Thus, any sequence of states has a corresponding path in the graph. Given the set of observations Z, with any path in the graph we associate a length L = -ln P(X, Z). We are seeking a shortest path in the graph. However, since

P(X, Z) = ∏_{k=0}^{n-1} P(X_{k+1} | X_k) P(Z_k | X_{k+1}, X_k),

if we define

λ(T_k) = -ln P(X_{k+1} | X_k) - ln P(Z_k | T_k)

we obtain that -ln P(X, Z) = Σ_{k=0}^{n-1} λ(T_k).
Now we can compute the shortest path through this graph by a standard application of shortest-path algorithms specialized to directed acyclic graphs. For each time step i we simply maintain M paths, which are the shortest paths to each of the possible states we could be in at time i. To extend the paths to time step i+1 we simply compute the lengths of all the paths extended by one time unit and maintain the shortest path to each one of the M possible states at time i+1.

Our experimentation with the Viterbi algorithm was completed in Spring 1992. We recently learned that David Haussler [Haussler et al., 1992] and his group suggested the Viterbi algorithm framework for protein analysis as well. They experimented on a very different problem and also obtained interesting results. We document the performance of Viterbi on our problem even though, as described below, the causal-tree method outperformed Viterbi. The difference between the methods is that the Viterbi algorithm predicts the most likely complete sequence of structure elements, whereas the causal-tree method makes separate predictions about individual PS-nodes.

Trial | Positions | Correct (Pairs) | Correct (Triples)
1 | 2339 | 1518 (64.9%) | 1469 (62.8%)
2 | 2624 | 1567 (59.7%) | 1518 (57.9%)
3 | 2488 | 1479 (59.5%) | 1435 (57.7%)
4 | 2537 | 1666 (65.7%) | 1604 (63.2%)
5 | 2352 | 1437 (61.1%) | 1392 (59.2%)
6 | 2450 | 1510 (61.6%) | 1470 (60.0%)
7 | 2392 | 1489 (62.3%) | 1447 (60.5%)
8 | 2621 | 1656 (63.2%) | 1601 (61.1%)
All | 19803 | 12322 (62.2%) | 11936 (60.3%)

Machine Learning 319

Experiments

The experiments we conducted were performed to allow us to make a direct comparison with previous methods that have been applied to this problem. We followed the methodology described in [Zhang et al., 1993; Maclin and Shavlik, 1992], which did a thorough cross-validated testing of various classifiers for this problem.
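The trellis dynamic program described above translates directly into code: every edge gets length λ(T_k), and the shortest path is extended one level at a time. The sketch below simplifies the paper's transition-dependent observations to per-state emissions, with invented probabilities; state and observation names are illustrative only:

```python
import math

def viterbi(obs, states, start, trans, emit):
    """Return the state sequence minimizing the summed -log path length."""
    # best[s] = shortest-path length ending in state s at the current level
    best = {s: -math.log(start[s]) - math.log(emit[s][obs[0]])
            for s in states}
    back = []                      # back-pointers, one dict per trellis level
    for z in obs[1:]:
        prev, new, choice = best, {}, {}
        for s in states:
            pred, length = min(((p, prev[p] - math.log(trans[p][s]))
                                for p in states), key=lambda t: t[1])
            new[s] = length - math.log(emit[s][z])
            choice[s] = pred
        best = new
        back.append(choice)
    last = min(best, key=best.get)         # endpoint of the shortest path
    path = [last]
    for choice in reversed(back):
        path.append(choice[path[-1]])
    return path[::-1]
```

At each level only one shortest path per state is kept, so the work is O(nM²) overall, matching the M-path bookkeeping described above.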
Since it is known that two proteins that are homologous (similar in chemical structure) tend to fold similarly, and therefore generate prediction accuracies that are often overly optimistic, it is important to document the precise degree of homology between the training set and the testing set. In our experiments the set of proteins was divided into eight subsets. We perform eight experiments in which we train the network on seven subsets and then predict on the remaining subset. The accuracies are averaged over all eight experiments. This methodology is referred to as k-way cross-validation.

Table 1: Causal tree results for 8-way cross-validation using segments of length 2 and length 3.

Table 2: Overall prediction accuracies for various prediction methods. Comparative method results from [Maclin and Shavlik, 1992].

Method | Total | Helix | Sheet | Coil
Chou-Fasman | 57.3% | 31.7% | 36.9% | 76.1%
ANN | 61.8% | 43.6% | 18.6% | 86.3%
ANN w/ state | 61.7% | 39.2% | 24.2% | 86.0%
FSKBANN | 63.4% | 45.9% | 35.1% | 81.9%
FSKBANN w/o state | 62.2% | 42.4% | 26.3% | 84.6%
Viterbi | 58.5% | 48.3% | 47.0% | 69.3%
Chain-Pairs | 62.2% | 55.9% | 51.7% | 67.4%
Chain-Triples | 60.3% | 53.0% | 45.5% | 70.8%

Experimental Results

We report the accuracy of prediction on individual residues and also on predicting runs of helices and sheets. Table 1 shows the prediction accuracy of our methods using the causal network method for each one of the eight trials in our 8-way cross-validation study. In the pairs column we document the performance of the causal network described earlier using PS-nodes and E-nodes that represent protein segments of length 2. The triples column gives the results for the same network with segments of length 3. The decrease in accuracy for triples is a result of undersampling. Table 2 shows the performance of our method in predicting the secondary structure at each amino acid position in comparison with other methods.
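The 8-way cross-validation protocol described above is generic; a sketch, with placeholder `train` and `predict` functions standing in for the causal-network classifier, is:

```python
def k_fold_accuracy(examples, labels, train, predict, k=8):
    """Train on k-1 subsets, test on the held-out one, average over folds."""
    n = len(examples)
    folds = [list(range(i, n, k)) for i in range(k)]   # k disjoint subsets
    correct = total = 0
    for held_out in folds:
        held = set(held_out)
        train_x = [examples[i] for i in range(n) if i not in held]
        train_y = [labels[i] for i in range(n) if i not in held]
        model = train(train_x, train_y)
        for i in held_out:
            correct += predict(model, examples[i]) == labels[i]
            total += 1
    return correct / total
```

The per-fold accuracies of Table 1 correspond to the inner loop; the "All" row is the pooled accuracy returned here. Note that a real protocol for proteins would assign whole proteins (not residues) to folds, to control homology between training and test sets.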
In Table 3 we report the performance of our method on predicting runs of helices and sheets and compare those with other methods that were applied to this problem. To summarize, our method yields performance comparable to other methods on predicting runs of helices and sheets. It seems to have particularly high accuracy in predicting individual helices.

Discussion

In this paper we have proposed causal networks as a general and efficient framework for data analysis in molecular biology. We have reported our initial experiments applying this approach to the problem of protein secondary structure prediction. One of the main advantages of the probabilistic approach we described here is our ability to perform detailed experiments where we can experiment with different causal models. We can easily perform local substitutions (mutations) and measure (probabilistically) their effect on the global structure. Window-based methods do not support such experimentation as readily. Our method is efficient both during training and during prediction, which is important in order to be able to perform many experiments with different networks.

Our initial experiments have been done on the simplest possible models, where we ignore many known dependencies. For example, it is known that in α-helices hydrogen bonds are formed between every ith and (i+4)th residue in a chain. This can be incorporated in our model without losing efficiency.
We also can improve our method by incorporating additional correlations among particular amino acids, as in [Gibrat et al., 1987]. We achieve prediction accuracy similar to many other methods such as neural networks. We are confident that with sufficient fine tuning we can improve our results to equal the best methods. Typically, the current best prediction methods involve complex hybrid methods that compute a weighted vote among several methods using a combiner that learns the weights. E.g., the hybrid method described by [Zhang et al., 1993] combines neural networks, a statistical method and memory-based reasoning in a single system and achieves an overall accuracy of 66.4%.

Table 3: Precision of run (segment) predictions. Comparative method results from [Maclin and Shavlik, 1992].

Description | Chain-Pair | FSKBANN | ANN | Chou-Fasman
Average length of predicted helix run | 9.4 | 8.52 | 7.79 | 8.00
Average length of actual helix run | 10.3 | - | - | -
Percentage of actual helix runs overlapped by predicted helix runs | 66% | 67% | 70% | 56%
Percentage of predicted helix runs that overlap actual helix runs | 62% | 66% | 61% | 64%
Average length of predicted sheet run | 3.8 | 3.80 | 2.83 | 6.02
Average length of actual sheet run | 5.0 | - | - | -
Percentage of actual sheet runs overlapped by predicted sheet runs | 56% | 54% | 35% | 46%
Percentage of predicted sheet runs that overlap actual sheet runs | 60% | 63% | 63% | 56%

Bayesian classification is a well-studied area and has been applied frequently to many domains such as pattern recognition, speech understanding and others. Statistical methods also have been used for protein structure prediction. What characterizes our approach is its simplicity and the explicit modeling of causal links. We believe that for scientific data analysis it is particularly important to develop tools that clearly display all the causal independence assumptions.
Causal networks provide a very convenient medium for the scientist to experiment with different empirical models and obtain possibly important insights into a problem.

References

Chou, P. and Fasman, G. 1978. Prediction of the secondary structure of proteins from their amino acid sequence. Advances in Enzymology 47:45-148.

Cost, S. and Salzberg, S. 1993. A weighted nearest neighbor algorithm for learning with symbolic features. Machine Learning 10(1):57-78.

Forney, G. D. 1973. The Viterbi algorithm. Proceedings of the IEEE 61(3):268-278.

Garnier, J.; Osguthorpe, D.; and Robson, B. 1978. Analysis of the accuracy and implication of simple methods for predicting the secondary structure of globular proteins. Journal of Molecular Biology 120:97-120.

Gibrat, J.-F.; Garnier, J.; and Robson, B. 1987. Further developments of protein secondary structure prediction using information theory. Journal of Molecular Biology 198:425-443.

Haussler, D.; Krogh, A.; Mian, S.; and Sjolander, K. 1992. Protein modeling using hidden Markov models. Technical Report UCSC-CRL-92-23, University of California, Santa Cruz.

Holley, L. and Karplus, M. 1989. Protein secondary structure prediction with a neural network. In Proceedings of the National Academy of Sciences USA, volume 86. 152-156.

Maclin, R. and Shavlik, J. 1992. Refinement of approximate domain theories by knowledge-based neural networks. In Proceedings Tenth National Conference on Artificial Intelligence. 165-170.

Muggleton, S. and King, R. 1991. Predicting protein secondary structure using inductive logic programming. Technical report, Turing Institute, University of Glasgow, Scotland.

Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann.

Qian, N. and Sejnowski, T. 1988. Predicting the secondary structure of globular proteins using neural network models. Journal of Molecular Biology 202:865-884.

Zhang, X.; Mesirov, J.; and Waltz, D. 1993.
A hybrid system for protein secondary structure prediction. Molecular Biology (to appear).
OC1: Randomized Induction of Oblique Decision Trees

Sreerama Murthy¹, Simon Kasif¹, Steven Salzberg¹, Richard Beigel²
¹Dept. of Computer Science, Johns Hopkins University, Baltimore, MD 21218
²Dept. of Computer Science, Yale University, New Haven, CT 06520
¹lastname@cs.jhu.edu, ²beigel-richard@cs.yale.edu

Abstract

This paper introduces OC1, a new algorithm for generating multivariate decision trees. Multivariate trees classify examples by testing linear combinations of the features at each non-leaf node of the tree. Each test is equivalent to a hyperplane at an oblique orientation to the axes. Because of the computational intractability of finding an optimal orientation for these hyperplanes, heuristic methods must be used to produce good trees. This paper explores a new method that combines deterministic and randomized procedures to search for a good tree. Experiments on several different real-world data sets demonstrate that the method consistently finds much smaller trees than comparable methods using univariate tests. In addition, the accuracy of the trees found with our method matches or exceeds the best results of other machine learning methods.

1 Introduction

Decision trees (DTs) have been used quite extensively in the machine learning literature for a wide range of classification problems. Many variants of DT algorithms have been introduced, and a number of different goodness-of-split criteria have been explored. Most of the research to date on decision tree algorithms has been restricted to either (1) examples with symbolic attribute values [Quinlan, 1986] or (2) univariate tests for numeric attributes [Breiman et al., 1984], [Quinlan, 1992]. Univariate tests compare the value of a single attribute to a constant; i.e., they are equivalent to partitioning a set of examples with an axis-parallel hyperplane.
Although Breiman et al [1984] suggested an elegant method for inducing multivariate linear decision trees, there has not been much activity in the development of such trees until very recently [Utgoff and Brodley, 1991], [Heath et al., 1992]. Because these trees use oblique hyperplanes to partition the data, we call them oblique decision trees.

This paper presents a new method for inducing oblique decision trees. As it constructs a tree, this method searches at each node for the best hyperplane to partition the data. Although most of the searching is deterministic hill-climbing, we have introduced randomization to determine the initial placement of a hyperplane and to escape from local minima. By limiting the number of random choices, the algorithm is guaranteed to spend only polynomial time at each node in the tree. In addition, randomization itself has produced several benefits. Our experiments indicate that it successfully avoids local minima in many cases. Randomization also allows the algorithm to produce many different trees for the same data set. This offers the possibility of a new family of classifiers: k-decision-tree algorithms, in which an example is classified by the majority vote of k trees (see [Heath, 1992]).

Two other methods for generating oblique trees that have been introduced recently are perceptron trees [Utgoff and Brodley, 1991] and simulated annealing (SADT) [Heath et al., 1992]. The former shows that much smaller trees can be induced when oblique hyperplanes are used. However, theirs is a deterministic algorithm, and Heath [1992] shows that the problem of finding an optimal oblique tree is NP-Complete.³ This work also introduces a completely randomized technique for finding good hyperplanes. The motivation for randomization is given in [Heath et al., 1992], but the idea can briefly be explained as follows. Consider the hyperplane associated with the root of a decision tree.
The optimal (smallest) decision tree may use a non-optimal decision plane at the root. Obviously this is true for each node of the tree; this observation suggests a randomized strategy where we try to construct the smallest tree using several candidate hyperplanes at each node. This idea can be facilitated by using a randomized algorithm to find good separating hyperplanes. That is, if a randomized algorithm is executed repeatedly, it will find different hyperplanes each time. [Heath et al., 1992] use an algorithm based on simulated annealing to generate good splits. Our method is also randomized, but it includes a substantial directed search component that allows it to run much faster. In our experiments, our method ran much faster than SADT without sacrificing accuracy in the resulting classifier.

³More precisely, Heath [Heath, 1992] proves that the problem of finding an optimal oblique split is NP-Complete, using the number of misclassified examples as the error measure.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

The algorithmic content of this paper focuses on the question of how to partition a given sample space into homogeneous regions. A complete description of any DT building method should also include discussion of its choices regarding the pruning strategies and the stop-splitting criteria. However, we do not address these issues here, because our choices for them are quite straightforward and standard in the literature. We stop splitting when the sample space associated with the current node has zero impurity (see Section 2.4). The only pruning done by our method consists of cutting off subtrees at nodes whose impurity measure is less than a certain threshold. For a good review and comparison of pruning strategies, see [Mingers, 1989] and [Quinlan, 1992].
The problem of partitioning the sample space involves the following related issues:

- restrictions on the location and orientation of hyperplanes,
- goodness measures for evaluating a split,
- strategies to search through the space of possible hyperplanes for the best hyperplane, and
- methods for choosing a hyperplane from which the above search begins.

These issues are fundamental to the design of a DT algorithm [Breiman et al., 1984], and many existing DT algorithms can be classified on the basis of how they make these choices. Section 2 elaborates our algorithm with respect to each of these issues. Section 3 presents the results of using our method to classify several real-world data sets, and compares our results to those of some existing methods. Section 4 summarizes the lessons learned from these experiments.

2 The OC1 Algorithm

In this section we discuss details of our oblique decision tree learning method. We call this algorithm OC1, for Oblique Classifier 1. OC1 imposes no restrictions on the orientation of the hyperplanes. This is the main difference between OC1 and methods such as ID3 and CART, which use only axis-parallel hyperplanes. However, OC1 cannot distinguish between two hyperplanes that have identical sets of points on both sides. In other words, if the sample space consists of n examples in d dimensions (d attributes), then our algorithm recognizes only (n choose d) distinct hyperplanes.

The initial hyperplane at each node in the decision tree is chosen randomly by OC1. Even if such a randomly placed hyperplane has a very poor location, it is usually improved greatly in the first few perturbations.

2.1 Search Strategies

The strategy of searching through the space of possible hyperplanes is defined by the procedure that perturbs the current hyperplane into a new location. As there are an exponential number, (n choose d), of possible hyperplane locations, any procedure that simply enumerates all of them will be unreasonably costly.
The two main alternatives considered in the past have been to use a non-deterministic search procedure, as in SADT [Heath et al., 1992], or to use a heuristic deterministic procedure, as in CART [Breiman et al., 1984]. OC1 combines these two approaches, using heuristic search until it finds a local minimum, and then using a non-deterministic search step to get out of the local minimum.

We will start by explaining how we perturb a hyperplane to split the sample space P at a node of a DT. P contains n examples, each with d attributes. Each example belongs to a particular category. The equation of the current hyperplane H can be written:

Σ_{i=1}^{d} a_i x_i + a_{d+1} = 0

Let P_j = (x_{j1}, x_{j2}, ..., x_{jd}) be the jth example from the sample space P. If we substitute P_j into the equation for H, we get Σ_{i=1}^{d} a_i x_{ji} + a_{d+1} = V_j, where the sign of V_j tells us whether the point P_j is above or below the hyperplane H. If H splits the sample space P perfectly, then all points belonging to the same category in P will have the same sign, i.e., sign(V_i) = sign(V_j) iff category(P_i) = category(P_j).

OC1 perturbs the coefficients of H one at a time. If we consider the coefficient a_m as a variable, and all other coefficients as constants, V_j can be viewed as a function of a_m. If U_j is defined as

U_j = (a_m x_{jm} - V_j) / x_{jm}    (1)

then the point P_j is above H if a_m > U_j, and below otherwise. Thus, by fixing the values of the coefficients a_1, ..., a_{d+1}, except a_m, we can obtain n constraints on the value of a_m, using the n points in the set P (assuming no degeneracies).

The problem then is to find a value for a_m that satisfies as many of these constraints as possible. (If all the constraints are satisfied, then we have a perfect split.) This problem is easy to solve; in fact, it is just an axis-parallel split in 1-D. The value a_m' obtained by solving this one-dimensional problem is a good candidate to be used as the new value of the coefficient a_m.
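A sketch of this deterministic perturbation step, assuming 0/1 categories and counting violated constraints as the goodness criterion (any of the paper's impurity measures could be substituted); the sign handling for x_jm < 0 is an assumption the paper's Eq. 1 glosses over:

```python
def perturb_coefficient(points, labels, a, m):
    """Perturb coefficient a_m of hyperplane a (a[d] plays the role of a_{d+1})."""
    d = len(points[0])
    U = []
    for P, y in zip(points, labels):
        V = sum(a[i] * P[i] for i in range(d)) + a[d]
        if P[m] != 0.0:                       # Eq. 1 is undefined when x_jm = 0
            U.append(((a[m] * P[m] - V) / P[m], y, P[m] > 0))
    U.sort()
    best_err, best_val = len(points) + 1, a[m]
    for t in range(len(U) - 1):               # midpoints between sorted thresholds
        cand = (U[t][0] + U[t + 1][0]) / 2.0
        err = 0
        for u, y, positive in U:
            above = cand > u if positive else cand < u
            err += (y == 1) != above          # label/side disagreements
        err = min(err, len(U) - err)          # either side may hold class 1
        if err < best_err:
            best_err, best_val = err, cand
    new_a = list(a)
    new_a[m] = best_val
    return new_a, best_err
```

Each example contributes one threshold U_j, so the candidate split only needs to be evaluated at midpoints between consecutive sorted thresholds, exactly the 1-D axis-parallel split referred to above.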
Let H1 be the hyperplane obtained by changing a_m to a_m' in H. If H has better (lower) impurity than H1, then H1 is discarded. If H1 has lower impurity, H1 becomes the new location of the hyperplane. If H and H1 have identical impurities, and different locations, then H1 is accepted with probability stag_prob.

Perturb(H, m)
{
    for j = 1 to n
        compute U_j (Eq. 1)
    sort U_1 ... U_n in nondecreasing order
    a_m' = best univariate split of the sorted U_j's
    H1 = result of substituting a_m' for a_m in H
    if (impurity(H1) < impurity(H))
        { a_m = a_m'; stagnant = 0 }
    else if (impurity(H1) = impurity(H))
        { a_m = a_m' with probability stag_prob = e^(-stagnant)
          stagnant = stagnant + 1 }
}

Figure 1: Perturbation Algorithm

The parameter stag_prob, denoting "stagnation probability", is the probability that a hyperplane is perturbed to a location that does not change the impurity measure. To prevent the impurity from remaining stagnant for a long time, stag_prob decreases exponentially with the number of "stagnant" perturbations. It is reset to 1 every time the global impurity measure is improved. Pseudocode for our perturbation procedure is given in Fig. 1.

Now that we have a method for locally improving a coefficient of a hyperplane, we need a method for deciding which of the d+1 coefficients to pick for perturbation. We experimented with three different orders of coefficient perturbation, which we labelled Seq, Best, and R-50:

Seq: Repeat until none of the coefficient values is modified in the for loop:
    For i = 1 to d+1, Perturb(H, i)

Best: Repeat until coefficient m remains unmodified:
    m = coefficient which, when perturbed, results in the maximum improvement of the impurity measure
    Perturb(H, m)

R-50: Repeat a fixed number of times (50 in our experiments):
    m = random integer between 1 and d+1
    Perturb(H, m)

As will be shown in our experiments (Section 3), the order of perturbation of the coefficients does not affect the classification accuracy as much as other parameters, especially the number of iterations (see Section 2.2.2). But if the number of iterations and the impurity measure are held constant, the order can have a significant effect on the performance of the method. In our experiments, though, none of these orders was uniformly better than any other.

A sequence of perturbations stops when the split reaches a local minimum (which may also be a global minimum) for the impurity measure. Our method uses randomization to try to jump out of local minima. This randomization technique is described next.

2.2 Local Minima

A big problem in searching for the best hyperplane (and in many other optimization problems, as well) is that of local minima. The search process is said to have reached a local minimum if no perturbation of the current hyperplane, as suggested by the perturbation algorithm, decreases the impurity measure, and the current hyperplane does not globally minimize the impurity measure.

We have implemented two ways of dealing with local minima: perturbing the hyperplane in a random direction, and re-running the perturbation algorithm with additional initial hyperplanes. While the second technique is a variant of the standard technique of multiple local searches, the first technique of perturbing the hyperplane in a random direction is novel in the context of decision tree algorithms. Notably, moving the hyperplane in a random direction rather than modifying one of the coefficients one at a time does not modify the time complexity of the algorithm.

2.2.1 Perturb coefficients in a random direction

When a hyperplane H = Σ_{i=1}^{d} a_i x_i + a_{d+1} cannot be improved by deterministic perturbation, we do the following.
Let R = (r_1, r_2, ..., r_{d+1}) be a random vector, and let α be the amount by which we want to perturb H in the direction R. That is, let

H1 = Σ_{i=1}^{d} (a_i + α r_i) x_i + (a_{d+1} + α r_{d+1})

be the suggested perturbation of H. The only variable in the equation of H1 is α. Therefore each of the n examples in P, depending on its category, imposes a constraint on the value of α (see Section 2.1). Use the perturbation algorithm in Fig. 1 to compute the best value of α.

If the hyperplane H1 obtained thus improves the impurity measure, accept the perturbation and continue with the coefficient perturbation procedure. Else stop and output H as the best possible split of P.

We found in our experiments that a single random perturbation, when used at a local minimum, proves to be very helpful. Classification accuracy improved for every one of our data sets when such perturbations were made.

2.2.2 Choosing multiple initial hyperplanes

Because most of the steps of our perturbation algorithm are deterministic, the initial randomly-chosen hyperplane determines which local minimum will be encountered first. Perturbing a single initial hyperplane deterministically thus is not likely to lead to the best split of a given dataset. In cases where the random perturbation method may have failed to escape from local minima, we thought it would be useful to start afresh, with a new initial hyperplane.

We use the word iteration to denote one run of the perturbation algorithm, at one node of the decision tree, using one random initial hyperplane; i.e., one attempt using either Seq, Best, or R-50 to cycle through and perturb the coefficients of the hyperplane. One iteration also includes perturbing the coefficients randomly once at each local minimum, as described in Section 2.2.1. One of the input parameters to OC1 tells it how many iterations to use. If it uses more than one iteration, then it always saves the best hyperplane found thus far.
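The random-direction jump of Section 2.2.1 reduces, like the single-coefficient case, to a one-dimensional search over α: each example P_j crosses H1 at α = -V_j/W_j, where W_j is the value of the direction R at P_j. A sketch, with a caller-supplied error function standing in for the impurity measure:

```python
import random

def random_jump(points, labels, a, error_of, rng=random):
    """One random-direction perturbation H1 = sum_i (a_i + alpha*r_i) x_i."""
    d = len(points[0])
    r = [rng.uniform(-1.0, 1.0) for _ in range(d + 1)]   # random direction R
    thresholds = []
    for P in points:
        V = sum(a[i] * P[i] for i in range(d)) + a[d]
        W = sum(r[i] * P[i] for i in range(d)) + r[d]
        if W != 0.0:
            thresholds.append(-V / W)    # alpha at which P crosses H1
    thresholds.sort()
    # candidate alphas: no move, plus midpoints between consecutive thresholds
    candidates = [0.0] + [(s + t) / 2.0
                          for s, t in zip(thresholds, thresholds[1:])]
    best = None
    for alpha in candidates:
        h1 = [a[i] + alpha * r[i] for i in range(d + 1)]
        e = error_of(h1, points, labels)
        if best is None or e < best[0]:
            best = (e, h1)
    return best[1], best[0]
```

Because α = 0 (the original hyperplane) is always among the candidates, the returned hyperplane is never worse than H under the supplied error function, matching the accept-only-if-improved rule above.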
In all our experiments, the classification accuracies increased with more than one iteration. Accuracy seemed to increase up to a point and then level off (after about 20-50 iterations, depending on the domain). Our conclusion was that the use of multiple initial hyperplanes substantially improved the quality of the best tree found.

2.3 Comparison to Breiman et al.'s method

Breiman et al [1984, pp. 171-173] suggested a method for inducing multivariate decision trees that used a perturbation algorithm similar to the deterministic hill-climbing method that OC1 uses. They too perturb a coefficient by calculating a quantity similar to U_j (Eq. 1) for each example in the data, and assign the new value of the coefficient to be equal to the best univariate split of the U's. In spite of this apparent similarity, OC1 is significantly different from the above algorithm for the following reasons.

Their algorithm does not use any randomization. They choose the best univariate split of the dataset as their only choice of an initial hyperplane. When a local minimum is encountered, their deterministic algorithm halts.

Their algorithm modifies one coefficient of the hyperplane at a time. One step of our algorithm can modify several coefficients at once.

Breiman et al. report no upper bound on the time it takes for a hyperplane to reach a (perhaps locally) optimal position. In contrast, our procedure only accepts a limited number of perturbations. The number of changes that reduce the impurity is limited to n, the number of examples. The number of changes that leave impurity the same is limited by the parameter stag_prob (Section 2.1). Due to these restrictions, OC1 is guaranteed to spend only polynomial time on each hyperplane in a tree.⁴

⁴The theoretical bound on the amount of time OC1 spends on perturbing a hyperplane is O(dn² log n).
To guarantee this bound, we have to reduce stag_prob to zero after a fixed number of changes, rather than reducing it exponentially to zero. The latter method leaves an exponentially small chance that a large number of perturbations will be permitted. In practice, however, hyperplanes were never perturbed more than a small number (< 12) of times. The expected running time of OC1 for perturbing a hyperplane appears to be O(kn log n), where k is a small constant.

In addition, the procedure in [Breiman et al., 1984] is at best an outline: though the idea is elegant, many details were not worked out, and few experiments were performed. Thus, even without the significant changes to the algorithm we have introduced, there was a need for much more experimental work on this algorithm.

2.4 Goodness of a hyperplane

Our algorithm attempts to divide the d-dimensional attribute space into homogeneous regions, i.e., into regions that contain examples from just one category. (The training set P may contain two or more categories.) The goal of each new node in the tree is to split the sample space so as to reduce the "impurity" of the sample space. Our algorithm can use any measure of impurity, and in our experiments, we considered four such measures: information gain [Quinlan, 1986], max minority, sum minority, and sum of impurity (all three defined in [Heath, 1992]). Any of these measures seems to work well for our algorithm, and the classification accuracy did not vary significantly as a function of the goodness measure used. More details of the comparisons are given in Section 3 and Table 2.

2.4.1 Three new impurity measures

The impurity measures max minority, sum minority, and sum of impurity were all very recently introduced in the context of decision trees. We will therefore briefly define them here. For detailed comparisons, see [Heath, 1992]. For a discussion of other impurity measures, see [Fayyad and Irani, 1992] and [Quinlan and Rivest, 1989].

Consider the two half spaces formed by splitting a sample space with a hyperplane H, and call these two spaces L and R (left and right). Assume that there are only two classes of examples, though this definition is easily extended to multiple categories. If all the examples in a space fall into the same category, that space is said to be homogeneous. The examples in any space can be divided into two sets, A and B, according to their class labels, and the size of the smaller of those two sets is the minority. The max minority (MM) measure of H is equal to the larger of the two minorities in L and R. The sum minority measure (SM) of H is equal to the sum of the minorities in both L and R.

The sum of impurity measure requires us to give the two classes numeric values, 0 and 1. Let P_1, ..., P_L be the points (examples) on the left side of H, and let C_{P_i} be the category of the point P_i. We can define the average class avg of L as

avg = (Σ_{i=1}^{L} C_{P_i}) / L

The impurity of L is then defined as Σ_{i=1}^{L} (C_{P_i} - avg)². The sum of impurity (SI) of H is equal to the sum of the impurity measures

Table 1: Comparisons with other methods

Data | Method | Accuracy (%) | Tree Size | Impurity Measure
Star Galaxy (Bright) | OC1 | 99.2 | 15.6 | SI
Star Galaxy (Bright) | CSADT | 99.1 | 18.4 | SI
Star Galaxy (Bright) | ID3 | 99.1 | 44.3 | SI
Star Galaxy (Bright) | 1-NN | 98.8 | - | -
Star Galaxy (Bright) | BP | 99.8 | - | -
Star Galaxy (Dim) | OC1 | 95.8 | 36.0 | SI
Star Galaxy (Dim) | 1-NN | 95.1 | - | -
Star Galaxy (Dim) | BP | 92.0 | - | -
IRIS | OC1 | 98.0 | 3.0 | SI
IRIS | CSADT | 94.7 | 4.2 | SM
IRIS | ID3 | 94.7 | 10.0 | MM
IRIS | 1-NN | 96.0 | - | -
IRIS | BP | 96.7 | - | -
Cancer | OC1 | 97.4 | 2.4 | SI
Cancer | CSADT | 94.9 | 4.6 | SM
Cancer | ID3 | 90.6 | 36.1 | SI
Cancer | 1-NN | 96.0 | - | -
We built decision trees for each data set using various combinations of program parameters (such as the number of iterations, the order of coefficient perturbation, the impurity measure, and the impurity threshold at which a node of the tree may be pruned). The results in Table 1 correspond to the trees with the highest classification accuracies.

The results for the CSADT and ID3 methods are taken from Heath [Heath, 1992]. CSADT is an alternative approach to building oblique decision trees that uses simulated annealing to find good hyperplanes. These prior results used data sets identical to the ones used here, although the partitioning into training and test partitions may have been different. In each case, though, we cite the best published result for the algorithm used in the comparison.

Star/galaxy discrimination. Two of our data sets came from a large set of astronomical images collected by Odewahn et al. [Odewahn et al., 1992]. In their study, they used these images to train perceptrons and back propagation (BP) networks to differentiate between stars and galaxies. Each image is characterized by 14 real-valued attributes and one identifier, viz., "star" or "galaxy". The objects in the image were divided by Odewahn et al. into "bright" and "dim" data sets based on the image intensity values, where the "dim" images are inherently more difficult to classify. The bright set contains 3524 objects and the dim set contains 4652 objects.

Table 2: Effect of parameters on accuracy and DT size

  Iter | Imp. Meas. | Order | Prune Thresh. | Acc. (%) | Depth & Size
     1 |     SI     | R-50  |      10       |   96.4   |   3.0, 4.9
    10 |     SM     | Best  |       4       |   97.0   |   3.3, 4.3
    10 |     SM     | seq   |      10       |   96.6   |   2.3, 3.3
    20 |     SM     | R-50  |       8       |   96.8   |   3.1, 4.3
    50 |     MM     | Best  |       6       |   97.1   |   1.9, 2.8
   100 |     SI     | Best  |       8       |   96.9   |   1.9, 2.3
     1 |     MM     | seq   |       0       |   93.7   |   6.2, 19.6
     1 |     MM     | seq   |       2       |   93.8   |   4.9, 14.3
     1 |     MM     | seq   |      10       |   92.5   |   2.9, 5.6
     1 |     MM     | Best  |      10       |   89.2   |   3.9, 6.7
     1 |     MM     | R-50  |      10       |   92.3   |   2.8, 5.0

Heath [Heath, 1992] reports the results of applying the SADT and ID3 algorithms only to the bright images.
We ran OC1 on both the bright and dim images, and our results are shown in Table 1. The table compares our results with those of CSADT, ID3, 1-nearest-neighbor (1-NN), and back propagation on the bright images, and with 1-NN [Salzberg, 1992] and back propagation on the dim images.

Classifying irises. The iris data set has been extensively used both in statistics and for machine learning studies [Weiss and Kapouleas, 1989]. The data consists of 150 examples, where each example is described by four numerical attributes. There are 50 examples in each of three different categories. Weiss and Kapouleas [Weiss and Kapouleas, 1989] obtained accuracies of 96.7% and 96.0% on this data with back propagation and 1-NN, respectively.

Breast cancer diagnosis. A method for classifying using pairs of oblique hyperplanes was described in [Mangasarian et al., 1990]. This was applied to classify a set of 470 patients with breast cancer, where each example is characterized by nine numeric attributes plus the label, benign or malignant. The results of CSADT and ID3 are from Heath [Heath, 1992], and those of 1-NN are from Salzberg [Salzberg, 1991].

Table 2 shows how the OC1 algorithm's performance varies as we adjust the parameters described earlier. The table summarizes results from different trials using the cancer data. We ran similar experiments for all our data sets, but due to space constraints we show this table as representative. The most important parameter is the number of iterations; we consistently found better trees (smaller and more accurate) using 50 or more iterations. There was no significant correlation between pruning thresholds and accuracies, and the sum minority (SM) impurity measure almost always produced the smallest (though not always the most accurate) trees. We did not find any other significant sources of variation, either in the impurity measure or in the order of perturbing coefficients.
4 Conclusions

Our experiments seem to support the following conclusions:

- The use of multiple iterations, i.e., several different initial hyperplanes, substantially improves performance.
- The technique of perturbing the entire hyperplane in the direction of a randomly chosen vector is a good means of escaping from local minima.
- No impurity measure has an overall better performance than the other measures for OC1. The nature of the data determines which measure performs best.
- No particular order of coefficient perturbation is superior to all others.

One of our immediate next steps in the development of OC1 will be to use the training set to determine the program parameters (e.g., the number of iterations, the best impurity measure for a data set, and the order of perturbation).

The experiments reported here provide an important demonstration of the usefulness of oblique decision trees as classifiers. The OC1 algorithm produces remarkably small, accurate trees, and its computational requirements are quite modest. The small size of the trees makes them more useful as descriptions of the domains, and their accuracy provides a strong argument for their use as classifiers. At the very least, oblique decision trees should be used in conjunction with other methods to enhance the tools currently available for many classification problems.

Acknowledgements

Thanks to David Heath for helpful comments. S. Murthy and S. Salzberg were supported in part by the National Science Foundation under Grant IRI-9116843.

References

Breiman, L.; Friedman, J.H.; Olshen, R.A.; and Stone, C.J. 1984. Classification and Regression Trees. Wadsworth International Group.

Fayyad, U. and Irani, K. 1992. The attribute specification problem in decision tree generation. In Proceedings of AAAI-92, San Jose, CA. AAAI Press. 104-110.

Heath, D.; Kasif, S.; and Salzberg, S. 1992. Learning oblique decision trees. Technical report, Johns Hopkins University, Baltimore, MD.

Heath, D. 1992.
A Geometric Framework for Machine Learning. Ph.D. Dissertation, Johns Hopkins University, Baltimore, MD.

Mangasarian, O.; Setiono, R.; and Wolberg, W. 1990. Pattern recognition via linear programming: Theory and application to medical diagnosis. In SIAM Workshop on Optimization.

Mingers, J. 1989. An empirical comparison of pruning methods for decision tree induction. Machine Learning 4(2):227-243.

Odewahn, S.C.; Stockwell, E.B.; Pennington, R.L.; Humphreys, R.M.; and Zumach, W.A. 1992. Automated star/galaxy discrimination with neural networks. Astronomical Journal 103(1):318-331.

Quinlan, J.R. and Rivest, R.L. 1989. Inferring decision trees using the minimum description length principle. Information and Computation 80:227-248.

Quinlan, J.R. 1986. Induction of decision trees. Machine Learning 1(1):81-106.

Quinlan, J.R. 1992. C4.5: Programs for Machine Learning. Morgan Kaufmann.

Salzberg, S. 1991. Distance metrics for instance-based learning. In Methodologies for Intelligent Systems: 6th International Symposium, ISMIS '91. 399-408.

Salzberg, S. 1992. Combining learning and search to create good classifiers. Technical Report JHU-92/12, Johns Hopkins University, Baltimore, MD.

Utgoff, P.E. and Brodley, C.E. 1991. Linear machine decision trees. Technical Report 10, University of Massachusetts, Amherst, MA.

Weiss, S. and Kapouleas, I. 1989. An empirical comparison of pattern recognition, neural nets, and machine learning classification methods. In Proceedings of the Eleventh IJCAI, Detroit, MI. Morgan Kaufmann.
An Empirical Study of Greedy Local Search for Satisfiability Testing

Bart Selman and Henry Kautz
AI Principles Research Department
AT&T Bell Laboratories
Murray Hill, NJ 07974
{selman, kautz}@research.att.com

Abstract

GSAT is a randomized local search procedure for solving propositional satisfiability problems. GSAT can solve hard, randomly generated problems that are an order of magnitude larger than those that can be handled by more traditional approaches, such as the Davis-Putnam procedure. This paper presents the results of numerous experiments we have performed with GSAT, in order to improve our understanding of its capabilities and limitations. We first characterize the space traversed by GSAT. We will see that for nearly all problem classes we have encountered, the space consists of a steep descent followed by broad flat plateaus. We then compare GSAT with simulated annealing, and show how GSAT can be viewed as an efficient method for executing the low-temperature tail of an annealing schedule. Finally, we report on extensions to the basic GSAT procedure. We discuss two general, domain-independent extensions that dramatically improve GSAT's performance on structured problems: the use of clause weights, and a way to average in near-solutions when initializing the procedure before each try.

Introduction

Selman et al. (1992) introduced a randomized greedy local search procedure called GSAT for solving propositional satisfiability problems. Experiments showed that this procedure can be used to solve hard, randomly generated problems that are an order of magnitude larger than those that can be handled by more traditional approaches, such as the Davis-Putnam procedure or resolution. GSAT was also shown to perform well on propositional encodings of the N-queens problem, graph coloring problems, and Boolean induction problems. This paper presents the results of numerous experiments we have performed with GSAT, in order to improve our understanding of its capabilities and limitations.
We will begin with an exploration of the shape of the search space that GSAT typically encounters. We will see that for nearly all problem classes we have examined, the space consists of a steep descent followed by broad plateaus. We then compare GSAT with simulated annealing, and show how GSAT can be viewed as a very efficient method for executing the low-temperature tail of an annealing schedule.

A common criticism of randomized algorithms like GSAT is that they might not do as well on problems that have an intricate underlying structure as they do on randomly generated problems. Based on our understanding of the shape of GSAT's search space, we developed two general, domain-independent extensions that dramatically improve its performance: the use of clause weights, and a way to average in near-solutions when initializing the procedure before each try. We will also describe other local search heuristics which appear promising, but did not improve performance on our test problems.

This paper is unabashedly empirical. Although we will point to relevant results in the theoretical literature, we will not present an abstract analysis of our results. It would obviously be highly desirable to characterize precisely the class of problems for which GSAT succeeds, and to provide precise bounds on its running time. Unfortunately, such results are extremely rare and difficult to obtain in work on incomplete algorithms for NP-hard problems. The situation is similar, for example, in research on simulated annealing, where the formal results show convergence in the limit (i.e., after an arbitrary amount of time), but few address the rate of convergence to a solution. In fact, a good, general characterization of the rate of convergence appears to be beyond the current state of the art of theoretical analysis (Bertsimas and Tsitsiklis 1992; Jerrum 1992). Current theory does, however, explain why GSAT performs well on certain limited classes of formulas (e.g.
2-SAT and over-constrained formulas), and the range of applicability of such formal results will certainly increase over time (Papadimitriou 1991; Koutsoupias and Papadimitriou 1992). We believe that experimental work should proceed in parallel with theoretical work, because real data can point out the problem-solving techniques that are worthy of formal analysis, and can help distinguish the asymptotic results that carry over to practical cases from those that do not.

The GSAT Procedure

GSAT performs a greedy local search for a satisfying assignment of a set of propositional clauses.[1] The procedure starts with a randomly generated truth assignment. It then changes ('flips') the assignment of the variable that leads to the largest increase in the total number of satisfied clauses. Such flips are repeated until either a satisfying assignment is found or a pre-set maximum number of flips (MAX-FLIPS) is reached. This process is repeated as needed up to a maximum of MAX-TRIES times. See Figure 1. (For a related approach, see Gu (1992).)

  Procedure GSAT
    Input: a set of clauses α, MAX-FLIPS, and MAX-TRIES
    Output: a satisfying truth assignment of α, if found
    for i := 1 to MAX-TRIES
      T := a randomly generated truth assignment
      for j := 1 to MAX-FLIPS
        if T satisfies α then return T
        p := a propositional variable such that a change in its truth
             assignment gives the largest increase in the total number
             of clauses of α that are satisfied by T
        T := T with the truth assignment of p reversed
      end for
    end for
    return "no satisfying assignment found"

  Figure 1: The GSAT procedure.

GSAT mimics the standard local search procedures used for finding approximate solutions to optimization problems (Papadimitriou and Steiglitz 1982) in that it only explores potential solutions that are "close" to the one currently being considered.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.
Specifically, we explore the set of assignments that differ from the current one on only one variable. The GSAT procedure requires the setting of two parameters, MAX-FLIPS and MAX-TRIES, which determine, respectively, how many flips the procedure will attempt before giving up and restarting, and how many times this search can be restarted before quitting. As a rough guideline, setting MAX-FLIPS equal to about ten times the number of variables is sufficient. The setting of MAX-TRIES will generally be determined by the total amount of time that one wants to spend looking for an assignment before giving up.

In our experience so far, there is generally a good setting of the parameters that can be used for all instances of an application. Thus, one can fine-tune the procedure by experimenting with various parameter settings. It is important to understand that we are not suggesting that the parameters need to be reset for each individual problem - only for a broad class, for example, coloring problems, random formulas, etc. Practically all optimization algorithms for intractable problems have parameters that must be set this way,[2] so this is not a particular disadvantage of GSAT. Furthermore, one could devise various schemes to automatically choose a good parameter setting by performing a binary search on different parameter settings on a sequence of problems.

[1] A clause is a disjunction of literals. A literal is a propositional variable or its negation. A set of clauses corresponds to a formula in conjunctive normal form (CNF): a conjunction of disjunctions. Thus, GSAT handles CNF-SAT.

[2] For example, see the discussion on integer programming methods in Fourer (1993).

Summary of Previous Results

In Selman et al. (1992), we showed that GSAT substantially outperforms backtracking search procedures, such as the Davis-Putnam procedure, on various classes of formulas. For example, we studied GSAT's performance on hard randomly generated formulas.
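The procedure of Figure 1 is compact enough to sketch directly. The following is a minimal, illustrative implementation, not the authors' code; the clause encoding (signed integers) and the deterministic tie-breaking by variable order are our own choices:

```python
import random

def gsat(clauses, n_vars, max_flips, max_tries, rng=None):
    """Sketch of the GSAT procedure of Figure 1. A clause is a list of
    nonzero ints: literal i means variable i is true, -i means false."""
    rng = rng or random.Random(0)

    def num_sat(assign):
        # total number of clauses satisfied by the assignment
        return sum(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

    for _ in range(max_tries):
        # start each try from a fresh random truth assignment
        assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if num_sat(assign) == len(clauses):
                return assign

            def score(v):
                # number of satisfied clauses after flipping v
                assign[v] = not assign[v]
                s = num_sat(assign)
                assign[v] = not assign[v]
                return s

            # flip a variable giving the largest number of satisfied clauses
            # (ties broken by variable order in this sketch)
            best = max(range(1, n_vars + 1), key=score)
            assign[best] = not assign[best]
    return None  # "no satisfying assignment found"
```

Note that the flip is made even when the best score is no better than the current one, which is exactly the sideways-move behavior discussed in the next section.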
(Note that generating hard random formulas for testing purposes is a challenging problem by itself; see Cheeseman et al. (1991), Mitchell et al. (1992), Williams and Hogg (1992), Larrabee and Tsuji (1993), and Crawford and Auton (1993).) The fastest backtrack-type procedures, using special heuristics, can handle hard random formulas of up to 350 variables in about one hour on a MIPS workstation (Buro and Kleine Büning 1992; Crawford and Auton 1993). Nevertheless, the running time clearly scales exponentially; for example, hard 450 variable formulas are undoable. Our current implementation of GSAT, using the random walk option discussed in Selman and Kautz (1993), solves hard 1500 variable formulas in under an hour. Selman et al. also showed that GSAT performs well on propositional encodings of the N-queens problem, hard instances of graph coloring problems (Johnson et al. 1991), and Boolean induction problems (Kamath et al. 1992).

The Search Space

Crucial to a better understanding of GSAT's behavior is the manner in which GSAT converges on an assignment. In Figure 2, we show how GSAT's search progresses on a randomly generated 100 variable problem with 430 clauses. Along the horizontal axis we give the number of flips, and along the vertical axis the number of clauses that still remained unsatisfied. (The final flip reduces this number to zero.) It is clear from the figure that most of the time is spent wandering on large plateaus. Only approximately the first 5% of the search is spent in pure greedy descent. We have observed qualitatively similar patterns over and over again. (See also the discussion on "sideways" moves in Selman et al. (1992), and Gent and Walsh (1993).)

The bottom panel in Figure 2 shows the search space for a 500 variable, 2150 clause random satisfiable formula. The long plateaus become even more pronounced, and the relative size of the pure greedy descent further diminishes. In general, the harder the formulas, the longer the plateaus.
Another interesting property of the graphs is that we see no upward moves. An upward move would occur when the best possible flip increases the number of unsatisfied clauses. This appears to be extremely rare, especially for the randomly generated instances.

The search pattern brings out an interesting difference between our use of GSAT and the standard use of local search techniques for obtaining good approximate solutions to combinatorial optimization problems (Lin and Kernighan 1973; Papadimitriou and Steiglitz 1982; Papadimitriou et al. 1990). In the latter, one generally halts the local search procedure as soon as no more improvement is found. Our figure shows that this is appropriate when looking for a near-solution, since most of the gain lies in the early, greedy descent part. On the other hand, when searching for a global minimum (i.e., a satisfying assignment), stopping when flips do not yield an immediate improvement is a poor strategy - most of the work occurs in satisfying the last few remaining clauses.

Automated Reasoning 47

[Figure 2: GSAT's search space on 100 and 500 variable formulas. Horizontal axis: number of flips (in thousands for the 500 variable formula); vertical axis: number of unsatisfied clauses.]

Note that finding an assignment that satisfies all clauses of a logical theory is essential in many reasoning and problem solving situations. For example, in our work on planning as satisfiability, a satisfying assignment corresponds to a correct plan (Kautz and Selman 1992). The near-satisfying assignments are of little use; they correspond to plans that contain one or more "magical" moves, where blocks suddenly shift positions.

Simulated Annealing

Simulated annealing is a stochastic local search method. It was introduced by Kirkpatrick et al. (1983) to tackle combinatorial optimization problems.
Instead of pure greedy local search, the procedure allows a certain amount of "noise", which enables it to make modifications that actually increase the cost of the current solution (even when this is not the best possible current modification). In terms of finding satisfying assignments, this means that the procedure sometimes allows flips that actually increase the total number of unsatisfied clauses. The idea is that by allowing random occurrences of such upward moves, the algorithm can escape local minima. The frequency of such moves is determined by a parameter T, called the temperature. (The higher the temperature, the more often upward moves occur.)

The parameter T is set by the user. Normally, one follows a so-called annealing schedule in which one slowly decreases the temperature until T reaches zero. It can be shown formally that provided one "cools" slowly enough, the system will find a global minimum. Unfortunately, the analysis uses an exponentially long annealing schedule, making it only of theoretical interest (Hajek 1988). Our real interest is in the rate of convergence to a global minimum for more practical annealing schedules. Current formal methods, however, appear too weak to tackle this question.[3]

It is interesting to compare the plot for GSAT (Figure 2) with that for annealing (Figure 3) on the same 100 variable random formula.[4] In the early part of the search, GSAT performs pure greedy descent. The descent is similar to the initial phase of an annealing schedule, although more rapid, because GSAT performs no upward moves. In the next stage, both algorithms must search along a series of long plateaus. GSAT makes mostly sideways moves, but takes advantage of a downward move whenever one arises. Annealing has reached the long, low-temperature "tail" of its schedule, where it is very unlikely to make an upward move, but allows both sideways and downward moves.
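The temperature-controlled noise rule can be sketched as follows. Here delta stands for the change in the number of satisfied clauses a candidate flip would cause; the acceptance probability e^(delta/T) matches the annealing algorithm described in this section's footnote, but the function name is ours and this is a sketch, not the authors' code:

```python
import math
import random

def anneal_flip(delta, T, rng):
    """Noise rule sketch: delta is the change in the number of satisfied
    clauses if the chosen variable is flipped. Flips that do not hurt
    (delta >= 0) are always made; cost-increasing moves (delta < 0) are
    made with probability e^(delta/T), so a higher temperature T means
    more frequent upward moves."""
    return delta >= 0 or rng.random() < math.exp(delta / T)
```

As T approaches zero this rule degenerates to sideways-and-downward moves only, which is the "tail" behavior compared against GSAT above.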
Because much of the effort expended by annealing in the initial high temperature part of the schedule is wasted, it typically takes longer to reach a solution. Note, for example, that after less than 500 moves GSAT has reached a satisfying assignment, while the annealing algorithm still has 5 unsatisfied clauses. A more rapid cooling schedule would, of course, more closely mimic GSAT.

Thus we can view GSAT as a very efficient method for executing the low-temperature tail of an annealing schedule. Furthermore, our experiments with several annealing schedules on hard, random formulas confirmed that most of the work in finding a true satisfying assignment is in the tail of the schedule. In fact, we were unable to find an annealing schedule that performed better than GSAT, although we cannot rule out the possibility that such a schedule exists. This is an inherent difficulty in the study of annealing approaches (Johnson et al. 1991).

Extensions

The basic GSAT algorithm is quite elementary, and one might expect that more sophisticated algorithms could yield better performance. We investigated several extensions to GSAT, and found a few that were indeed successful. But it is important to stress that the basic GSAT algorithm is very robust, in the sense that many intuitively appealing modifications do not in fact change its performance. We have found that experimental study can help reveal the assumptions, true or false, implicit in such intuitions, and can lead

[3] Recent work by Pinkas and Dechter (1992) and Jerrum (1992) provides some interesting formal convergence results for a special class of optimization problems.

[4] We use the annealing algorithm given in Johnson et al. (1991). Start with a randomly generated truth assignment; repeatedly pick a random variable, and compute how many more clauses become satisfied when the truth value of that variable is flipped - call this number δ. If δ ≥ 0, make the flip.
Otherwise, flip the variable with probability e^(δ/T). We slowly decrease the temperature from 10 down to 0.05.

to interesting questions for further empirical or theoretical research.

Improving the Initial Assignment

One natural intuition about GSAT is that it would be better to start with an initial assignment that is "close" to a solution, rather than with a totally random truth assignment. Indeed, the theoretical analysis of general greedy local search presented in (Minton et al. 1992) shows that the closer the initial assignment is to a solution, the more likely it is that local search will succeed.

Therefore we tried the following method for creating better initial assignments: First, a variable is assigned a random value. Next, all clauses containing that variable are examined, to see if the values of any unassigned variables are then determined by unit propagation. If so, these variables are assigned, and again unit propagation is performed. (If a clause is unsatisfied by the current partial assignment, it is simply ignored for the time being.) When no more propagations are possible, another unassigned variable is given a random value, and the process repeats.

Experiments revealed that this strategy did not significantly reduce the time required to find a solution. In retrospect, this failure can be explained by the shape of the search space, as discussed above. The descent from an initial state in which many clauses are unsatisfied to one in which only a few are unsatisfied occupies only a tiny fraction of the overall execution time, and initial unit propagation helps only in this phase of the search.
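The initialization method just described can be sketched as follows. This is an illustrative reconstruction under the same signed-integer clause encoding used earlier; the function name is ours, and, as in the text, clauses already falsified by the partial assignment are simply ignored:

```python
import random

def propagating_init(clauses, n_vars, rng=None):
    """Sketch: assign variables randomly one at a time, running unit
    propagation after each choice (clause = list of nonzero ints)."""
    rng = rng or random.Random(0)
    assign = {}

    def propagate():
        changed = True
        while changed:
            changed = False
            for c in clauses:
                # skip clauses already satisfied by the partial assignment
                if any(abs(l) in assign and assign[abs(l)] == (l > 0) for l in c):
                    continue
                unassigned = [l for l in c if abs(l) not in assign]
                # a single unassigned literal in an unsatisfied clause is forced;
                # an empty list means the clause is falsified, and we ignore it
                if len(unassigned) == 1:
                    l = unassigned[0]
                    assign[abs(l)] = l > 0
                    changed = True

    for v in range(1, n_vars + 1):
        if v not in assign:
            assign[v] = rng.random() < 0.5   # random choice for a free variable
            propagate()
    return assign
```

For instance, on the clauses (x1 v x2) and (not x1 v x3), whichever value x1 receives forces a propagation that satisfies one of the two clauses.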
This led us to consider another strategy for generating good initial assignments. Since GSAT typically performs many tries before finding a solution, we make use of the information gained from previous tries to create an initial assignment that is already some distance out on a low plateau, and thus actually closer to a solution. We do this by initializing with the bitwise average of the best assignment found in the two previous tries. The bitwise average of two truth assignments is an as- signment that agrees with the assignment of those letters on which the two given truth assignments are identical; the remaining letters are randomly assigned truth values. After many tries in which averaging is performed, the initial and final states become nearly identical. We therefore reset the initial assignment to a new random assignment every 10 to 50 tries.’ In Selman and Kautz (1993) we give an empirical eval- uation of the averaging strategy. We considered proposi- tional encodings of hard graph coloring problems used by Johnson et al. (199 1) to evaluate specialized graph coloring ‘We thank Geoffrey Hinton and Hector Levesque for suggest- ing this strategy to us. The strategy has some of the flavor of the approaches found in genetic algorithms (Davis 1987). Automated Reasoning 49 0 50 100 150 200 250 300 350 400 450 500 # flips Figure 3: Simulated annealing’s search space on a 100 variable formula. algorithms. Our experiments show that GSAT with the av- eraging strategy compares favorably with some of the best specialized graph coloring algorithms as studied by John- son. This is quite remarkable because GSAT does not use any special techniques for graph coloring. Handling Structure with Clause Weights As we noted in the introduction, the fact that GSAT does well on randomly-generated formulas does not necessarily indicate that it would also perform well on formulas that have some complex underlying structure. 
In fact, Ginsberg and Jonsson (1992) supplied us with some graph coloring problems that GSAT could not solve, even with many tries each with many flips. Their dependency-directed backtracking method could find solutions to these problems with little effort (Jonsson and Ginsberg 1993). In running GSAT on these problems, we discovered that at the end of almost every try the same set of clauses remained unsatisfied. As it turns out, the problems contained strong asymmetries. Such structure can lead GSAT into a state in which a few violated constraints are consistently "out-voted" by many satisfied constraints.

To overcome asymmetries, we added a weight to each clause (constraint).[6] A weight is a positive integer, indicating how often the clause should be counted when determining which variable to flip next. Stated more precisely, having a clause with weight L is equivalent to having the clause occur L times in the formula. Initially, all weights are set to 1. At the end of each try, we increment by 1 the weights of those clauses not satisfied by the current assignment. Thus the weights are dynamically modified during problem solving, again making use of the information gained by each try.

[6] Morris (1993) has independently proposed a similar approach.
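The weight mechanism amounts to two small changes to basic GSAT: score flips against a weighted count of satisfied clauses, and bump the weights of unsatisfied clauses at the end of each try. A minimal sketch (function names and the signed-integer clause encoding are ours):

```python
def clause_satisfied(clause, assign):
    """True iff some literal holds (literal i: var i true; -i: var i false)."""
    return any(assign[abs(l)] == (l > 0) for l in clause)

def weighted_num_sat(clauses, weights, assign):
    """Score used when picking a flip: a clause with weight L counts as if
    it occurred L times in the formula."""
    return sum(w for c, w in zip(clauses, weights) if clause_satisfied(c, assign))

def update_weights(clauses, weights, assign):
    """At the end of each try, increment the weight of every clause the
    final assignment leaves unsatisfied; all weights start at 1."""
    for i, c in enumerate(clauses):
        if not clause_satisfied(c, assign):
            weights[i] += 1
```

A clause that stays unsatisfied try after try thus accumulates weight until it can no longer be "out-voted" by the many satisfied clauses around it.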
For example, basic GSAT never found an assignment that had no unsatisfied clauses, but GSAT with weights found one in 80 tries out of 1000. Similarly, basic GSAT found an assignment with one unsatisfied clause only twice, while GSAT with weights found such an assignment 213 times. The weight strategy turns out to help not only on problems handcrafted to fool GSAT (including the similarly “mislead- ing” formulas discussed in Selman et al. (1992)), but also on many naturally occuring classes of structured satisfiability problems. A case in point are formulas that encode planning problems. As we reported in Kautz and Selman (1992), the basic GSAT algorithm had difficulty in solving formulas that encoded blocks-world planning problems. However, using weights GSAT’s solution times are comparable with those of the Davis-Putnam procedure on these formulas. Details appear in Selman and Kautz (1993). The regularities that appear in certain non-random classes of generated formulas tend to produce local minima that can trap a simple greedy algorithm. The weights, in effect, are used to fill in local minima while the search proceeds, and thus uncover the regularities. Note that this general strategy may also be useful in avoiding local minima in other opti- mization methods, and provides an interesting alternative to the use of random noise (as in simulated annealing). Conclusions The experiments we ran with GSAT have helped us under- stand the nature of the search space for propositional sat- isfiability, and have led us to develop interesting heuristics that augment the power of local search on various classes of satisfiability problems. We saw that search space is charac- terized by plateaus, which suggests that the crucial problem is to develop methods to quickly traverse broad flat regions. 
This is in contrast, for example, to much of the work on simulated annealing algorithms, which supports the use of slow cooling schedules to deal with search spaces characterized by jagged surfaces with many deep local minima.

We discussed two empirically successful extensions to GSAT, averaging and clause weights, that improve efficiency by re-using some of the information present in previous near-solutions. Each of these strategies, in effect, helps uncover hidden structure in the input formulas, and each was motivated by the shape of GSAT's search space. Given the success of these strategies and the fact that they are not very specific to the GSAT algorithm, it appears that they also hold promise for improving other methods for solving hard combinatorial search problems. In our future research we hope to improve our formal understanding of the benefits and applicability of these techniques.

Finally, we should note that we do not claim that GSAT and its descendants will be able to efficiently solve all interesting classes of satisfiability problems. Indeed, no one universal method is likely to prove successful for all instances of an NP-complete problem! Nonetheless, we believe it is worthwhile to develop techniques that extend the practical range of problems that can be solved by local search.

References

Bertsimas, D. and Tsitsiklis, J. (1992). Simulated Annealing, in Probability and Algorithms, National Academy Press, Washington, D.C., 17-29.

Buro, M. and Kleine Büning, H. (1992). Report on a SAT Competition. Technical Report #110, Dept. of Mathematics and Informatics, University of Paderborn, Germany, Nov. 1992.

Cheeseman, P., Kanefsky, B., and Taylor, W.M. (1991). Where the Really Hard Problems Are. Proc. IJCAI-91, 163-169.

Crawford, J.M. and Auton, L.D. (1993). Experimental Results on the Cross-Over Point in Satisfiability Problems. Proc. AAAI-93, to appear.

Davis, E.
(1987). Genetic Algorithms and Simulated Annealing, Pitman Series of Research Notes in Artificial Intelligence, London: Pitman; Los Altos, CA: Morgan Kaufmann.
Davis, M. and Putnam, H. (1960). A Computing Procedure for Quantification Theory. J. Assoc. Comput. Mach., 7, 201-215.
Feynman, R.P., Leighton, R.B., and Sands, M. (1989). The Feynman Lectures on Physics, Vol. 1, Reading, MA: Addison-Wesley.
Fourer, R., Gay, D.M., and Kernighan, B.W. (1993). AMPL: A Modeling Language for Mathematical Programming, San Francisco, CA: The Scientific Press.
Gent, I.P. and Walsh, T. (1993). Towards an Understanding of Hill-Climbing Procedures for SAT. Proc. AAAI-93, to appear.
Ginsberg, M. and Jonsson, A. (1992). Personal communication, April 1992.
Gu, J. (1992). Efficient Local Search for Very Large-Scale Satisfiability Problems. Sigart Bulletin, vol. 3, no. 1, 8-12.
Hajek, B. (1988). Cooling Schedules for Optimal Annealing. Math. Oper. Res., 13, 311-329.
Jerrum, M. (1992). Large Cliques Elude the Metropolis Process. Random Structures and Algorithms, vol. 3, no. 4, 347-359.
Johnson, J.L. (1989). A Neural Network Approach to the 3-Satisfiability Problem. Journal of Parallel and Distributed Computing, 6, 435-449.
Johnson, D.S., Aragon, C.R., McGeoch, L.A., and Schevon, C. (1991). Optimization by Simulated Annealing: an Experimental Evaluation; Part II, Graph Coloring and Number Partitioning. Operations Research, 39(3), 378-406.
Jónsson, A.K. and Ginsberg, M.L. (1993). Experimenting with New Systematic and Nonsystematic Search Techniques. Working Notes of the AAAI Spring Symposia, 1993.
Kamath, A.P., Karmarkar, N.K., Ramakrishnan, K.G., and Resende, M.G.C. (1992). A Continuous Approach to Inductive Inference. Mathematical Programming, 57, 215-238.
Kautz, H.A. and Selman, B. (1992). Planning as Satisfiability. Proc. ECAI-92, Vienna, Austria.
Kirkpatrick, S., Gelatt, C.D., and Vecchi, M.P. (1983). Optimization by Simulated Annealing. Science, 220, 671-680.
Larrabee, T. and Tsuji, Y. (1993). Evidence for a Satisfiability Threshold for Random 3CNF Formulas. Working Notes of the AAAI Spring Symposia, 1993.
Lin, S. and Kernighan, B.W. (1973). An Efficient Heuristic Algorithm for the Traveling-Salesman Problem. Operations Research, 21, 498-516.
Koutsoupias, E. and Papadimitriou, C.H. (1992). On the Greedy Algorithm for Satisfiability. Information Processing Letters, 43, 53-55.
McCarthy, J. and Hayes, P.J. (1969). Some Philosophical Problems From the Standpoint of Artificial Intelligence, in Machine Intelligence 4, Chichester, England: Ellis Horwood, 463ff.
Minton, S., Johnston, M.D., Philips, A.B., and Laird, P. (1992). Minimizing Conflicts: a Heuristic Repair Method for Constraint Satisfaction and Scheduling Problems. Artificial Intelligence, 58(1-3), 161-205.
Mitchell, D., Selman, B., and Levesque, H.J. (1992). Hard and Easy Distributions of SAT Problems. Proc. AAAI-92, San Jose, CA, 459-465.
Morris, P. (1993). The Breakout Method for Escaping from Local Minima. Proc. AAAI-93, to appear.
Papadimitriou, C.H. (1991). On Selecting a Satisfying Truth Assignment. Proc. FOCS-91, 163-169.
Papadimitriou, C.H., Schäffer, A., and Yannakakis, M. (1990). On the Complexity of Local Search. Proc. STOC-90.
Papadimitriou, C.H. and Steiglitz, K. (1982). Combinatorial Optimization. Englewood Cliffs, NJ: Prentice-Hall.
Pinkas, G. and Dechter, R. (1992). Proc. AAAI-92, 434-439.
Selman, B., Levesque, H.J., and Mitchell, D.G. (1992). A New Method for Solving Hard Satisfiability Problems. Proc. AAAI-92, San Jose, CA, 440-446.
Selman, B. and Kautz, H. (1993). Domain-Independent Extensions to GSAT: Solving Large Structured Satisfiability Problems. Proc. IJCAI-93, to appear.
Williams, C.P. and Hogg, T. (1992). Using Deep Structure to Locate Hard Problems. Proc. AAAI-92, 472-477.

Automated Reasoning 51
Finding Accurate Frontiers: A Knowledge-Intensive Approach to Relational Learning

Michael Pazzani and Clifford Brunk
Department of Information and Computer Science
University of California
Irvine, CA 92717
pazzani@ics.uci.edu, brunk@ics.uci.edu

Abstract

An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.

Introduction

There are two general approaches to learning classification rules. Empirical learning programs operate by finding regularities among a group of training examples. Analytic learning systems use a domain theory¹ to explain the classification of examples, and form a general description of the class of examples with the same explanation. In this paper, we discuss an approach to learning classification rules that integrates empirical and analytic learning methods. The goal of this integration is to create concept descriptions that are more accurate classifiers than both the original domain theory (which serves as input to the analytic learning component) and the rules that would arise if only the empirical learning component were used. We describe a new analytic learning method that returns a frontier (i.e., conjunctions and disjunctions of operational² and non-operational literals) instead of an operationalization (i.e., a conjunction of operational literals), and we demonstrate that there is an accuracy advantage in allowing an analytic learner to dynamically select the level of generality of the learned concept, as a function of the training data.
In previous work (Pazzani, et al., 1991; Pazzani & Kibler, 1992), we have described FOCL, a system that extends Quinlan's (1990) FOIL program in a number of ways, most significantly by adding a compatible explanation-based learning (EBL) component. In this paper we provide a brief review of FOIL and FOCL, then discuss how operationalizing a domain theory can adversely affect the accuracy of a learned concept. We argue that instead of operationalizing a domain theory, an analytic learner should return the most general implication of the domain theory, provided this implication is not less accurate than any more specialized implication. We discuss the computational complexity of an algorithm that enumerates all such descriptions and then describe a greedy algorithm that efficiently addresses the problem. Finally, we present a variety of experiments that indicate that replacing the operationalization algorithm of FOCL with the new analytic learning method results in more accurate learned concept descriptions.

1. We use domain theory to refer to a set of Horn-Clause rules given to a learner as an approximate definition of a concept, and learned concept to refer to the result of learning.
2. We use the term operational to refer to predicates that are defined extensionally (i.e., defined by a collection of facts). However, the results apply to any statically determined definition of operationality.

328 Pazzani

FOIL

FOIL learns classification rules by constructing a set of Horn Clauses in terms of known operational predicates. Each clause body consists of a conjunction of literals that cover some positive and no negative examples. FOIL starts to learn a clause body by finding the literal with the maximum information gain, and continues to add literals to the clause body until the clause does not cover any negative examples. After learning each clause, FOIL removes from further consideration the positive examples covered by that clause.
The learning process ends when all positive examples have been covered by some clause.

FOCL

FOCL extends FOIL by incorporating a compatible EBL component. This allows FOCL to take advantage of an initial domain theory. When constructing a clause body, there are two ways that FOCL can add literals. First, it can create literals via the same empirical method used by FOIL. Second, it can create literals by operationalizing a target concept, i.e., a non-operational definition of the concept to be learned (Mitchell, et al., 1986). FOCL uses FOIL's information-based evaluation function to determine whether to add a literal learned empirically or a conjunction of literals learned analytically. In general, FOCL learns clauses of the form T ← Oi ∧ Od ∧ Of, where Oi is an initial conjunction of operational literals learned empirically, Od is a conjunction of literals found by operationalizing the domain theory, and Of is a final conjunction of literals learned empirically.³ Pazzani, et al. (1991) demonstrate that FOCL can utilize incomplete and incorrect domain theories. We attribute this capability to its uniform use of an evaluation function to decide whether to include literals learned empirically or analytically.

3. Note the target concept is operationalized at most once per clause and that either Oi, Od, or Of may be empty.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Operationalization in FOCL differs from that of most EBL programs in that it uses a set of positive and negative examples, rather than a single positive example. A non-operational literal is operationalized by producing a specialization of a domain theory that is a conjunction of operational literals. When there are several ways of operationalizing a literal (i.e., there are multiple, disjunctive clauses), the information gain metric is used to determine which clause should be used, by computing the number of examples covered by each clause.
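The covering loop and gain computation just described can be made concrete with a small sketch. FOIL itself is relational; the propositional simplification below, the data, and all names are ours, intended only to illustrate the control structure.

```python
import math

# Propositional sketch of FOIL's separate-and-conquer loop (FOIL is
# actually relational; this simplification is ours). An example is a
# set of true atoms; a clause body is a list of atoms, all required.

def foil_gain(p0, n0, p1, n1):
    """Gain of specializing a clause: p0/n0 positives and negatives
    covered before, p1/n1 after adding the candidate literal."""
    if p1 == 0:
        return float("-inf")
    return p1 * (math.log2(p1 / (p1 + n1)) - math.log2(p0 / (p0 + n0)))

def learn_clause(pos, neg, literals):
    # Greedily add the max-gain literal until no negatives are covered
    # (assumes some literal always excludes negatives, else this loops).
    body = []
    while neg:
        def gain(lit):
            return foil_gain(len(pos), len(neg),
                             sum(1 for e in pos if lit in e),
                             sum(1 for e in neg if lit in e))
        best = max(literals, key=gain)
        body.append(best)
        pos = [e for e in pos if best in e]
        neg = [e for e in neg if best in e]
    return body

def foil(pos, neg, literals):
    clauses = []
    while pos:  # cover all positives, one clause at a time
        body = learn_clause(pos, neg, literals)
        clauses.append(body)
        pos = [e for e in pos if not all(l in e for l in body)]
    return clauses
```

For example, with positives {a,b} and {a,c} and negatives {b} and {c}, the atom a has the highest gain, and a single clause with body [a] covers every positive and no negative.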
Figure 1 displays a typical domain theory with an operationalization (f∧g∧h∧k∧l∧p∧q) represented as bold nodes.

Figure 1. The bold nodes represent one operationalization (f∧g∧h∧k∧l∧p∧q) of the domain theory.

In standard EBL, this path would be chosen if it were a proof of a single positive example. In FOCL, this path would be taken if the choice made at a disjunctive node had greater information gain (with respect to a set of positive and negative examples) than alternative choices.

Operationalization

The operationalization process yields a specialization of the target concept. Indeed, several systems designed to deal with overly general theories rely on the operationalization process to specialize domain theories (Flann & Dietterich, 1990; Cohen, 1992). However, fully operationalizing a domain theory can result in several problems:

1. Overspecialization of correct non-operational concepts. For example, if the domain theory in Figure 1 is completely correct, then a correct operational definition will consist of eight clauses. However, if there are few examples, or some combinations of operationalizations are rare, then there may not be a positive example corresponding to all combinations of all operationalizations of non-operational predicates. As a consequence, the learned concept may not include some combinations of operational predicates (e.g., i∧j∧k∧l∧r∧s∧t), although there is no evidence that these specializations are incorrect.

2. Replication of empirical learning. If there is a literal omitted from a clause of a non-operational predicate, then this literal will be omitted from each operationalization involving this predicate. For example, if the domain theory in Figure 1 erroneously contained the rule b ← f∧h instead of b ← f∧g∧h, then each operationalization of the target concept using this predicate (i.e., f∧h∧k∧l∧m∧n∧o, f∧h∧k∧l∧p∧q, and f∧h∧k∧l∧r∧s∧t) will contain the same omission.
FOCL can recover from this error if its empirical component can find the omitted literal, g. However, to obtain a correct learned concept description, FOCL would have to find the same condition independently three times on three different sets of examples. This replication of empirical learning is analogous to the replicated subtree problem in decision trees (Pagallo & Haussler, 1990). This problem should be most noticeable when there are few training examples. Under this circumstance, it is unlikely that empirical learning on several arbitrary partitions of a data set will be as accurate as learning from the larger data set.

3. Proofs involving incorrect non-operational predicates may be ignored. If the definition of a non-operational predicate (e.g., c in Figure 1) is not true of any positive example, then the analytic learner will not return any operationalization using this predicate. This reduces the usefulness of the domain theory for an analytic learner. For example, if c is not true of any positive example, then FOCL as previously described can find only two operationalizations: u∧v and w∧x. Again, we anticipate that this problem will be most severe when there are few training examples. With many examples, the empirical learner can produce accurate clauses that mitigate this problem.

Figure 2. The bold nodes represent one frontier of the domain theory, b∧((m∧n∧o)∨(p∧q)).

Frontiers of a Domain Theory

To address the problems raised in the previous section, we propose an analytic learner that does not necessarily fully operationalize target concepts. Instead, the learner returns a frontier of the domain theory. A frontier differs from an operationalization of a domain theory in three ways. The frontier represented by those nodes immediately above the line in Figure 2, b∧((m∧n∧o)∨(p∧q)), illustrates these differences:

1. Non-operational predicates (e.g., b) can appear in the frontier.
2. A disjunction of two or more clauses that define a non-operational predicate (e.g., (m∧n∧o)∨(p∧q)) can appear in the frontier.

3. A frontier does not necessarily include all literals in a conjunction (e.g., neither c, nor any specialization of c, appears in the frontier).

Combined, the first two distinguishing features of a frontier address the first two problems associated with operationalization. Overspecialization of correct non-operational concepts can be avoided if the analytic component returns a more general concept description. Similarly, replication of empirical learning can be avoided if the analytic component returns a frontier more general than an operationalization. For example, if the domain theory in Figure 2 erroneously contained the rule b ← f∧h instead of b ← f∧g∧h and the frontier f∧h∧k∧l∧d was returned, then an empirical learner would only need to be invoked once to specialize this conjunction by adding g. Of course, if one of the clauses defining d were incorrect, it would make sense to specialize d. However, operationalization is not the only means of specialization. For example, if the analytic learner returned f∧h∧k∧l∧((m∧n∧o)∨(p∧q)), then the replication of induction problem could also be avoided. This would be desirable if the clause d ← r∧s∧t were incorrect. The third problem with operationalization can be addressed by removing some literals from a conjunction. For example, if no positive examples use a ← b∧c∧d because c is not true of any positive example, then the analytic learner might want to consider ignoring c and trying a ← b∧d. This would allow potentially useful parts of the domain theory (e.g., b and d) to be used by the analytic learner, even though they may be conjoined with incorrect parts. The notion of a frontier has been used before in analytic learning.
However, the previous work has assumed that the domain theory is correct and has focused on increasing the utility of learned concepts (Hirsh, 1988; Keller, 1988; Segre, 1987) or learning from intractable domain theories (Braverman & Russell, 1988). Here, we do not assume that the domain theory is correct. We argue that to increase the accuracy of learned concepts, an analytic learner should have the ability to select the generality of a frontier derived from a domain theory. To validate our hypothesis, we will replace the operationalization procedure in FOCL with an analytic learner that returns a frontier. In order to avoid confusion with FOCL, we use the name FOCL-FRONTIER to refer to the system that combines this new analytic learner with an empirical learning component based on FOIL. In general, FOCL-FRONTIER learns clauses of the form T ← Oi ∧ Fd ∧ Of, where Oi is an initial conjunction of operational literals learned empirically, Fd is a frontier of the domain theory, and Of is a final conjunction of literals learned empirically. We anticipate that due to its use of a frontier rather than an operationalization, FOCL-FRONTIER will be more accurate than FOCL, particularly when there are few training examples or the domain theory is very accurate.

Enumerating Frontiers of a Domain Theory

Formally, a frontier can be defined as follows. Let b represent a conjunction of literals and p represent a single literal.

1. The target concept is a frontier.
2. A new frontier can be formed from an existing frontier by replacing a literal p with b1∨...∨bi∨...∨bn, provided there are rules p ← b1, ..., p ← bi, ..., p ← bn.
3. A new frontier can be formed from an existing frontier by replacing a disjunction b1∨...∨bi-1∨bi∨bi+1∨...∨bn with b1∨...∨bi-1∨bi+1∨...∨bn for any i. This deletes bi.
4. A new frontier can be formed from an existing frontier by replacing a conjunction p1∧...∧pi-1∧pi∧pi+1∧...∧pn with p1∧...∧pi-1∧pi+1∧...∧pn for any i. This deletes pi.
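To make the recursive definition concrete, the following small program applies the expansion and deletion rules exhaustively to a toy domain theory. The theory (t ← a∧b, a ← p, a ← q, with b, p, q operational), the representation, and all names are ours, not taken from the paper.

```python
# Exhaustive frontier enumeration for a toy domain theory (our own
# illustration, assuming: t <- a & b; a <- p; a <- q; b,p,q operational).
# An expression is ("lit", name), ("and", frozenset), or ("or", frozenset).

RULES = {"t": [("a", "b")], "a": [("p",), ("q",)]}

def lit(name):
    return ("lit", name)

def canon(node):
    """Flatten trivial one-child and/or nodes into their child."""
    kind, kids = node
    if kind == "lit":
        return node
    kids = frozenset(canon(k) for k in kids)
    return next(iter(kids)) if len(kids) == 1 else (kind, kids)

def neighbors(node):
    kind = node[0]
    if kind == "lit":
        name = node[1]
        if name in RULES:  # rule 2: expand a literal into its rule bodies
            bodies = [canon(("and", frozenset(lit(x) for x in body)))
                      for body in RULES[name]]
            yield canon(("or", frozenset(bodies)))
        return
    kids = node[1]
    if len(kids) >= 2:     # rules 3 and 4: delete one disjunct/conjunct
        for k in kids:
            yield canon((kind, kids - {k}))
    for k in kids:         # apply the operators inside one child
        for n in neighbors(k):
            yield canon((kind, (kids - {k}) | {n}))

def all_frontiers(target):
    seen, todo = set(), [canon(lit(target))]
    while todo:
        f = todo.pop()
        if f not in seen:
            seen.add(f)
            todo.extend(neighbors(f))
    return seen
```

For this theory the closure contains 10 distinct frontiers, ranging from the target t itself through mixed forms such as (p∨q)∧b down to single operational literals.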
One approach to analytic learning would be to enumerate all possible frontiers. The information gain of each frontier could be computed, and if the frontier with the maximum information gain has greater information gain than any literal found empirically, then this frontier would be added to the clause under construction. Such an approach would be impractical for all but the most trivial, non-recursive domain theories. Since each frontier specifies a unique combination of leaf nodes of an and-or tree (i.e., selecting all leaves of a subtree is equivalent to selecting the root of the subtree, and selecting no leaves of a subtree is equivalent to deleting the root of a subtree), there are 2^k frontiers of a domain theory that has k nodes in the and/or tree. For example, if every non-operational predicate has n clauses, each clause is a conjunction of m literals, and inference chains have a depth of d and-nodes, then the number of frontiers is 2^(m^d n^d).

Deriving Frontiers from the Target Concept

Due to the intractability of enumerating all possible frontiers, we propose a heuristic approach based upon hill-climbing search. The frontier is initialized to the target concept. A set of transformation operators is applied to the current frontier to create a set of possible frontiers. If none of the possible frontiers has information gain greater than that of the current frontier, then the current frontier is returned. Otherwise, the potential frontier with the maximum information gain becomes the current frontier and the process of applying transformation operators is repeated. The following transformation operators are used:

• Clause specialization: If there is a frontier containing a literal p, and there are exactly n rules of the form p ← b1, ..., p ← bi, ..., p ← bn, then the n frontiers formed by replacing p with bi are evaluated.
4. The information gain of a frontier is calculated in the same manner as Quinlan (1990) calculates the information gain of a literal: by counting the number of positive and negative examples that meet the conditions represented by the frontier.
5. The numeric restrictions placed upon the applicability of each operator are for efficiency reasons (i.e., to ensure that each unique frontier is evaluated only once).

• Specialization by removing disjunctions:
a. If there is a frontier containing a literal p, and there are n rules of the form p ← b1, ..., p ← bi, ..., p ← bn, then the n frontiers formed by replacing p with b1∨...∨bi-1∨bi+1∨...∨bn are evaluated (provided n>2).
b. If there is a frontier containing a disjunction b1∨...∨bi-1∨bi∨bi+1∨...∨bm, then the m frontiers replacing this disjunction with b1∨...∨bi-1∨bi+1∨...∨bm are evaluated (provided m>2).

• Generalization by adding disjunctions: If there is a frontier containing a (possibly trivial) disjunction of conjunctions of literals b1∨...∨bi-1∨bi+1∨...∨bn and there are rules of the form p ← b1, ..., p ← bi-1, p ← bi, p ← bi+1, ..., p ← bn and m<n-1, then the n-m frontiers replacing the disjunction b1∨...∨bi-1∨bi+1∨...∨bn with b1∨...∨bi-1∨bi∨bi+1∨...∨bn are evaluated. This is implemented efficiently by keeping a derivation of each frontier, rather than by searching for frontiers matching this pattern.

• Generalization by literal deletion: If there is a frontier containing a conjunction of literals p1∧...∧pi-1∧pi∧pi+1∧...∧pn, then the n frontiers replacing this conjunction with p1∧...∧pi-1∧pi+1∧...∧pn are evaluated.

There is a close correspondence between the recursive definition of a frontier and these transformation operators. However, there is not a one-to-one correspondence, because we have found empirically that in some situations it is advantageous to build a disjunction by adding disjuncts and in other cases it is advantageous to build a disjunction by removing disjuncts.
The former tends to occur when few clauses of a predicate are correct, while the latter tends to occur when few clauses are incorrect. Note that the first three frontier operators derive logical entailments from the domain theory while the last does not. Deleting literals from a conjunction is a means of finding an abductive hypothesis. For example, in EITHER (Ourston & Mooney, 1990), a literal can be assumed to be true during the proof process of a single example. One difference between FOCL-FRONTIER and the abduction process of EITHER is that EITHER considers all likely assumptions for each unexplained positive example, while FOCL-FRONTIER uses a greedy approach to deletion based on an evaluation of the effect on a set of examples.

Evaluation

In this section, we report on a series of experiments in which we compare FOCL using empirical learning alone (EMPIRICAL), FOCL using a combination of empirical learning and operationalization, and FOCL-FRONTIER. We evaluate the performance of each algorithm in several domains. The goal of these experiments is to substantiate the claim that analytic learning via frontier transformations results in more accurate learned concept descriptions than analytic learning via operationalization. Throughout this paper, we use an analysis of variance to determine if the difference in accuracy between algorithms is significant.

Figure 3. A comparison of FOCL's empirical component (EMPIRICAL), FOCL using both empirical learning and operationalization, and FOCL-FRONTIER in the chess end game domain. Upper: the accuracy of EMPIRICAL (given training sets of size 50 and 200) and the average accuracy of the initial theory as a function of the number of changes to the domain theory. Lower: the accuracy of FOCL and FOCL-FRONTIER on the same data.
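The greedy, gain-guided search over frontiers described in the preceding sections can be sketched compactly. For brevity only the literal-deletion operator is implemented below; the data, names, and the scoring function (a simplified stand-in for FOIL's information gain) are our own illustrative choices.

```python
import math

# Hill-climbing over frontiers guided by coverage counts (a sketch).
# A frontier is a nested and/or expression over operational literals;
# an example is (set_of_true_literals, label). Only the literal-deletion
# operator is shown; the score is a simplified stand-in for FOIL gain.

def holds(f, ex):
    kind = f[0]
    if kind == "lit":
        return f[1] in ex
    if kind == "and":
        return all(holds(c, ex) for c in f[1])
    return any(holds(c, ex) for c in f[1])  # "or" node

def score(f, examples):
    pos = sum(1 for ex, y in examples if y and holds(f, ex))
    neg = sum(1 for ex, y in examples if not y and holds(f, ex))
    if pos == 0:
        return float("-inf")
    all_pos = sum(1 for _, y in examples if y)
    # positives kept, weighted by how much purer the coverage became
    return pos * (math.log2(pos / (pos + neg))
                  - math.log2(all_pos / len(examples)))

def delete_neighbors(f):
    # literal deletion: drop one conjunct of an and-node (keep >= 1)
    if f[0] == "and" and len(f[1]) >= 2:
        for k in f[1]:
            rest = tuple(c for c in f[1] if c != k)
            yield rest[0] if len(rest) == 1 else ("and", rest)

def hill_climb(f, examples):
    while True:
        cands = list(delete_neighbors(f))
        if not cands:
            return f
        best = max(cands, key=lambda g: score(g, examples))
        if score(best, examples) <= score(f, examples):
            return f  # no neighbor beats the current frontier
        f = best
```

Starting from a∧b∧c where c never holds in a positive example, one deletion step recovers a∧b, and the search then stops because deleting either remaining literal would cover a negative example.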
Chess End Games

The first problem we investigate is learning rules that determine if a chess board containing a white king, white rook, and black king is in an illegal configuration. This problem has been studied using empirical learning systems by Muggleton, et al. (1989) and Quinlan (1990). Here, we compare the accuracy of FOCL-FRONTIER and FOCL using a methodology identical to that used by Pazzani and Kibler (1992) to compare FOCL and FOIL. In these experiments the initial theory given to FOCL and FOCL-FRONTIER was created by introducing either 0, 1, 2, 4, 6, 8, 10, 12, 14, 16, 20, 24, 30 or 36 random modifications to a correct domain theory that encodes the relevant rules of chess. Four types of modifications were made: deleting a literal from a clause, deleting a clause, adding a literal to a clause, and adding a clause. Added clauses are constructed with random literals. Each clause contains at least one literal; there is a 0.5 probability that a clause will have at least two literals, a 0.25 probability of containing at least three, and so on. We ran experiments using 25, 50, 75, 150, and 200 training examples. On each trial the training and test examples were drawn randomly from the set of 8^6 possible board configurations. We ran 32 trials of each algorithm and measured the accuracy of the learned concept description on 1000 examples. For each algorithm the curves for 50 and 200 training examples are presented. Figure 3 (upper) graphs the accuracy of the initial theory and the concept description learned by FOCL's empirical component as functions of the number of modifications to the correct domain theory. Figure 3 (lower) graphs the accuracy of FOCL and FOCL-FRONTIER. The following conclusions may be drawn from these experiments. First, FOCL-FRONTIER is more accurate than FOCL when there are few training examples.
An analysis of variance indicates that the analytic learning algorithm has a significant effect on the accuracy (p<.0001) when there are 25, 50 and 75 training examples. However, when there are 150 or 200 training examples, there is no significant difference in accuracy between the analytic learning algorithms, because both analytic learning algorithms (as well as the empirical algorithm) are very accurate on this problem with larger numbers of training examples. Second, the difference in accuracy between FOCL and FOCL-FRONTIER is greatest when the domain theory has few errors. With 25 and 50 examples, there is a significant interaction between the number of modifications to the domain theory and the algorithm (p<.0001 and p<.005, respectively). During these experiments, we also recorded the amount of work EMPIRICAL, FOCL and FOCL-FRONTIER performed while learning a concept description. Pazzani and Kibler (1990) argue that the number of times information gain is computed is a good metric for describing the size of the search space explored by FOCL. Figure 4 graphs these data as a function of the number of modifications to the domain theory for learning with 50 training examples. FOCL-FRONTIER tests only a small percentage of the 2^25 frontiers of this domain theory with 25 leaf nodes. The frontier approach requires less work than operationalization until the domain theory is fairly inaccurate. This occurs, in spite of the larger branching factor, because the frontier approach generates more general concepts with fewer clauses than those created by operationalization (see Table 1). When the domain theory is very inaccurate, FOCL and FOCL-FRONTIER perform slightly more work than EMPIRICAL because there is a small overhead in determining that the domain theory has no information gain.
Figure 4: The number of times the information gain metric is computed for each algorithm.

FOCL (92.6% accurate):
illegal(WKr,WKf,WRr,WRf,BKr,BKf) ← equal(BKf,WRf).
illegal(WKr,WKf,WRr,WRf,BKr,BKf) ← equal(BKr,WRr).
illegal(WKr,WKf,WRr,WRf,BKr,BKf) ← near(WKr,BKr) ∧ near(WKf,BKf).
illegal(WKr,WKf,WRr,WRf,BKr,BKf) ← equal(BKr,WKf) ∧ equal(WKr,BKr) ∧ near(WKf,BKf).
illegal(WKr,WKf,WRr,WRf,BKr,BKf) ← equal(WKr,WRr) ∧ equal(WKf,WRf).

FOCL-FRONTIER (98.3% accurate):
illegal(WKr,WKf,WRr,WRf,BKr,BKf) ← k_attack(WKr,WKf,BKr,BKf) ∨ r_attack(WRr,WRf,BKr,BKf).
illegal(WKr,WKf,WRr,WRf,BKr,BKf) ← equal(BKf,WRf).
illegal(WKr,WKf,WRr,WRf,BKr,BKf) ← same_pos(WKr,WKf,WRr,WRf).

Table 1. Typical definitions of illegal. The variables refer to the rank and file of the white king, white rook, and the black king. The domain theory was 91.0% accurate and 50 training examples were used.

Educational Loans

The second problem studied involves determining if a student is required to pay back a loan, based on enrollment and employment information. This theory was constructed by an honors student who had experience processing loans. This problem, available from the UC Irvine repository, was previously used by an extension to FOCL that revises domain theories (Pazzani & Brunk, 1991). The domain theory is 76.8% accurate on a set of 1000 examples. We ran 16 trials of FOCL and FOCL-FRONTIER with this domain theory on randomly selected training sets ranging from 10 to 100 examples and measured the accuracy of the learned concept by testing on 200 distinct test examples. The results indicate that the learning algorithm has a significant effect on the accuracy of the learned concept (p<.0001). Figure 5 plots the mean accuracy of the three algorithms as a function of the number of training examples.

Figure 5.
The accuracy of FOCL's empirical component alone, FOCL with operationalization, and FOCL-FRONTIER on the student loan data.

Nynex Max

Nynex Max (Rabinowitz, et al., 1991) is an expert system that is used by NYNEX (the parent company of New York Telephone and New England Telephone) at several sites to determine the location of a malfunction for customer-reported telephone troubles. It can be viewed as solving a classification problem where the input is data such as the type of switching equipment and various voltages and resistances, and the output is the location to which a repairman should be dispatched (e.g., the problem is in the customer's equipment, the customer's wiring, the cable facilities, or the central office). Nynex Max requires some customization at each site in which it is installed.

Figure 6. The accuracy of the learning algorithms at customizing the Max knowledge-base.

In this experiment, we compare the effectiveness of FOCL-FRONTIER and FOCL at customizing the Nynex Max knowledge-base. The initial domain theory is taken from one site, and the training data is the desired output of Nynex Max at a different site. Figure 6 shows the accuracy of the learning algorithms (as measured on 200 independent test examples), averaged over 10 runs, as a function of the number of training examples. FOCL-FRONTIER is more accurate than FOCL (p<.0001). This occurs because the initial domain theory is fairly large (about 75 rules), very disjunctive, and fairly accurate (about 95.4%). Under these circumstances, FOCL requires many examples to form many operational rules, while FOCL-FRONTIER learns fewer, more general rules. FOCL-FRONTIER is the only algorithm to achieve an accuracy significantly higher than the initial domain theory.

Related Work

Cohen (1990; 1991a) describes the ELGIN systems, which make use of background knowledge in a way similar to FOCL-FRONTIER.
In particular, one variant of ELGIN, called ANA-EBL, finds concepts in which all but k nodes of a proof tree are operational. The algorithm, which is exponential in k, learns more accurate rules from overly general domain theories than an algorithm that uses only operational predicates. A different variant of ELGIN, called K-TIPS, selects k nodes of a proof tree and returns the most general nodes in the proof tree that are not ancestors of the selected nodes. This enables the system to learn a set of clauses containing at most k literals from the proof tree. Some of the literals may be non-operational, and some subtrees may be deleted from the proof tree. In some ways, ELGIN is like the optimal algorithm we described above that enumerates all possible frontiers. A major difference is that ELGIN does not allow disjunction in proofs, and for efficiency reasons is restricted to using small values of k. FOCL-FRONTIER is not restricted in such a fashion, since it relies on hill-climbing search to avoid enumerating all possible hypotheses. In addition, the empirical learning component of FOCL-FRONTIER allows it to learn from overly specific domain theories in addition to overly general domain theories. In the GRENDEL system, Cohen (1991b) uses a grammar rather than a domain theory to generate hypotheses. Cohen shows that this grammar provides an elegant way to describe the hypothesis space searched by FOCL. It is possible to encode the domain theory in such a grammar. In addition, it is possible to encode the hypothesis space searched by FOIL in the grammar. GRENDEL uses a hill-climbing search method similar to the operationalization process in FOCL to determine which hypothesis to derive from the grammar. Cohen (1991b) shows that augmenting GRENDEL with advice to prefer grammar rules corresponding to the domain theory results in concepts that are as accurate as those of FOCL (with operationalization) on the chess end game problem.
The primary difference between GRENDEL and FOCL-FRONTIER is that FOCL-FRONTIER contains operators for deleting literals from and-nodes and for incorporating several disjunctions from or-nodes. However, due to the generality of GRENDEL's grammatical approach, it should be possible to extend GRENDEL by writing a preprocessor that converts a domain theory into a grammar that simulates these operators. Here, we have shown that these operators result in increased accuracy, so it is likely that a grammar based on the operators proposed here would increase GRENDEL's accuracy. FOCL-FRONTIER is in some ways similar to theory revision systems, like EITHER (Ourston & Mooney, 1990). However, theory revision systems have an additional goal of making minimal revisions to a theory, while FOCL-FRONTIER uses a set of frontiers from the domain theory (and/or empirical learning) to discriminate positive from negative examples. EITHER deals with propositional theories and would not be able to revise any of the relational theories used in the experiments here. A more recent theory revision system, FORTE (Richards & Mooney, 1991), is capable of revising relational theories. It has been tested on one problem on which we have run FOCL, the illegal chess problem from Pazzani & Kibler (1992). Richards (1992) reports that with 100 training examples FOCL is significantly more accurate than FORTE (97.9% and 95.6% respectively). For this problem, FOCL-FRONTIER is 98.5% accurate (averaged over 20 trials). FORTE has a problem with this domain, since it contains two overly general clauses for the same relation and its revision operators assume that at most one clause is overly general. Although it is not possible to draw a general conclusion from this single example, it does indicate that there are techniques for taking advantage of information contained in a theory that FOCL utilizes that are not incorporated into FORTE.

Future Work

Here, we have described one set of general-purpose operators that derive frontiers. We are currently experimenting with more special-purpose operators designed to handle commonly occurring problems in knowledge-based systems. For example, one might wish to consider operators that negate a literal in a frontier (since we occasionally omit a not from rules) or that change the order of arguments to a predicate. Initial experiments (Pazzani, 1992) with one such operator in FOCL (replacing one predicate with a related predicate) yielded promising results.

Conclusion

In this paper, we have presented an approach to integrating empirical and analytic learning that differs from previous approaches in that it uses an information-theoretic metric on a set of training examples to determine the generality of the concepts derived from the domain theory. Although it is possible that the hill-climbing search algorithm will find a local maximum, experimentally we have demonstrated that in situations where there are few training examples, the domain theory is very accurate, or the domain theory is highly disjunctive, this approach learns more accurate concept descriptions than either empirical learning alone or a similar approach that integrates empirical learning and operationalization. From this we conclude that there is an advantage in allowing the analytic learner to select the generality of a frontier derived from a domain theory, both in terms of accuracy and in terms of the amount of work required to learn a concept description.

Acknowledgments

This research is supported by an Air Force Office of Scientific Research Grant, F49620-92-J-030, and by the University of California, Irvine through an allocation of computer time. We thank Kamal Ali, William Cohen, Andrea Danyluk, Caroline Ehrlich, Dennis Kibler, Ray Mooney and Jim Wogulis for comments on an earlier draft of this paper.
References

Braverman, M. & Russell, S. (1988). Boundaries of operationality. Proceedings of the Fifth International Conference on Machine Learning (pp. 221-233). Ann Arbor, MI: Morgan Kaufmann.
Cohen, W. (1990). Explanation-based generalization as an abstraction mechanism in concept learning. Technical Report DCS-TR-271 (Ph.D. dissertation). Rutgers University.
Cohen, W. (1991a). The generality of overgenerality. Proceedings of the Eighth International Workshop on Machine Learning (pp. 490-494). Evanston, IL: Morgan Kaufmann.
Cohen, W. (1991b). Grammatically biased learning: Learning Horn theories using an explicit antecedent description language. AT&T Bell Laboratories Technical Report 11262-910708-16TM (available from the author).
Cohen, W. (1992). Abductive explanation-based learning: A solution to the multiple inconsistent explanation problem. Machine Learning.
Flann, N., & Dietterich, T. (1990). A study of explanation-based methods for inductive learning. Machine Learning, 4, 187-226.
Hirsh, H. (1988). Reasoning about operationality for explanation-based learning. Proceedings of the Fifth International Conference on Machine Learning (pp. 214-220). Ann Arbor, MI: Morgan Kaufmann.
Keller, R. (1988). Operationality and generality in explanation-based learning: Separate dimensions or opposite end-points. AAAI Spring Symposium on Explanation-Based Learning. Stanford University.
Mitchell, T., Keller, R., & Kedar-Cabelli, S. (1986). Explanation-based learning: A unifying view. Machine Learning, 1, 47-80.
Muggleton, S., Bain, M., Hayes-Michie, J., & Michie, D. (1989). An experimental comparison of human and machine learning formalisms. Proceedings of the Sixth International Workshop on Machine Learning (pp. 113-118). Ithaca, NY: Morgan Kaufmann.
Ourston, D., & Mooney, R. (1990). Changing the rules: A comprehensive approach to theory refinement. Proceedings of the Eighth National Conference on Artificial Intelligence (pp. 815-820). Boston, MA: Morgan Kaufmann.
Pagallo, G., & Haussler, D. (1990). Boolean feature discovery in empirical learning. Machine Learning, 5, 71-100.
Pazzani, M., & Brunk, C. (1991). Detecting and correcting errors in rule-based expert systems: An integration of empirical and explanation-based learning. Knowledge Acquisition, 3, 157-173.
Pazzani, M., Brunk, C., & Silverstein, G. (1991). A knowledge-intensive approach to learning relational concepts. Proceedings of the Eighth International Workshop on Machine Learning (pp. 432-436). Evanston, IL: Morgan Kaufmann.
Pazzani, M., & Kibler, D. (1992). The role of prior knowledge in inductive learning. Machine Learning.
Pazzani, M. (1992). When Prior Knowledge Hinders Learning. AAAI Workshop on Constraining Learning with Prior Knowledge. San Jose.
Quinlan, J.R. (1990). Learning logical definitions from relations. Machine Learning, 5, 239-266.
Rabinowitz, H., Flamholz, J., Wolin, E., & Euchner, J. (1991). Nynex Max: A telephone trouble screening expert system. In R. Smith & C. Scott (Eds.) Innovative Applications of Artificial Intelligence, 3, 213-230.
Richards, B. (1992). An Operator-Based Approach to First-Order Theory Revision. Ph.D. Thesis. University of Texas, Austin.
Richards, B. & Mooney, R. (1991). First-Order Theory Revision. Proceedings of the Eighth International Workshop on Machine Learning (pp. 447-451). Evanston, IL: Morgan Kaufmann.
Segre, A. (1987). On the operationality/generality trade-off in explanation-based learning. Proceedings of the Tenth International Joint Conference on Artificial Intelligence (pp. 242-248). Milan, Italy: Morgan Kaufmann.
nctions s

Mehran Sahami
Department of Computer Science
Stanford University
Stanford, CA 94305
sahami@cs.Stanford.EDU

Abstract

This paper investigates an algorithm for the construction of decision trees comprised of linear threshold units and also presents a novel algorithm for the learning of non-linearly separable boolean functions using Madaline-style networks which are isomorphic to decision trees. The construction of such networks is discussed, and their performance in learning is compared with standard Back-Propagation on a sample problem in which many irrelevant attributes are introduced. Littlestone's Winnow algorithm is also explored within this architecture as a means of learning in the presence of many irrelevant attributes. The learning ability of this Madaline-style architecture on non-optimal (larger than necessary) networks is also explored.

Introduction

We initially examine a non-incremental algorithm that learns binary classification tasks by producing decision trees of linear threshold units (LTU trees). This decision tree bears some similarity to the decision trees produced by ID3 (Quinlan 1983) and Perceptron Trees (Utgoff 1988), yet it seems to promise more generality, as each node in our tree implements a separate linear discriminant function, while only the leaves of a Perceptron Tree have this generality and the remaining nodes in both the Perceptron Tree and the trees produced by ID3 perform a test on only one feature. Recently, Brodley and Utgoff (1992) have also shown that the use of multivariate tests at each node of a decision tree often provides greater generalization when learning concepts in which there are irrelevant attributes.
Furthermore, as presented in (Brent 1990), we show how such an LTU tree can be transformed into a three-layer neural network with two hidden layers and one output layer (the input layer is not counted) and can often be trained much more quickly than the standard Back-Propagation algorithm applied to an entire network (Rumelhart, Hinton, & Williams 1986). After examining this transformation, a new incremental learning algorithm, based on a Madaline-style architecture (Ridgway 1962, Widrow & Winter 1988), is presented in which learning is performed using such three-layer networks. The effectiveness of this algorithm is assessed on a sample non-linearly separable boolean function in order to perform comparisons with the LTU tree algorithm and a similar network trained using standard Back-Propagation. Being primarily interested in functions in which many irrelevant attributes exist, we also explore the performance of the Winnow algorithm (Littlestone 1988, 1991) (which has proven effective in learning linearly separable functions in the presence of many irrelevant attributes) within the Madaline-style learning architecture. We contrast how it performs in learning our sample non-linearly separable function with the classical fixed increment (Perceptron) updating method (Duda & Hart 1973). We also examine the effectiveness of such learning procedures in "non-optimal" Madaline-style networks, and comment on possible future extensions of this learning architecture.

The LTU Tree Algorithm

The tree-building algorithm is non-incremental, requiring that the set of all training instances, S, be available from the outset.1 We begin with the root node of the tree and produce a hyperplane to separate our training set using any means we wish (in our trials, Back-Propagation was applied to one node to produce a single separating hyperplane) into the sets S0 and S1, where Si (i = 0, 1) indicates the set of instances classified as i by the separating hyperplane.
If there are instances in S0 which should be classified as 1 (called "incorrect 0's"), we then create a left child node and recursively apply the algorithm on the left child using S0 as the training set. Similarly, if any instances in S1 should be classified as 0 ("incorrect 1's"), we create a right child node and again recursively apply our algorithm on the right child using S1 as the training set. Thus the algorithm normally terminates when all of the instances in the original training set, S, are correctly classified by our tree. The classification procedure using the completed tree requires us to simply begin at the root node and determine whether the given instance is classified as a 0 or 1 by the hyperplane stored there. A classification of 0 means we follow the left branch, otherwise we follow the right, and recursively apply this procedure with the hyperplane stored at the appropriate child node. The classification given at a leaf node in the tree is the final output of the classification procedure. Note that the leaves in this decision tree do not classify all instances into one labeling; rather, the classification for the instance is the result of applying the linear discriminator stored in the leaf node.

1Notation and naming conventions in the description of the LTU tree algorithm are from Brent (1990).

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

For our experiments, certain (reasonable) limiting assumptions were placed on the building of such LTU trees in order to prevent needlessly complex trees, thereby helping to improve generalization and reduce the algorithm's execution time. These included setting a maximum tree depth of 10 layers and tolerating a certain percentage of error in each individual node.
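The build-and-classify procedure above can be sketched as follows. This is a minimal sketch, not the paper's implementation: `fit_hyperplane` is a hypothetical stand-in (a simple class-means separator) for the single-unit Back-Propagation fit described later, and the error-tolerance condition is omitted for brevity.

```python
import numpy as np

class Node:
    def __init__(self, w, b):
        self.w, self.b = w, b            # unit classifies 1 iff w.x + b >= 0
        self.left = self.right = None    # subtrees for the 0- and 1-classified sets

def fit_hyperplane(X, y):
    """Hypothetical separator (class means); the paper trains one unit
    with Back-Propagation here instead."""
    if not ((y == 1).any() and (y == 0).any()):
        return np.zeros(X.shape[1]), 0.0
    w = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    return w, -float(w @ X.mean(axis=0))

def build(X, y, depth=0, max_depth=10):
    w, b = fit_hyperplane(X, y)
    node = Node(w, b)
    pred = (X @ w + b >= 0).astype(int)
    S0, S1 = pred == 0, pred == 1
    if depth < max_depth and (y[S0] == 1).any():   # "incorrect 0's" -> left child
        node.left = build(X[S0], y[S0], depth + 1, max_depth)
    if depth < max_depth and (y[S1] == 0).any():   # "incorrect 1's" -> right child
        node.right = build(X[S1], y[S1], depth + 1, max_depth)
    return node

def classify(node, x):
    """Follow left on a 0, right on a 1; a missing child makes this node's
    output the final classification."""
    out = int(x @ node.w + node.b >= 0)
    child = node.right if out == 1 else node.left
    return out if child is None else classify(child, x)
```

On a simple AND-of-two-bits training set, for instance, the class-means root misclassifies some instances as 1, a right child is built for them, and the resulting two-node tree classifies all four instances correctly.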
This toleration condition was set after some empirical observations, which indicated that given some number of similarly classified instances in a node, n, a certain percentage of erroneous classifications, E, would be acceptable (thus precluding further branching for that particular classification from the node). These values are as follows:

• if n ≤ 25 then E = 25%
• if n > 25 and n ≤ 100 then E = 12%
• else E = 6%

Initial testing was performed within this LTU tree architecture using a variety of methods for learning the linear discriminant at each node of the tree (Sahami 1993). Wishing to minimize the number of erroneous classifications made at each node in the tree, Back-Propagation appeared to be the most promising of these weight updating procedures. While this heuristic of minimizing errors at each node can occasionally produce larger than optimal trees,2 it generally produces trees of optimal or near-optimal size, and was shown to produce the smallest trees on a number of sample functions when compared with other weight updating procedures. Since we are only allowed to store one hyperplane at each node (and not an entire network, although this might be an interesting angle for further research), we apply the Back-Propagation algorithm to only one unit at a time. To make this unit a linear threshold unit, a threshold is set at 0.5 after training is completed (this threshold is not used during training). Thus the output of the unit trained with Back-Propagation is given by:

$$O_{LTU_n} = \begin{cases} 1 & \text{if } o_n \geq 0.5 \\ 0 & \text{otherwise} \end{cases} \qquad o_n = \frac{1}{1 + e^{-N_k}}, \quad N_k = \vec{w}_k \cdot \vec{x}_k + \theta_{k-1}$$

where $o_n$ is the actual real-valued output of the nth trained unit on any instance and $O_{LTU_n}$ is the output of our "linear threshold unit." $\theta$ represents the "bias" weight of the unit.

2An optimal tree would contain the minimum number of linear separators (nodes) necessary to successfully classify all instances in the training set, S.

The updating procedure used in training each node is:
$$\Delta\vec{w}_k = rate \cdot \vec{x}_k \cdot o_k \cdot (1 - o_k) \cdot (d - o_k), \quad \text{where } o_k = \frac{1}{1 + e^{-N_k}}, \quad N_k = \vec{w}_k \cdot \vec{x}_k + \theta_{k-1}, \quad \text{and } \vec{w}_{k+1} = \vec{w}_k + \Delta\vec{w}_k$$
$$\Delta\theta_k = rate \cdot o_k \cdot (1 - o_k) \cdot (d - o_k)$$

where w is the weight vector being updated and x is a given instance vector. We set rate = 1.0 and momentum = 0.5 in our experiments. There are many possible extensions to this LTU tree-building algorithm, including irrelevant attribute elimination (Brodley & Utgoff 1992), producing several hyperplanes at each node using different weight updating procedures and selecting the hyperplane which causes the fewest number of incorrect classifications, using Bayesian analysis to determine instance separations (Langley 1992), post-processing of the tree to reduce its size, etc. These modifications are beyond the scope of this paper, however, and generally are only fine tunings to the underlying learning architecture, which is not changed by them.

Creating Networks From LTU Trees

The trees which are produced by the LTU tree algorithm can be mechanically transformed into three-layer connectionist networks that implement the same functions. Given an LTU tree, T, with m nodes, we can construct an isomorphic network containing the m nodes of the tree in the first hidden layer (each fully connected to the set of inputs). The second hidden layer consists of n nodes (AND gates), where n is the number of possible distinct paths between the root of T and a leaf node (a node without two children). The output layer is merely an OR gate connected to all n nodes in the previous layer. The connections between the first and second hidden layers are constructed by traversing each possible path from the root to a leaf in the tree T, and at each node recording which branch was followed to get to it. Thus each node in the second hidden layer represents a single distinct path through T by being connected to those nodes in the first layer which correspond to the nodes that were traversed along the given path.
Since the nodes in the second hidden layer are merely AND gates, the inputs coming from the first hidden layer must first be inverted if a left branch was traversed in T at the node corresponding to a given input from the first hidden layer. Two examples are given below. As pointed out in (Brent 1990), it is more efficient to do classifications using the tree structure than the corresponding network, since the only computations which must be performed are those which lie on a single path from the root of the tree to a leaf. Conveniently, when we later examine how to incrementally train a network which corresponds to an LTU tree, we may then transform the trained network into a decision tree to attain this computational benefit during classification.

Figure 1    Figure 2

Figure 1 shows a two-node tree produced by the LTU tree algorithm, while Figure 2 shows the corresponding network after performing the transformation described above. Nodes 1 and 2 in Figure 1 correspond directly to nodes 1 and 2 in Figure 2. Node 3 simply has the output of node 1 as its input (since there is a path of length 1 in the tree from the root to node 1, which is considered a leaf). Node 4 is a conjunct of the inverted output of node 1 (since we must follow the left branch from node 1 to reach node 2 in the tree) and the output of node 2. Node 5 is simply an OR gate.

Figure 3    Figure 4

Figure 3 shows a more complex tree produced by the LTU tree algorithm, and Figure 4 represents the corresponding network. Nodes 1, 2, 3, and 4 in Figure 3 correspond directly to the same nodes in Figure 4. In Figure 4, node 5 represents the path 1-2-4 in the tree, with the inverted output of node 1, inverted output of node 2, and output of node 4 as inputs. Node 6 represents the path 1-2 (as node 2 in the tree is also considered a leaf), with the inverted output of node 1 and the output of node 2 as inputs. Node 7 corresponds to the path 1-3 and has the outputs of nodes 1 and 3 as inputs.
Again, node 8 is simply a disjunction of the outputs of nodes 5, 6, and 7.

Madaline-Style Learning Algorithm

The updating strategy in this Madaline-style architecture is based upon modifying the weight vectors in the first hidden layer of nodes by appropriately strengthening and weakening them based on incorrect predictions by the network. We also make use of knowing the structure of the LTU tree, T, which corresponds to the network we are training. When an instance is incorrectly classified as a 0, we know that no nodes in the second hidden layer corresponding to a leaf in T fired. Thus we look for the node corresponding to a leaf node in T which is closest to threshold and strengthen it. We also examine any nodes corresponding to non-leaf nodes in T that we know exist along the path from the root of T to the given leaf node closest to threshold. If these nodes were over threshold but the given leaf is down their left child in T, then the node in the network corresponding to the particular non-leaf node in T is weakened. Similarly, if the node corresponding to a non-leaf node in T was under threshold, but the leaf node is on a path down its right child in T, then the node in the network corresponding to the non-leaf node in T is strengthened. When an instance is misclassified as a 1, we simply find the node in the second hidden layer of the network which misfired (there can only be one) and weaken all nodes which are inputs to it and also correspond to leaf nodes in T. In the case of the network in Figure 2, this translates into the following updating procedure:

On a misclassified 0, determine if node 1 or node 2 is closer to threshold:
• If node 1 is closer to threshold, then strengthen node 1, else strengthen node 2.

On a misclassified 1, only node 3 or 4 (but not both) misfired in this case:
• If the output of node 3 is 1, then weaken node 1, else weaken node 2.
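The tree-to-network transformation described earlier can be sketched as follows. This is a minimal sketch with hypothetical structures: each second-layer AND unit is represented as the list of (node, inverted) connections along one root-to-leaf path (inverted where a left branch was taken), and the output layer is the OR over those units.

```python
import numpy as np

class Node:
    """Hypothetical LTU-tree node: fires (outputs 1) iff w.x + b >= 0."""
    def __init__(self, w, b, left=None, right=None):
        self.w, self.b = np.asarray(w, dtype=float), b
        self.left, self.right = left, right
    def fires(self, x):
        return float(self.w @ x) + self.b >= 0

def and_units(node, prefix=()):
    """One AND unit per root-to-leaf path that ends in a 1-classification.
    Each unit is a tuple of (node, inverted) input connections."""
    units = []
    if node.right is None:                      # a 1-output here is final
        units.append(prefix + ((node, False),))
    else:                                       # a 1-output descends right
        units += and_units(node.right, prefix + ((node, False),))
    if node.left is not None:                   # a 0-output descends left
        units += and_units(node.left, prefix + ((node, True),))
    return units

def network_output(root, x):
    """OR gate over the AND units; an AND unit fires when every node along
    its path produced the branch the path actually took."""
    return int(any(all(n.fires(x) != inv for n, inv in unit)
                   for unit in and_units(root)))
```

On a tree shaped like Figure 1 (a root with only a left child), `and_units` yields exactly the two AND units the text describes: one taking the root's output directly, and one taking the inverted root output conjoined with the child's output.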
How nodes are strengthened and weakened is based upon what learning method was being used on the Madaline-style networks. Both the classical fixed increment (referred to simply as Madaline below) and Littlestone's Winnow algorithm (referred to as Mada-winnow) were employed in our tests as follows:

Fixed Increment (Madaline):
  Strengthen: $\vec{w}_{k+1} = \vec{w}_k + \vec{x}$
  Weaken: $\vec{w}_{k+1} = \vec{w}_k - \vec{x}$

Winnow (Mada-winnow):
  Strengthen: $w_{i,k+1} = \alpha^{x_i} \cdot w_{i,k}$
  Weaken: $w_{i,k+1} = \beta^{x_i} \cdot w_{i,k}$

where w is the weight vector (wi is the ith component of w) at the node being modified and x is the instance vector which was misclassified. Note that α = 2.0 and β = 0.5 (Winnow also uses a fixed threshold, which was set to 4.0 in our initial experiments).

Experimental Results

In testing the LTU tree algorithm and the corresponding network for their ability to learn, a non-linearly separable 5-bit boolean function was used. This function, effectively the disjunction of two r-of-k threshold functions, is not linearly separable, but can be optimally learned using two hyperplanes to separate the instance space. Thus, in testing our various learning methods on this function, we compare the LTU tree algorithm against training networks configured similarly to Figure 2 (as this is the optimal size network to learn the given function). In training the networks, we compare standard Back-Propagation applied to the entire network (using preset fixed weights in the second hidden and output layers to simulate the appropriate AND and OR gates) against our novel Madaline-style learning method (discussed above). Note that our learning procedure is effectively only learning the separating hyperplanes in the first hidden layer of the network (corresponding to learning the nodes of an LTU tree).
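The two update rules above can be sketched as follows. This is a minimal sketch; the fixed-increment weaken rule is assumed here to be the standard w − x form, and the Winnow rules multiply each weight by α or β wherever the corresponding input bit is 1.

```python
import numpy as np

def strengthen_fixed(w, x):
    return w + x             # w_{k+1} = w_k + x

def weaken_fixed(w, x):
    return w - x             # w_{k+1} = w_k - x (standard form, assumed)

def strengthen_winnow(w, x, alpha=2.0):
    return w * alpha ** x    # multiply w_i by alpha where x_i = 1

def weaken_winnow(w, x, beta=0.5):
    return w * beta ** x     # multiply w_i by beta where x_i = 1
```

Note the difference in character: the fixed-increment rule changes weights additively on every attribute of the misclassified instance, while Winnow's multiplicative updates let the weights of irrelevant attributes decay quickly, which is what makes it attractive when many irrelevant attributes abound.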
On a technical note, the instance vectors presented to both the LTU tree and Back-Propagation applied to an entire network include the original boolean vector (comprised of 1's and 0's) with the complements of the original vector to create a "double length" instance vector (as preliminary testing showed that the use of complements helped improve learning performance with these algorithms). In the Madaline-style tests, the instance vectors presented when using fixed increment updating were composed of 1's and -1's without the addition of complements, whereas when using Winnow the instance vectors were similar to those with the LTU tree (complementary attributes were added). The number of instances presented for training, as well as the number of dimensions in the input vector, were varied. Note that only the first 5 bits of the instance vector are relevant to its proper classification and the added bits are simply random, irrelevant attributes. The dimensions given in the graphs below measure the size of the original instance vector (not including complementary attributes). The graphs below represent 5 test runs on each algorithm in each case. Testing is done on an independent, randomly generated set of instances, numbering the same as the training set. The "% error (average)" refers to the percentage of errors made during testing by each algorithm over the 5 test runs. The "% error (best)" refers to the smallest percentage of errors made during testing over the 5 test runs. We see that, in the average case (Figure 5), when trained using 1000 instances (which are each seen only once), the Madaline network (using fixed-increment updating) outperforms all other algorithms as the number of irrelevant attributes is increased.
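The instance-generation scheme described above can be sketched as follows. The paper's exact 5-bit target function is not given in this copy, so `target` below is a hypothetical stand-in of the stated form (a disjunction of two 2-of-3 threshold functions); the padding with random irrelevant bits and the appended complements follow the description in the text.

```python
import random

def target(bits):
    # Hypothetical r-of-k disjunction: 2-of-{b0,b1,b2} OR 2-of-{b2,b3,b4}.
    # The paper's actual 5-bit function is of this general form but unspecified here.
    return int(sum(bits[0:3]) >= 2 or sum(bits[2:5]) >= 2)

def make_instance(dims, rng=random):
    """Random boolean instance of `dims` bits; only the first 5 are relevant.
    Returns the "double length" vector (bits plus complements) and its label."""
    bits = [rng.randint(0, 1) for _ in range(dims)]
    label = target(bits[:5])
    with_complements = bits + [1 - b for b in bits]
    return with_complements, label
```

For the fixed-increment Madaline runs one would instead map each bit b to 2b − 1 (giving 1's and -1's) and skip the complements, per the description above.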
The LTU tree (called BP tree here) performs without errors up to 15 dimensions (during which time it was consistently producing optimal trees of 2 nodes) and quickly begins to degenerate in performance as the trees it produces get larger, due to poor separating hyperplanes being produced at each node. Not surprisingly, it is at this same point that Back-Propagation over an entire network also begins to degenerate quickly, leading us to realize that the network is getting too small to properly deal with irrelevant attributes. Mada-winnow also performs very erratically, due primarily to seeing too few instance vectors to settle into a good "solution state." The best case analysis (Figure 6) indicates a simple linear increase in the number of errors made by Madaline (caused by a linear increase in the sum of weights from irrelevant attributes) as opposed to an erratic increase indicating that the boolean function was not learned. Similarly, Mada-winnow seems to be capable of learning the function up to 35 dimensions and quickly degenerates, indicating that learning is not effectively taking place, as opposed to occasional misclassifications caused by added irrelevant attribute weights. We find the BP network still unable to learn beyond 15 dimensions, while the BP tree is still effective up to 30 dimensions. When we examine the results of using 3000 training instances (each of which is seen once), the effectiveness of the Madaline-style architecture becomes much more clear. In the average case (Figure 7) we still find the standard BP network degenerating after 15 dimensions. However, we see extremely low error rates in Madaline all the way through, indicating that not only has the target function been learned, but the effect of irrelevant attribute weights has also been minimized. Moreover, we find that Mada-winnow is successful in learning the target function with instances up to 35 dimensions in length before its predictive accuracy begins to fall.
Similarly, the BP tree is effective for instances up to 40 dimensions before, once again, tree sizes grow too large as the linear separators at each node provide poorer splits. In the best case (Figure 8) we see the most striking results, as Madaline still continues a very low error rate, and Mada-winnow has 0% errors over the entire range of dimensions tested! This would indicate that by training a number of such Mada-winnow networks and using cross-validation techniques to determine which has the highest predictive accuracy, we can learn non-linearly separable boolean functions with an extremely high degree of accuracy even in the presence of many irrelevant attributes. This of course does require some knowledge as to what network size would provide the best results, but initially running the LTU tree algorithm on our data set could provide us with good ballpark approximations for this.

Figure 5    Figure 6
Trained using 3000 randomly generated instances
Figure 7    Figure 8

Non-Optimal Networks

Having seen the predictive accuracy of the Madaline-style networks in learning when the optimal network size3 was known, it is important to get an idea for the accuracy of such networks when they are non-optimal.

3The notion of optimal network size stems from the transformation of an optimal LTU tree.

In examining the effects of using a network that is larger than necessary, the network in Figure 4 was used to learn the same 5-bit non-linearly separable problem. The updating procedure for this network is described below:

On a misclassified 0, determine if node 2, 3 or 4 is closest to threshold:
• If node 2 is closest to threshold, then strengthen node 2, and if node 1 is over threshold then weaken node 1.
• If node 3 is closest to threshold, then strengthen node 3, and if node 1 is not over threshold then strengthen node 1.
• If node 4 is closest to threshold, then strengthen node 4, and if node 1 is over threshold then weaken node 1.
On a misclassified 1, determine if node 5, 6 or 7 misfired:
• If the output of node 5 is 1, then weaken node 4.
• If the output of node 6 is 1, then weaken node 2.
• If the output of node 7 is 1, then weaken node 3.

Now we compare the previous results of Madaline and Mada-winnow using the smaller network, denoted (S), with the larger network, denoted (L). Again looking at the average of 5 test runs on 1000 training instances (Figure 9), we see that the performance of both Madaline and Mada-winnow is worse when learning using a larger network (as we would expect, since there is greater possibility for confusion among which nodes to update).

Figure 9 (trained using 1000 randomly generated instances; legend: Mada-winnow (L), Mada-winnow (S), Madaline (L), Madaline (S))

This is also seen in the best case graph (Figure 10), where we still see the erratic behavior of learning using the Mada-winnow (L) algorithm, which cannot properly learn the target function even with only a few irrelevant dimensions. The Madaline (L) algorithm still holds some promise, as it maintains a relatively low error rate until about the 30 dimension mark before it too begins to quickly degenerate in its predictive ability. Again the most striking differences are seen when examining the graphs of learning runs using 3000 training instances. Noting that the "% error" scale on Figures 11 and 12 is much less than the previous figures (to make the graph more readable), we see that in the average case Mada-winnow (L)'s behavior is still erratic (caused by the way the Winnow algorithm greatly modifies weights between each update, leading to instability in the resultant weight vector when training ceases), but the error rate stays below 10%.

Figure 11    Figure 12 (trained using 3000 randomly generated instances)
Moreover, Madaline (L) only shows a small linear decrease in its predictive ability over the entire graph, reflecting again that the target function was effectively learned and misclassifications are arising from the cumulative sum of small irrelevant attribute weights. Finally, Figure 12 shows the most impressive results.

Figure 10 (trained using 1000 randomly generated instances)

First, Madaline (L) has only a slightly higher error rate than Madaline (S). And more impressively, the Mada-winnow (L) algorithm is able to maintain 0% error over the entire range of irrelevant attributes, reflecting that network size is not entirely crucial for effectively learning within this paradigm. An examination of the weights in the larger network indicated that, in fact, two nodes in the first hidden layer contained the appropriate hyperplanes required to learn the target function and the other two nodes had somewhat random but essentially "unused" weights in terms of instance classification. It is important to note that the fixed threshold used with the Winnow algorithm was dependent on the number of irrelevant attributes in the instance vectors presented. This reflects a problem inherent in the Winnow algorithm (in which threshold choice can have a large impact upon learning) and is not a shortcoming of the Madaline-style architecture.

Future Work

There is still a great deal of work that needs to be done in examining and extending both the LTU tree and the Madaline-style learning algorithms. In terms of the LTU tree, new methods for finding better separating hyperplanes as well as the incorporation of post-learning pruning techniques would be very helpful in determining proper network size, both for Madaline-style and standard neural networks. As for the Madaline-style networks, clearly more work needs to be done in examining larger networks and learning more complex functions.
Another interesting problem arises in looking at methods to prune the network during training to produce better classifications. Also, theoretical measures are needed for the number of training instances to present for adequate learning. Acknowledgments The author is grateful to Prof. Nils Nilsson, without whose ideas, guidance, help and support this work would never have been done. Additional thanks go to Prof. Nilsson for reading and commenting on an earlier draft of this paper. Dr. Pat Langley also provided a sounding board for ideas for extending research dealing with LTU trees. References Brent, R. P. 1990. Fast training algorithms for multi-layer neural nets. Numerical Analysis Project Manuscript NA-90-03, Dept. of Computer Science, Stanford Univ. Brodley, C. E., and Utgoff, P. E. 1992. Multivariate Versus Univariate Decision Trees. COINS Technical Report 92-8, Dept. of Computer Science, Univ. of Mass. Duda, R. O., and Hart, P. E. 1973. Pattern Classification and Scene Analysis. New York: John Wiley & Sons. Langley, P. 1992. Induction of Recursive Bayesian Classifiers. Forthcoming. Littlestone, N. 1988. Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning 2:285-318. Littlestone, N. 1991. Redundant noisy attributes, attribute errors, and linear-threshold learning using Winnow. In Proceedings of the Fourth Annual Workshop on Computational Learning Theory, 147-156. San Mateo, CA: Morgan Kaufmann Publishers, Inc. Nilsson, N. J. 1965. Learning Machines. New York: McGraw-Hill. Quinlan, J. R. 1986. Induction of decision trees. Machine Learning 1:81-106. Ridgway, W. C. 1962. An Adaptive Logic System with Generalizing Properties. Stanford Electronics Laboratories Technical Report 1556-1, prepared under Air Force Contract AF 33(616)-7726, Stanford Univ. Rumelhart, D. E.; Hinton, G. E.; and Williams, R. J. 1986. Learning internal representations by error propagation. Parallel Distributed Processing, Vol. 1, eds. D. E.
Rumelhart and J. L. McClelland, 318-62. Cambridge, MA: MIT Press. Rumelhart, D. E., and McClelland, J. L., eds. 1986. Parallel Distributed Processing, Vol. 1. Cambridge, MA: MIT Press. Sahami, M. 1993. An Experimental Study of Learning Non-Linearly Separable Boolean Functions With Trees of Linear Threshold Units. Forthcoming. Utgoff, P. E. 1988. Perceptron Trees: A Case Study in Hybrid Concept Representation. In AAAI-88: Proceedings of the Seventh National Conference on Artificial Intelligence, 601-6. San Mateo, CA: Morgan Kaufmann. Widrow, B., and Winter, R. G. 1988. Neural Nets for Adaptive Filtering and Adaptive Pattern Recognition. IEEE Computer, March:25-39. Winston, P. 1992. Artificial Intelligence, third edition. Reading, MA: Addison-Wesley.
Generating Argumentative Judgment Determiners Michael Elhadad Ben Gurion University of the Negev Dept of Mathematics and Computer Science Beer Sheva, 84105, Israel elhadad@bengus.bgu.ac.il * Abstract This paper presents a procedure to generate judgment determiners, e.g., many, few. Although such determiners carry very little objective information, they are extensively used in everyday language. The paper presents a precise characterization of a class of such determiners using three semantic tests. A conceptual representation for sets is then derived from this characterization which can serve as an input to a generator capable of producing judgment determiners. In a second part, a set of syntactic features controlling the realization of complex determiner sequences is presented. The mapping from the conceptual input to this set of syntactic features is then presented. The presented procedure relies on a description of the speaker's argumentative intent to control this mapping and to select appropriate judgment determiners. Introduction There are cases when answering many is a sign of ignorance: Teacher: How many neutrons are there in an atom of Uranium? Child: many... In other cases, though, uttering a precise number is of no help to the hearer: Q: How difficult is Topology 101? A1: It has six assignments. A2: It requires many assignments. In A1, the precise number of assignments in the class can be seen as an awful lot or a pretty average workload. In all cases, the precise number does not satisfy the communicative need expressed by the question, and answer A2, with a determiner like many, is more felicitous. This paper addresses the issue of producing such judgment determiners (JDs) in a text generation system, focusing on a class I call argumentative judgment determiners.
*This paper reports on work pursued while the author was at Columbia University, Dept of Computer Science.
344 Elhadad
This problem has been mostly ignored in previous work in generation for two reasons: first, most of the previous work on determiner generation has focused on the difficult definite/indefinite decision; second, most existing generation systems, except for Dale's EPICURE (Dale 1988), do not focus on the issue of non-singular NPs. Consequently, the generation of JDs, although it fulfills an important pragmatic function, has remained largely unexplored. The determiner generation procedure presented here is implemented as part of ADVISOR II, a generation system which provides advice to university students preparing their course schedule (Elhadad 1993). In this domain, an analysis of a corpus of 40,000 words containing transcripts of recordings of advising sessions with human academic advisors shows the following distribution of determiners: In this table, judgment dets include (many, few, all, no, a lot, a large number of, lots of). This distribution indicates that, at least in this domain, whenever a quantity must be referred to, JDs are used more often than exact determiners, and highlights the need to cover JDs in a generation system like ADVISOR II. The paper starts by defining JDs and provides a semantic characterization of JDs. I derive from this characterization a set of requirements on the form of the input representation that must be sent to a generator to allow it to produce JDs. I then discuss the syntax of judgment determiners and explain how the generator maps the input conceptual representation of sets to a set of syntactic features controlling the selection of JDs. From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.
Semantic Characterization of Judgment Determiners Observing that 242 and many do not satisfy the same pragmatic function, one gets the intuition that many is a member of a "different" class of determiners - ones that do not express only objective information. This section uses three semantic properties defined in (Barwise & Cooper 1981) and (Keenan & Stavi 1986) in order to precisely identify the class of judgment determiners (JDs) and derive constraints on the form of the input required by a generator to produce JDs. Non-Extensionality and Argumentation The first property of JDs is that they are non-extensional, in the sense defined in (Keenan & Stavi 1986, p.257): To say that a det d is extensional is to say, for example, that whenever the doctors and the lawyers are the same individuals then d doctors and d lawyers have the same properties, e.g., d doctors attended the meeting necessarily has the same truth value as d lawyers attended the meeting. The following example shows that many, for example, is not extensional: Imagine that in the past the annual doctors meeting has been attended by tens of thousands of doctors, and only two or three lawyers. But, during the course of the year, and unbeknownst to everyone, all the doctors get law degrees and all the lawyers get medical degrees (so that doctors and lawyers are now the same) and at this year's meeting only 500 doctors/lawyers show up. Reasonably then (a) is true and (b) is false: (a) Many lawyers attended the meeting this year. (b) Many doctors attended the meeting this year. Thus many (few) cannot be treated extensionally. (Keenan & Stavi 1986, pp.257-8) Keenan & Stavi, therefore, propose to consider an expression such as many Xs as "simply indeterminate in truth value." In other words, using a non-extensional determiner such as many Xs does not say much about the number of Xs, but instead, expresses a decision by the speaker to highlight the number of Xs as significant.
The input to a generator must therefore record this decision if any non-extensional determiner is to be produced. The work presented here uses the notion of argumentative intent to account for this speaker's decision. An argumentative intent is the goal to convince a hearer of a certain conclusion. Following Anscombre & Ducrot (1983), it is hypothesized that simple evaluations, of the form (X is high/low on scale S), and simple argumentative rules, of the form (the higher X is on P, the higher Y is on Q), are sufficient to account for many linguistic phenomena related to argumentation. In previous work, I have discussed the impact of the speaker's argumentative intent on different generation tasks: content selection and organization (Elhadad 1992), connective selection (Elhadad & McKeown 1990), adjective selection (Elhadad 1991). The same general mechanism can be applied to the selection of JDs. In this case, I assume that the generator's input includes argumentative evaluations of the form (X is high/low on S) where X is a finite set of discrete individuals and S is the scale of cardinality. When this is the case, a feature degree is set in the description of the set X, which records the speaker's argumentative intent regarding the number of elements in the set X. Monotonicity: the orientation feature Barwise & Cooper define the notion of monotonicity using the following linguistic test (Barwise & Cooper 1981, pp.184-191): consider two verb-phrases VP1 and VP2, such that the denotation of VP1 is a subset of the denotation of VP2, that is, in logical terms, VP1(x) implies VP2(x). Then, by checking whether the following seem logically valid, one can determine if the determiners are monotonic: If NP VP1, then NP VP2. (NP is monotonic increasing.) If NP VP2, then NP VP1. (NP is monotonic decreasing.)
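This monotonicity test can be checked mechanically on finite models. In the sketch below (an illustration, not part of the paper's system), a determiner is modeled as a function from a restrictor set A and a verb-phrase denotation B to a truth value, and many is given a proportional reading |A ∩ B| > |A|/2 purely for illustration:

```python
from itertools import chain, combinations

def subsets(universe):
    """All subsets of a finite universe, as frozensets."""
    xs = list(universe)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def monotonic(det, universe, direction):
    """Barwise & Cooper's test over all finite models on `universe`:
    for B1 a subset of B2 (VP1 implies VP2), 'up' requires that
    det(A, B1) imply det(A, B2); 'down' requires the converse."""
    for A in subsets(universe):
        for B1 in subsets(universe):
            for B2 in subsets(universe):
                if B1 <= B2:
                    if direction == "up" and det(A, B1) and not det(A, B2):
                        return False
                    if direction == "down" and det(A, B2) and not det(A, B1):
                        return False
    return True

every = lambda A, B: A <= B
some = lambda A, B: bool(A & B)
no = lambda A, B: not (A & B)
exactly_two = lambda A, B: len(A & B) == 2
many = lambda A, B: len(A & B) > len(A) / 2  # one proportional reading
```

On a three-element universe, every, some, and the proportional many come out monotonic increasing, no monotonic decreasing, and exactly two neither, matching the classification in the text.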
Barwise & Cooper give the following examples, taking VP1 to be entered the race early and VP2 to be entered the race (Barwise & Cooper 1981, p.185): If {some Republican / every linguist / John / most farmers / many men} entered the race early, then {some Republican / every linguist / John / most farmers / many men} entered the race. All these implications are valid, while the reverse implications do not hold. Similarly, the following implications indicate that the determiners no, few and neither are monotonic decreasing: If {no plumber / few linguists / neither Democrat} entered the race, then {no plumber / few linguists / neither Democrat} entered the race early.
Natural Language Generation 345
Note that the determiners exactly two and at most three are not monotonic at all, since there is no implicative relation between exactly three men entered the race early and exactly three men entered the race. All argumentative JDs must be monotonic. The feature orientation is required in the input specification of a set to indicate the orientation of an argumentative evaluation. Its value can be +, - or none, and it corresponds exactly to the distinction between monotonic increasing, decreasing, and non-monotonic quantifiers. Note that orientation is distinct from degree, because different degrees can be expressed for the same orientation: AI has a little programming: orient + degree - AI has a lot of programming: orient + degree + AI has little programming: orient - degree - AI has almost no programming: orient - degree + The Intersection Condition Following (Barwise & Cooper 1981, Sect. 1.3), NPs as a whole are viewed as the expression of generalized quantifiers, as opposed to simply determiners like all or some. Consequently, the input to the determiner generation procedure is a complete set specification. Sets are characterized in intension by a domain and the properties that must be satisfied by all elements.
These properties are in general mapped to modifiers of the NP realizing the set using a procedure similar to that discussed in (Elhadad 1993). This section presents the intersection condition, defined in (Barwise & Cooper 1981, p.190), and explains why a distinction between two types of modifiers must be enforced to allow for the generation of JDs. The linguistic test corresponding to the formal definition of the intersection condition is the following: let P1 and P2 be two properties; then if a determiner D satisfies the intersection condition, the sentences there are D P1 P2 N and D P1 N are P2 are semantically equivalent. For example: There are exactly 3 interesting AI topics. Exactly 3 interesting topics are in AI. Exactly 3 AI topics are interesting. These three forms are equivalent, indicating that exactly n satisfies the intersection condition. In contrast, consider: (1) There are many interesting topics which are in AI. (2) There are many AI topics which are interesting. These NPs are not equivalent, as shown, for example, by considering the following situation: a person has interest in 100 topics; AI covers 10 topics; the intersection between the interesting topics and the AI topics contains 7 elements. Then (1) is probably not valid (7 topics out of 100 is not many) while (2) is valid (7 out of 10 is many). Note that the "classical" quantifiers, corresponding to the mathematical ∃ and ∀, both satisfy the intersection condition, but JDs, e.g., many, few, most, do not satisfy it. Consider now the fact that in both (1) and (2) the NPs with the many determiner denote the same set of individuals (the 7 topics of the intersection). The validity of the sentences, however, is different when the scope of the many changes from one modifier to the other.
This indicates again that many is not extensional, but also, that the input conceptual description of sets must attribute a different status to the two modifiers if modifier generation is to interact properly with determiner selection and prevent the generation of invalid sentences like (1). I distinguish between reference and intension modifiers to account for this difference in status. For example, consider the set defined by: S = {x ∈ TOPICS | Interest(x,student) ∧ Area(x,AI)} Different perspectives can be held on this set: when the interest property is the intension and the area property is the reference, the definition can be written as follows: S1 = {x ∈ AI-TOPICS | Interest(x,student)} And, under normal circumstances, this representation leads to the English realization: Most AI topics are interesting. If in contrast the perspective is switched, and interest becomes the reference and area the intension, then the definition and realization become: S2 = {x ∈ INTERESTING-TOPICS | Area(x,AI)} Few of the topics that interest you are in AI. In this example, the same observation of a set of topics satisfying two properties can lead to the generation of two contradictory argumentative evaluations. This indicates that, because JDs do not satisfy the intersection condition, the structuring of properties in a set specification between reference and intension must be present in the input to the generator (as in S1 and S2), and that a neutral representation for sets such as S would not be appropriate. In summary, the following three properties characterize argumentative JDs: (1) they are non-extensional; (2) they are monotonic; (3) they do not satisfy the intersection condition. Consequently, the conceptual description of sets sent as input to a generator must contain the features degree and orientation and distinguish between reference and intension modifiers if the generator is to be able to produce JDs.
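The intersection condition can also be tested exhaustively on small models, reading it as: the determiner's truth value is determined by the intersection of its two argument sets alone. A self-contained sketch (an illustration, not from the paper; many is again given a proportional reading purely for the example):

```python
from itertools import chain, combinations

def subsets(universe):
    """All subsets of a finite universe, as frozensets."""
    xs = list(universe)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def satisfies_intersection(det, universe):
    """True iff det(A, B) depends only on A & B: whenever two argument
    pairs share the same intersection, det must agree on them."""
    subs = subsets(universe)
    for A in subs:
        for B in subs:
            for A2 in subs:
                for B2 in subs:
                    if A & B == A2 & B2 and det(A, B) != det(A2, B2):
                        return False
    return True

exactly_n = lambda n: (lambda A, B: len(A & B) == n)
many = lambda A, B: len(A & B) > len(A) / 2  # proportional reading
```

exactly n passes because |A ∩ B| is all it consults; the proportional many fails because it also consults |A|, mirroring the 7-of-100 versus 7-of-10 contrast above.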
Input/Output The overall architecture of the generation part of ADVISOR II is the following: the input is a conceptual representation encoded in a KL-ONE-like network enriched with pragmatic annotations describing the speaker's intentions and assumptions. This conceptual network is passed to a lexical chooser which selects open-class words and performs phrase planning to combine them into phrase structures such as NPs and clauses. These structures are finally passed to the syntactic realization grammar SURGE for closed-class word selection, agreements and linearization. In this paper, I only describe the determiner selection subprocess of the lexical chooser. The input to the determiner generation procedure, therefore, is a set specification. The output is a set of syntactic features appearing at the NP level and controlling the selection of the determiner sequence in the SURGE grammar. Conceptual Representation for Generalized Quantifiers ADVISOR II is implemented in FUF, an extension of the functional unification formalism of Kay (1979) described in (Elhadad 1993, Chap. 3 and 4). This section describes the conceptual representation of sets as a FUF functional description input to the generator. The input specification contains objects of four types: individuals, sets, relations and argumentative evaluations. I briefly present here the representation of sets and evaluations. Sets are described by the following features (all are optional except cat and index): ((cat set) (index <unique-id>) (kind <prototype>) (cardinality <n>) (extension <list-of-individuals>) (intension <a-relation>) (reference <a-set>)) kind is used for sets of objects of the same type. extension is the explicit list of the set elements.
The logical definition of a set described by intension and reference is the following: S = {x ∈ Reference | Intension(x)}, where intension is a relation and reference, recursively, a set; the distinction between intension and reference is justified above. Argumentative evaluations encode the speaker's argumentative intent: ((cat evaluation) (evaluated <path-to-set-or-individual>) (scale <a-scale>) (orientation <+ or ->)) This indicates that the speaker judges the element pointed to by evaluated as high (or low) on scale. An input for the following set is shown below with the argumentative evaluation that the set is high on the scale of cardinality: S1 = {x ∈ AI-TOPICS | Interest(x,student)}. Intuitively, this set contains the 7 topics that are of interest to the user among the 10 topics which are covered in AI. ((topics ((cat set) (kind ((cat topic))) (cardinality 7) (reference ((cat set) (kind ((cat topic))) (cardinality 10) (intension ((cat class-relation) (name area) (1 (^ argument)) (2 ((cat field) (name AI))))))) (intension ((cat user-relation) (name interest) (1 (^ argument)) (2 ((cat student))))))) (argumentation ((cat evaluation) (evaluated {topics}) (scale ((name cardinality))) (orientation +)))) Output: Syntax of the Determiner Sequence The determiner sequence is a subconstituent of NPs. It is also in itself a complex constituent. It has the specificity that it is mainly a closed system - i.e., the lexical elements are part of a small set of words which are determined completely by a small set of syntactic features. When implementing the SURGE realization grammar, the issue was to identify a minimal set of features accounting for the variety of determiner sequences observable in English. The syntactic description implemented in SURGE is an augmented version of that presented in (Halliday 1985, pp.159-176), with additions derived from observations in (Quirk et al. 1972, pp.136-165).
A set of 24 features controlling the realization of the determiner sequence was thus identified, which is presented in detail in (Elhadad 1993, Sect. 5.4). I only present here a brief overview of the grammar for determiners, and focus on the features relevant to the realization of JDs. The structure of the determiner sequence is shown in Fig. 1. Pre-determiners can be one of the following elements: all, both or half, multipliers (twice, three times) or fractions (one fourth). Complex co-occurrence restrictions exist between the different predeterminers and different classes of nouns (mass, count nouns denoting a number or an amount) and between predeterminers, cardinals and quantifiers.
Natural Language Generation 347
[Figure 1: Syntactic structure of the NP - the determiner sequence slots pre-det (of) det deictic2 ord card quant followed by the NP head, as in "all of the famous first ten commandments", "half of my many properties", "twice as much work".]
There are also special cases of noun classes that take zero articles, including seasons, institutions, transport means, illnesses (Quirk et al. 1972, pp.156-159). The implementation of such co-occurrence restrictions explains the complexity of SURGE's determiner grammar. To control the selection of the various elements of the determiner sequence, I make use of Halliday's distinction between three functions of the determiner sequence: 1. Deictic: to identify whether a subset of the thing is denoted, and if yes, which subset. The relevant decisions are depicted below, in Fig. 2, in the form of a systemic network, where curly braces indicate choice points between alternatives and square brackets indicate simultaneous decisions which must be taken. The top-level distinction is between specific and non-specific determination. A specific deictic denotes a known, well-identified subset of the thing. A non-specific deictic denotes a subset identified by quantity.
For specific deictics, the subset can be identified by different means: deixis and distance (near or far from the speaker - this vs. that), possession (my, John's) or not at all (the). Non-specific deictics are either total (all, no, both, each, every, neither) or partial. Partial deictics come in two sorts: selective (one, either, any and some as in some people) and non-selective (a, some as in some cheese). 2. Deictic2: to specify the subset of the thing by "referring to its fame, familiarity or its status in the text" (Halliday 1985, p.162). The deictic2 element is an adjective such as same, usual, given. Such adjectives are part of the determiner sequence because they systematically occur before the cardinal element of the determiner, in contrast to any other describing adjective, which must occur after the cardinal. 3. Numerative: to specify the quantity or the place of the thing. The numerative specification can be either quantitative (expressing a quantity, three) or ordinative (expressing a relative position, third).
348 Elhadad
[Figure 2: The deictic network - the systemic network of specific (demonstrative / possessive / determinative / interrogative) vs. non-specific (total / partial, selective / non-selective) choices described above.]
[Figure 3: The numerative network - quantitative (three, a lot) vs. ordinative (third, next); exact (three, third) vs. inexact, the latter covering indefinite (a few, many), fuzzy (about three), range (between 6 and 8), comparative (more, less), multiplier (twice), fraction (half) and context-dependent (many, enough) expressions.]
In both cases, the expression can be either exact (one, two..., first, second...) or inexact (a lot, the next). The source of the inexactness can be an approximation device (about three, roughly third, approximately ten) or a range expression (between six and ten).
Alternatively, it can be a context-dependent expression like the next, many, few, more, or an evaluative expression like enough, too many, too much. Figure 3 summarizes the relevant decisions. The features controlling the selection of JDs are located in the non-specific region of the deictic network and in the inexact region of the numerative network. The subset of SURGE features which trigger the selection of argumentative JDs is total, orientation, superlative and degree. Mapping from Conceptual to Syntactic Features When mapping from a conceptual description to the features controlling the determiner selection, the first decision is whether the speaker's argumentative intent is to be realized through the use of a JD or with other linguistic devices (such as connotative verbs, scalar adjectives or connectives). This decision can interact with most generation decisions and is discussed at length in (Elhadad 1993). When argumentation is to be expressed in a determiner site, the following mapping rules are applied:
- Total: when the set is the object of a positive evaluation and its cardinality is known and equal to that of the reference set, total is set to +. If the evaluation is negative and the cardinality is known to be 0, total is set to -. In all other cases, total is set to none.
- Orientation: when the set is the object of an argumentative evaluation, orientation records whether the evaluation is high or low. Otherwise, it is set to none.
- Superlative: set to yes when the reference set is given, its cardinality is known, the cardinality of the set is larger than half that of the reference set, and the set is the object of a positive argumentative evaluation.
The general heuristic behind these rules is to use the pragmatically strongest determiner possible to realize the speaker's argumentative intent. For example, if all AI topics are interesting can be produced, it will be preferred to some AI topics.
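The three mapping rules can be read as a small decision function. A sketch, in a hypothetical Python rendering (the actual system computes these features by FUF unification; only the feature names and conditions come from the text):

```python
def jd_features(card, ref_card, orientation):
    """Map a set description to the determiner features of the text.
    card: cardinality of the set (None if unknown);
    ref_card: cardinality of the reference set (None if unknown);
    orientation: '+', '-', or None (the argumentative evaluation)."""
    total = "none"
    if orientation == "+" and card is not None and card == ref_card:
        total = "+"                      # positive evaluation, full coverage
    elif orientation == "-" and card == 0:
        total = "-"                      # negative evaluation, empty set
    superlative = bool(orientation == "+" and card is not None
                       and ref_card is not None and card > ref_card / 2)
    return {"total": total,
            "orientation": orientation if orientation else "none",
            "superlative": superlative}
```

For the running example (7 of 10 AI topics, with a positive evaluation of cardinality) this yields orientation +, superlative yes and total none, licensing a determiner like most.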
For degree, the determination of a value is more difficult. Degree determines the selection among a few, some, many, a (large, great, incredible...) number if orientation is +, and among few, a (small, tiny, ridiculous...) number if orientation is -. In ADVISOR II, degree is limited to have values +, - or none. A finer account of the degree of determiners is probably needed, but it creates many problems which, for lack of space, cannot be discussed here. Conclusion This paper has presented a method to generate judgment determiners (JDs). It focuses on the use of JDs as one way (among many others) to express the speaker's argumentative intent. The paper provides a semantic characterization of JDs through the use of three tests (non-extensionality, monotonicity and non-satisfaction of the intersection condition) and derives constraints from this characterization on the form of input a generator requires to be capable of producing JDs. The paper describes the part of a lexical chooser that takes as input a conceptual description of a set with pragmatic annotations such as argumentative evaluations and produces as output a set of syntactic features which control the behavior of the SURGE surface realization component. The component of SURGE responsible for the complex syntax of the English determiner sequence is discussed, and a technique to map the conceptual input to the relevant set of features is presented. Acknowledgments. I am indebted to Kathleen McKeown, Jacques Robin and Rebecca Passonneau for their precious help both during the research and the writing of this paper. References J. C. Anscombre and O. Ducrot. L'argumentation dans la langue. Pierre Mardaga, Bruxelles, 1983. J. Barwise and R. Cooper. Generalized quantifiers in English. Linguistics and Philosophy, 4:159-219, 1981. R. Dale. Generating referring expressions in a domain of objects and processes. PhD thesis, University of Edinburgh, Scotland, 1988. M. Elhadad.
Types in functional unification grammars. In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, Detroit, MI, 1990. ACL. M. Elhadad. Generating adjectives to express the speaker's argumentative intent. In Proceedings of the 9th Annual Conference on Artificial Intelligence. AAAI, 1991. M. Elhadad. Generating argumentative paragraphs. In Proceedings of COLING'92, Nantes, France, July 1992. M. Elhadad. Using argumentation to control lexical choice: a unification-based implementation. PhD thesis, Computer Science Department, Columbia University, 1992. M. Elhadad. Generating complex noun phrases. Technical Report FC-93-05, Dept of Mathematics and Computer Science, Ben Gurion University of the Negev, Israel. M. Elhadad and K. R. McKeown. Generating connectives. In Proceedings of COLING'90 (Volume 3), pages 97-101, Helsinki, Finland, 1990. M. Halliday. An introduction to functional grammar. Edward Arnold, London, 1985. M. Kay. Functional grammar. In Proceedings of the 5th Annual Meeting of the Berkeley Linguistic Society, 1979. E. Keenan and Y. Stavi. A semantic characterization of natural language determiners. Linguistics and Philosophy, 9:253-326, 1986. R. Quirk, S. Greenbaum, G. Leech, and J. Svartvik. A grammar of contemporary English. Longman, 1972.
Bidirectional Chart Generation of Natural Language Texts Masahiko Haruno* Yasuharu Den† Yuji Matsumoto‡ Makoto Nagao Department of Electrical Engineering, Kyoto University †ATR Interpreting Telecommunication Research Laboratories ‡Advanced Institute of Science and Technology, Nara e-mail: haruno@kuee.kyoto-u.ac.jp Abstract This paper presents the Bidirectional Chart Generation (BCG) algorithm as a uniform control mechanism for sentence generation and text planning. It is an extension of the Semantic Head Driven Generation algorithm [Shieber et al., 1989] in that recomputation of partial structures and backtracking are avoided by using a chart table. These properties enable it to handle a large-scale grammar including text planning and to implement the algorithm in parallel programming languages. Other merits of the algorithm are to deal with multiple contexts and to keep every partial structure in the chart. It becomes easier for the generator to find a recovery strategy when the user cannot understand the generated text. Introduction As opposed to traditional naive top-down or bottom-up mechanisms [Wedekind, 1988][van Noord, 1989], the Semantic-Head-Driven (SHD) algorithm [Shieber et al., 1989] combines both top-down and bottom-up derivations effectively. However, a straightforward implementation of the algorithm causes intensive backtracking when the scale of the grammar is large. The Bidirectional Chart Generation (BCG) algorithm avoids the inefficiency of backtracking by using a chart table. Like the Chart Parsing algorithm [Kay, 1980], the BCG algorithm can be implemented as a no-backtracking program in both parallel and sequential programming languages. The algorithm is used in our explanation system not only for surface sentence generation but also for RST [Mann and Thompson, 1987] based text planning.
As pointed out in [Moore and Paris, 1989], a generation facility must be able to determine what portion of text failed to achieve its purpose when a follow-up question (user's feedback) arises. The BCG algorithm deals with multiple contexts just like an ATMS [de Kleer, 1986] and keeps every partial structure in a chart. It is thus easier for the generator to infer why an explanation fails and to find a recovery strategy. (*Current affiliation is NTT (Nippon Telegraph and Telephone) Corporation.) After reviewing the SHD algorithm, we present the BCG algorithm, comparing it with the Bottom-up Chart Parsing algorithm. Then, we show an implementation of the algorithm in a parallel logic programming language, GHC [Ueda, 1986].¹ Finally, we discuss the application of the BCG algorithm to answering users' follow-up questions in RST-based text planning. Semantic-Head-Driven Algorithm (1) s/Sem --> pp/ga(Sbj),pp/wo(Obj),#v(Sbj,Obj)/Sem. (2) s/Sem --> pp/wo(Obj),pp/ga(Sbj),#v(Sbj,Obj)/Sem. (3) pp/Sem --> np/NP,#p(NP)/Sem. (4) v(Sbj,Obj)/call(Sbj,Obj) --> [呼ぶ]. (5) np/t --> [太郎]. (6) np/h --> [花子]. (7) p(NP)/ga(NP) --> [が]. (8) p(NP)/wo(NP) --> [を]. Figure 1: Sample Grammar We give a brief outline of the SHD algorithm based on the sample Japanese grammar shown in Figure 1. A nonterminal symbol is written in the form of category/semantics. The semantic head (marked by # in the grammar rules) has an important role in the algorithm. semantic-head: When the semantics of a right-hand-side element in a rule is identical to that of the left-hand side, then the right-hand-side element is called the semantic head of the rule. Grammar rules are divided into two types: chain rules, which have a semantic head, and non-chain rules,² which do not. (¹It is straightforward to transform it into a concurrent program in Prolog. ²We consider only lexical rules as non-chain rules for a while.) In the sample grammar, (1) through (3) are
350 Haruno
Copyright © 1993, AAAI (www.aaai.org). All rights reserved. chain rules and (4) through (8) are non-chain rules. The algorithm proceeds bidirectionally, applying chain rules bottom-up and non-chain rules top-down. Those operations3 are defined as follows: Top-down operation A syntactic tree is traversed top-down using non-chain rules. A node that is about to expand is called the goal. Select a rule whose left-hand-side semantics is unifiable with that of the goal and make a node (called pivot) corre- I spending to the category of the left-hand-side. Then apply bottom-up operation from the pivot. Bottom-up operation A syntactic tree is traversed bottom-up using chain rules. Select a rule whose semantic head is unifiable with the pivot, and then make other categories of the right-hand side as new goals. When all these goals are-constituted applying operations recursively, the parent node at the left- hand side is introduced. If the parent node is not unifiable with the goa.1, then a.pply the bottlom-up operation, regarding the parent node as a new pivot. ppka(t) n (6) (8) Figure 2: Generation Process We show a sample generation process starting from semantic representation call( t ,h) (Figure 2). First, a pivot v(t,h)/ca.ll(t,l ) ._ ’ t d 1 is m ro uceh by applying top- down operation with rule(4). Two bottom-up opera- tions using rules (1) and (2) are applica.ble to the pivot. Assume that the rule (1) is selected. The new goa.ls PP/@W and PP/ (1) wo I are introduced from the right- hand side of the rule. Top-down operation introduces new pivots p(t)/ga(t) a.nd p(h)/wo(h) wit#h rules and (S). Going on the same process, a sentence [A B 7) [5, 7F, lE?-, 73, @ar] * g is enerated as shown in Figure 2. Another sentence [ET, 2, A@, @, @&I is generated as well a.pplying rule (2) by backtra.cking. This kind of backtracking causes serious inefficiency when the scale of grammar is large. 
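The chain/non-chain distinction can be illustrated with a small Python sketch (the paper's own code is in GHC; the rule encoding below is an invented stand-in for the category/semantics notation, with semantics reduced to strings):

```python
# Toy illustration of the chain/non-chain distinction: a rule is a chain
# rule iff some right-hand-side element shares the left-hand side's
# semantics, i.e. the rule has a semantic head (marked # in Figure 1).
RULES = {
    # rule number: (lhs_semantics, [(category, semantics), ...])
    1: ("Sem", [("pp", "ga(Sbj)"), ("pp", "wo(Obj)"), ("v", "Sem")]),
    3: ("Sem", [("np", "NP"), ("p", "Sem")]),
    4: ("call(Sbj,Obj)", []),          # lexical rule: no nonterminals
}

def semantic_head(rule):
    """Return the category of the semantic head, or None for non-chain rules."""
    lhs_sem, rhs = rule
    for cat, sem in rhs:
        if sem == lhs_sem:
            return cat
    return None

assert semantic_head(RULES[1]) == "v"     # chain rule
assert semantic_head(RULES[4]) is None    # non-chain (lexical) rule
```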
As discussed above, the SHD algorithm consists of two parts, the top-down operation and the bottom-up operation³. Because the bottom-up operation resembles the basic operation of the left-corner parsing algorithm, the SHD algorithm can be realized in the same way as the Bottom-up Chart Parsing algorithm by exploiting the similarity between left-corner categories and semantic heads. In the next section, we present the BCG algorithm, which avoids the inefficiency caused by backtracking.

³The top-down operation is augmented afterwards in order to handle general non-chain rules.

BCG Algorithm

Basic Algorithm

The Bottom-up Chart Parsing algorithm [Kay, 1980] consists of the following three procedures.

Procedure-1: Let w_i be the i-th word. For all rules of the form b --> [w_i], create a new inactive edge between v and w whose term is b, provided that v and w are the (i-1)-th and i-th vertices.

Procedure-2: Let e_i be an inactive edge of category a incident from vertex v to vertex w. For all rules of the form b --> c_1, c_2, ..., c_n in the grammar such that c_1 = a, introduce a new active edge e_a with the term [a [?]c_2 ... [?]c_n]b, incident from v to w, provided that there is no such edge in the chart already.

Procedure-3: Let e_a and e_i be adjacent active and inactive edges. e_a is incident from vertex u and e_i is incident to vertex w. Let [?]a be the first open box in e_a. If e_i is of category a, create a new edge between u and w whose term is that of e_a with the first open box replaced by the term of e_i.

Procedure-1 looks up lexical rules at the first stage of the algorithm. Procedure-2 predicts phrase structures by making use of the left-corner category. Procedure-3 fills a prediction. The SHD algorithm discussed in the previous section, on the other hand, makes use of the semantic head in order to predict new goals, and the prediction is filled by recursive top-down operations. The BCG algorithm is realized from the Bottom-up Chart Parsing algorithm by identifying a semantic head with a left-corner category.
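The three chart-parsing procedures can be sketched compactly in Python under the usual simplifications (atomic CFG symbols, no feature terms or semantics); the grammar and sentence below are invented toy data:

```python
# Sketch of the three procedures. An edge is (start, end, lhs, found,
# needed); it is inactive when `needed` is empty. Duplicate edges are
# suppressed by membership in the chart, which is what removes backtracking.
GRAMMAR = [("S", ("NP", "VP")), ("VP", ("V", "NP"))]
LEXICON = {"birds": "NP", "eat": "V", "seeds": "NP"}

def parse(words):
    chart = set()
    # Procedure-1: one inactive edge per word from its lexical rule.
    agenda = [(i, i + 1, LEXICON[w], (), ()) for i, w in enumerate(words)]
    while agenda:
        edge = agenda.pop()
        if edge in chart:
            continue
        chart.add(edge)
        i, j, lhs, found, needed = edge
        if not needed:                                   # inactive edge
            # Procedure-2: predict from rules whose left corner matches.
            for parent, rhs in GRAMMAR:
                if rhs[0] == lhs:
                    agenda.append((i, j, parent, (lhs,), rhs[1:]))
            # Procedure-3: extend adjacent active edges waiting for `lhs`.
            for (a, b, l2, f2, n2) in list(chart):
                if n2 and b == i and n2[0] == lhs:
                    agenda.append((a, j, l2, f2 + (lhs,), n2[1:]))
        else:                                            # active edge
            # Procedure-3, other direction: find adjacent inactive edges.
            for (a, b, l2, f2, n2) in list(chart):
                if not n2 and a == j and l2 == needed[0]:
                    agenda.append((i, b, lhs, found + (l2,), needed[1:]))
    return chart

chart = parse(["birds", "eat", "seeds"])
assert (0, 3, "S", ("NP", "VP"), ()) in chart   # a complete parse edge
```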
But important differences between generation and parsing remain to be considered:

1. In parsing, all initial inactive edges are introduced at the outset by Procedure-1. This process corresponds to introducing pivots from the semantic representation in the case of generation. This means that inactive edges must be built dynamically.

2. If Procedure-2 predicts two distinct goal sequences from one pivot by using two different rules, the pivot may have two distinct adjacents, because different goals may introduce different pivots.

The first point demands a dynamic process of introducing pivots. Once a goal is produced, its semantic representation is used to introduce a new pivot. The second point means that adjacent edges in BCG cannot be placed in a linear sequence. We introduce forward links to indicate the adjacency relation of edges; that is, when Procedure-1 introduces an inactive edge e_i according to an active edge e_a, it puts a pointer from the tail of e_a to the head of e_i. Two edges are adjacent in generation if there exists a forward link from one to the other. In addition, we must take account of the case where the required pivot has already been introduced before. In such a case, we reuse the previously produced pivot by simply adding a new forward link going to it. Therefore, more than one forward link may be put to a single edge.

The BCG algorithm then becomes as follows. Procedure-1 realizes the dynamic introduction of pivots. Procedure-2 and Procedure-3 are straightforward augmentations of the bottom-up chart parsing procedures except for the use of forward links.

Procedure-1: Let e_a be an active edge of category [a_1 ... [?]c_j ... [?]c_n]b incident from vertex u to v. Let [?]c_j be the first open box in e_a and Sem_j be its semantics. For all rules of the form b/Sem --> [word] such that Sem and Sem_j are unifiable, create a new inactive edge between vertex w and vertex w' whose term is b/Sem, and put a forward link from vertex v to vertex w. If the same inactive edge already exists from vertex x to vertex y, put a forward link from vertex v to vertex x instead.

Procedure-2: Let e_i be an inactive edge of category a incident from vertex v to vertex w. For all rules of the form b --> c_1, ..., #c_h, ..., c_n in the grammar such that Sem_h and Sem are unifiable, introduce a new active edge e_a with the term [[?]c_1 ... a ... [?]c_n]b, incident from v to w, provided that there is no such edge in the chart already. Sem and Sem_h are the semantics of a and c_h.

Procedure-3: Let e_a be an active edge with the term [a_1 ... [?]c_j ... [?]c_n]b incident from vertex u to vertex v, and e_i an inactive edge with the term a incident from vertex w to vertex x. Let [?]c_j be the first open box in e_a. If a forward link exists from vertex v to vertex w such that c_j = a, create a new edge between u and x whose term is [a_1 ... a [?]c_{j+1} ... [?]c_n]b.

An example starting from the semantic representation call(t,h) is explained in the rest of this section. The chart constructed in the process is shown in Figure 3 and in Table 1. The first inactive edge, v(t,h)/call(t,h), is introduced from rule (4) by Procedure-1, and the process proceeds as shown in Figure 3. Inactive edge 4, p(t)/ga(t), is produced from the goal pp/ga(t) of active edge 2 and rule (7) by Procedure-1. Then the forward link A is put from the tail of active edge 2 to the head of inactive edge 4. Inactive edge 5, p(h)/wo(h), is produced in the same way from active edge 3. After inactive edges 10, pp/ga(t), and 11, pp/wo(h), are generated, which have the same heads as inactive edges 4 and 5, active edge 12, [pp/ga(t) [?]pp/wo(h) v(t,h)/call(t,h)]s/call(t,h), is introduced from active edge 2 and inactive edge 10 by Procedure-3. Although an inactive edge p(h)/wo(h) is introduced from the goal pp/wo(h) of active edge 12 and rule (8) by Procedure-1, it is the same as inactive edge 5. The forward link E is therefore put from the tail of active edge 12 to the head of inactive edge 5 instead of generating a new inactive edge. At the end of the process, inactive edges 14 and 15 are produced, each of which corresponds to a sentence⁴. They are generated with no backtracking.

[Figure 3: Graph Representation of the Chart. Legend: inactive edge, active edge, forward link.]

⁴Note that the order of edges in the chart does not reflect the surface word order; that is represented explicitly by difference lists, as discussed in the next section.

Edge  Term                                                    Procedure  Rule
1     v(t,h)/call(t,h)                                        1          (4)
2     [[?]pp/ga(t),[?]pp/wo(h),v(t,h)/call(t,h)]s/call(t,h)   2          (1)
3     [[?]pp/wo(h),[?]pp/ga(t),v(t,h)/call(t,h)]s/call(t,h)   2          (2)
4     p(t)/ga(t)                                              1          (7)
5     p(h)/wo(h)                                              1          (8)
6     [[?]np/t,p(t)/ga(t)]pp/ga(t)                            2          (3)
7     [[?]np/h,p(h)/wo(h)]pp/wo(h)                            2          (3)
8     np/t                                                    1          (5)
9     np/h                                                    1          (6)
10    pp/ga(t)                                                3
11    pp/wo(h)                                                3
12    [pp/ga(t),[?]pp/wo(h),v(t,h)/call(t,h)]s/call(t,h)      3
13    [pp/wo(h),[?]pp/ga(t),v(t,h)/call(t,h)]s/call(t,h)      3
14    s/call(t,h)                                             3
15    s/call(t,h)                                             3

Table 1: Table Representation of the Chart

General Non-Chain Rule

We show in this section how general non-chain rules are handled in the BCG algorithm; so far we have considered only lexical rules as non-chain rules. General non-chain rules are necessary for handling a large-scale grammar, particularly for text planning. Consider the following non-chain rule, which describes a Japanese relative clause:

np/ind(X,[R|Rstr]) --> s-rel(X)/R, np/ind(X,Rstr).

First, we extend the top-down operation defined before:

Top-down operation: A syntactic tree is traversed top-down using non-chain rules. A node that is about to expand is called the goal. Select a non-chain rule whose left-hand-side semantic representation is unifiable with that of the goal and make a node, called the pivot, corresponding to the category of the left-hand side. In addition, make the categories of the right-hand side new goals and apply the top-down operation to them recursively. If the pivot is not unifiable with the goal, then apply the bottom-up operation from the pivot.

The bold-face part is supplementary to the original top-down operation. It expands the categories on the right-hand side after unifying the goal with the left-hand side. Note that this part is almost the same as top-down derivation of a syntactic tree. The procedure for general non-chain rules is formalized in the same way as the Top-down Chart Parsing algorithm [Kay, 1980]. The definition of the operation is the following Procedure-1'.

Procedure-1': Let e_a be an active edge with the term [a_1 ... [?]c_j ... c_n]d incident from vertex u to v. Let [?]c_j be the first open box in e_a and Sem_j be its semantic representation. For every rule of the form b/Sem --> c_1, ..., c_n such that Sem_j and Sem are unifiable, create a new active edge with the term [[?]c_1 ... [?]c_n]b looping at vertex w, and put a forward link from v to w. If the same edge already exists at vertex y, simply put a forward link from v to y instead.

Implementation

The previous sections have shown that the BCG algorithm is formalized in a way similar to the Chart Parsing algorithm. The PAX parsing system [Matsumoto, 1986] is an implementation of the Bottom-up Chart Parsing algorithm in the parallel logic programming language GHC⁵.
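The reuse that the chart provides can be illustrated with a small Python sketch (not the paper's GHC program): a memo table plays the chart's role, so a shared subgoal such as pp/ga(t) is expanded exactly once even though rules (1) and (2) both request it. Semantics are flattened into ground category names, and the romanized lexicon is an invented stand-in:

```python
from functools import lru_cache

# Memoized head-driven realization: both word orders of the running
# example call(t,h) are produced, with each distinct subgoal expanded once.
RULES = {
    "s/call(t,h)": [["pp/ga(t)", "pp/wo(h)", "v/call(t,h)"],   # rule (1)
                    ["pp/wo(h)", "pp/ga(t)", "v/call(t,h)"]],  # rule (2)
    "pp/ga(t)":    [["np/t", "p/ga"]],                          # rule (3)
    "pp/wo(h)":    [["np/h", "p/wo"]],                          # rule (3)
}
LEXICON = {"v/call(t,h)": "yobu", "np/t": "taro", "np/h": "hanako",
           "p/ga": "ga", "p/wo": "wo"}
expansions = {"count": 0}          # counts real expansions, to show reuse

@lru_cache(maxsize=None)
def realize(goal):
    """All word sequences realizing `goal`; memoization = chart reuse."""
    expansions["count"] += 1
    if goal in LEXICON:
        return ((LEXICON[goal],),)
    out = []
    for rhs in RULES[goal]:
        seqs = [()]
        for daughter in rhs:
            seqs = [s + w for s in seqs for w in realize(daughter)]
        out.extend(seqs)
    return tuple(out)

sentences = [" ".join(s) for s in realize("s/call(t,h)")]
assert sentences == ["taro ga hanako wo yobu", "hanako wo taro ga yobu"]
assert expansions["count"] == 8    # each distinct subgoal expanded once
```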
We show in this section a GHC implementation of the BCG algorithm, built in a way similar to the PAX system. The implemented system consists of the following two parts:

1. The program translated from the grammar rules.
2. The meta-process that introduces inactive edges dynamically. It absorbs the difference between parsing and generation.

⁵A GHC clause can be understood just like a Prolog clause if the commit operator '|' is replaced by '!'.

Basic Transformation of Grammar Rules

In our implementation, each terminal and non-terminal symbol is realized as a parallel process; the processes communicate with each other to build up larger structures. The communication channel is called a stream. Let us take the following grammar rule.

s/Sem --> pp/ga(Sbj), pp/wo(Obj), #v(Sbj,Obj)/Sem.

The three non-terminal symbols on the right-hand side are realized as parallel processes, and each of them receives a stream from the left and passes an output stream to the right. For the transformation, the following modifications are made to the grammar rule: identifiers standing for intermediate positions in the rule are inserted, and the semantic head of the rule is moved to the front of the right-hand side so that it plays the role of the left corner in parsing. Moreover, in order to keep the surface order information, difference lists representing words are added to each symbol. The example rule results in the following form:

s(S0-S3)/Sem --> v(Sbj,Obj,S2-S3)/Sem, id1, pp(S0-S1)/ga(Sbj), id2, pp(S1-S2)/wo(Obj).

By translating this rule into the following GHC clauses, we can achieve SHD generation in parallel. The behavior of the grammar rule is depicted in Figure 4.

[Figure 4: Behavior of Processes]

First, v(Sbj,Obj,S2-S3) is translated into the program below.

v(In,Sbj,Obj,S2-S3,Out) :- true |
    Out = [id1(ga(Sbj),In,Sbj,Obj,S2-S3)].

When v(Sbj,Obj,S2-S3) is produced, the tree traversal proceeds to the position of id1. This corresponds to Procedure-2 of the BCG algorithm, which selects a rule with a semantic head whose semantic representation is unifiable with that of the inactive edge (pivot), and introduces a new active edge (goal). The process v(In,Sbj,Obj,S2-S3,Out) generates id1, which corresponds to the active edge. In general, processes play the role of inactive edges and the data in the streams stand for active edges. The first open box of the new active edge is pp(S0-S1), whose semantics ga(Sbj) is passed along with id1 and used afterwards in the meta-process.

Secondly, pp(S0-S1) is translated as below.

pp([id1(_,In,Sbj,Obj,S2-S3)|Tail],S0-S1,Out) :- true |
    Out = [id2(wo(Obj),In,Obj,S0-S1,S2-S3)|Out1],
    pp(Tail,S0-S1,Out1).

Because pp(S0-S1) is to the right of id1, the tree traversal proceeds to the position of id2 when pp(S0-S1) receives id1. This corresponds to Procedure-3 of the BCG algorithm, which derives a new active edge from an active edge and an inactive edge. The first open box of the new active edge is pp(S1-S2), whose semantics wo(Obj) is inserted as the first argument of id2. In the same way, pp(S1-S2) is translated as below.

pp([id2(_,In,Obj,S0-S1,S2-S3)|Tail],S1-S2,Out) :- true |
    s(In,S0-S3,Out1),
    pp(Tail,S1-S2,Out2),
    merge(Out1,Out2,Out).

When pp(S1-S2)/wo(Obj) is generated, the parent node s(S0-S3)/Sem is generated. The final definition of the process pp is the collection of all such clauses, each of which corresponds to an occurrence of pp in the right-hand side of a grammar rule. The following clauses are necessary to handle exceptional situations:

pp([],_,Out) :- true | Out = [].
pp([_|Tail],String,Out) :- otherwise | pp(Tail,String,Out).

Finally, let us take the following non-chain (lexical) rule.

v(Sbj,Obj)/call(Sbj,Obj) --> [呼ぶ].

This rule is translated into the program below, which generates a process corresponding to v(Sbj,Obj) from the semantic representation call(Sbj,Obj).

pivot(call(Sbj,Obj),In,Out) :- true |
    v(In,Sbj,Obj,[呼ぶ|S0]-S0,Out).

The pivot process is generated dynamically by the meta-process, corresponding to Procedure-1 of the BCG algorithm.

Meta-Process

The meta-process monitors the data in all streams and controls the whole generation process. It checks the semantic representation in each identifier (semantics(Id,Sem)), generates or reuses an inactive edge according to that semantic representation, and then passes the identifier to the inactive edge. This is attained by calling the pivot process described earlier. Here, streams play the role of the forward links of the BCG algorithm. Forward links are introduced dynamically, and a stream is realized by an open list so as to receive identifiers incrementally. The meta-process maintains a table that consists of pairs wait(Sem,Str), where Sem is the semantic representation of an inactive edge produced so far and Str is the tail of its input stream. When the meta-process derives Sem_j from an identifier and is about to produce a pivot process, it checks whether Sem_j is already registered in the table. The meta-process generates a new pivot process only if the pair wait(Sem_j,Str) is not registered. The following program realizes the task.

meta_proc([],_) :- true | true.
meta_proc([Id|Tail],Table) :-
    semantics(Id,Sem),
    get(wait(Sem,StrTail),Table,Table1) |
    StrTail = [Id|NewStrTail],
    put(wait(Sem,NewStrTail),Table1,NewTable),
    meta_proc(Tail,NewTable).
meta_proc([Id|Tail],Table) :- otherwise |
    semantics(Id,Sem),
    pivot(Sem,[Id|StrTail],Out),
    put(wait(Sem,StrTail),Table,NewTable),
    merge(Out,Tail,Next),
    meta_proc(Next,NewTable).

The second clause of meta_proc corresponds to the case of reusing an existing process, and the third to the case of generating a new process. In the second clause, get(wait(Sem,StrTail),Table,Table1) looks up whether wait(Sem,StrTail) was previously registered in the table. When the table includes the element, meta_proc reuses it by instantiating the top of the open list with StrTail. Otherwise, meta_proc generates a new process by calling pivot(Sem,[Id|StrTail],Out) in the third clause, and registers the process in the table by put(wait(Sem,StrTail),Table,NewTable).

The pivot process that introduces new processes is derived by transforming lexical (non-chain) rules as described above. The transformation of general non-chain rules is described in the next section.

Transformation of Non-Chain Rules

Let us consider the following rule.

np/ind(X,[R|Rstr]) --> s-rel(X)/R, np/ind(X,Rstr).

General non-chain rules are treated by Procedure-1', whose central part is the same as Procedure-1. The only difference is that Procedure-1' introduces a new active edge from the semantic representation of a predicted goal. The process is also realized by the pivot process, as below:

pivot(ind(X,[R|Rstr]),In,Out) :- true |
    Out = [id3(R,In,X,R,Rstr)].

The identifier id3 is inserted just before the leftmost category, s-rel(X), for top-down traversal of a syntactic tree. The pivot process corresponding to the semantic representation ind(X,[R|Rstr]) generates this identifier. This kind of identifier corresponds to the active edge of the Top-down Chart Parsing algorithm. When all the categories on the right-hand side have been constituted, a new process corresponding to np on the left-hand side is produced.

Applying BCG Algorithm to RST-Based Text Planning

This section examines the applicability of the BCG algorithm to text planning. The depth-first search strategy has mainly been used in text planning, in which it is difficult for a generator to select the relevant operator at every choice point. The BCG algorithm, on the other hand, deals with more than one candidate in parallel until enough information is obtained. Moreover, in explanation dialogue systems, users often ask follow-up questions when they cannot fully understand the explanation.
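The bookkeeping performed by the meta-process's wait(Sem,Str) table can be sketched in Python (names and data invented): the table maps a semantics term to the pivot already built for it, so a repeated request adds a "forward link" (modeled here as a subscriber) instead of spawning a second pivot.

```python
# Minimal sketch of pivot reuse via a semantics-keyed table.
class MetaProcess:
    def __init__(self):
        self.table = {}            # sem -> (pivot name, forward-link list)
        self.pivots_built = 0

    def request_pivot(self, sem, requester):
        if sem not in self.table:              # new semantics: build a pivot
            self.pivots_built += 1
            self.table[sem] = (f"pivot<{sem}>", [])
        pivot, links = self.table[sem]
        links.append(requester)                # the new forward link
        return pivot

mp = MetaProcess()
mp.request_pivot("ga(t)", "goal of rule (1)")
mp.request_pivot("wo(h)", "goal of rule (1)")
mp.request_pivot("ga(t)", "goal of rule (2)")  # same semantics: reused
assert mp.pivots_built == 2
assert len(mp.table["ga(t)"][1]) == 2          # two forward links, one pivot
```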
The generator must infer why its explanation has failed to achieve the communicative goal: an error in the user model, ambiguity of meaning, and so on. In the BCG algorithm, it is easier for a generator to find a recovery strategy because all partial structures are preserved.

Plan Language

Our plan language is based on Rhetorical Structure Theory (RST) [Mann and Thompson, 1987]. Explanation dialogue requires the plan language to express both the intentional and the rhetorical structure of a text once produced, in order to answer follow-up questions. We adopt a representation of RST similar to Moore's operators [Moore and Paris, 1989], one of which is shown below.

EFFECT:      (BMB S H ?x)
CONSTRAINTS: nil
NUCLEUS:     (INFORM S H ?x)
SATELLITES:  (PERSUADE S H ?x)

In order to apply the BCG algorithm to text planning, such operators are represented as DCG rules, where the CONSTRAINTS are inserted as extra conditions. Rule (1) corresponds to the above example.

(1) bmb/bmb(Speaker,Hearer,X) -->
        inf/inform(Speaker,Hearer,X),
        psd/persuade(Speaker,Hearer,X).
(2) bmb/bmb(Speaker,Hearer,X) -->
        explain/explain(Speaker,Hearer,X),
        inf/inform(Speaker,Hearer,X).

Let the speaker's goal be bmb(Speaker,Hearer,X); then the alternative rules (1) and (2) are both applicable to this situation. A naive top-down planner recomputes inf/inform(Speaker,Hearer,X) due to backtracking. The BCG algorithm, on the other hand, proceeds in parallel, reusing the structures already constructed. Because most rules are applied top-down in text planning, the behavior of the BCG algorithm in this case is almost identical to that of the Top-down Chart algorithm [Kay, 1980].

Answering Follow-up Questions

The BCG algorithm can select the best recovery strategy by comparing multiple contexts when receiving follow-up questions. Suppose the user model contains concepts that the user does not actually know; the generator must then revise the user model and select the proper strategy for explaining the concept.
The generator employs partial information in the chart, particularly incomplete active edges, which stand for suspended plans. Let us consider the following plan operators and general knowledge.

% plan operators
goal/goal(Hearer,do(Hearer,Act)) -->
    recommend/recommend(Speaker,Hearer,Act),
    psd/persuaded(Hearer,goal(Hearer,do(Hearer,Act))).

psd/persuaded(Hearer,goal(Hearer,do(Hearer,Act))) -->
    {step(Act,Goal)},
    motivation/motivation(Act,Goal).

psd/persuaded(Hearer,goal(Hearer,do(Hearer,Act))) -->
    {step(Act,Goal), bel(Hearer,benefit(Act,Hearer))},
    inf/inform(benefit(Act,Hearer)).

motivation/motivation(Act,Goal) -->
    {step(Act,Goal)},
    bel/bel(Hearer,step(Act,Goal)).

bel/bel(Hearer,step(Act,Goal)) -->
    {know(Hearer,Goal)},
    inf/inform(Speaker,Hearer,step(Act,Goal)).

bel/bel(Hearer,step(Act,Goal)) -->
    inf/inform(Speaker,Hearer,step(Act,Goal)),
    {\+ know(Hearer,Goal)},
    elaboration/elaboration(Goal).

% general knowledge
step(insert,optimization).
know(user,optimization).

The domain of the dialogue is Prolog programming. The system's goal is goal(user,do(user,insert)): to recommend that the user insert a '!' before a recursive call. Here, insert means inserting '!' before the recursive call, and optimization means tail-recursion optimization. The system generates the following text first (Figure 5).
Now, there are two suspended active edges (1) and (2): (J)[C?linf/inform(benefit(insert,user))l psd/persuaded(user,goal(user,do(user,insert))) (2)Cinf/info ( rm system,user,step(insert,optimization)) C?lelaboration/elaboration(optimization)l bel/bel(user,step(insert,optimization) Each of the edges is suspended because of the con- tradiction to the user model; bel(user,benefit(insert,user)) a.nd \+ know (user, insert). The genera.tor selects the ac- tive edges that require few hypotheses to expand, and gives the relaxation to the user model. In this case, the generator assumes that \+ know(user, insert) holds and produces the following a.dditional explana.tion as shown in Figure 5. molivalion b. Jm =C inf i inf i elaboration i L ‘“‘“yc”“.” ,..... .’ l . Sm...; Choice Poinr Figure 5: The Explanation Tree system: *B@$&jLC&J ‘JYf/I'YK/f';, 3 I- 7 7 3S&~:jBffi Lb;: I: ?3% L, 5%s%k%m~$Q~~&Tt, (Tail recursive optimization saves memory by showing compilers that no backtracking is necessary.) The generator can reproduce the explanation by relax- ing the user model according to actual state of user’s knowledge. This is the simi1a.r situation to rela.xation based parsing of ill-formed inputs in which chart-based method is powerful[Mellish, 19891 because it maintains all partial structures. Concluding Remarks We have presented BCG algorithm as a basic con- trol mechanism of generation system. In contrast to Shieber’s SHD algorithm, BCG algorithm dea.ls with multiple contexts at a time. This property resolves two problems: First, the efficiency is remarkably improved in the case of a large scale grammar. Secondly, the comparison between multiple contexts becomes possi- ble. Hence, revision like answering follow-up questions is performed easier by referring to the contexts in the chart. To sum up, we obtain the efficiency and robust- ness by adopting the BCG algorithm. 
We are now studying the patterns of follow-up questions and investigating recovery heuristics based on the chart.

References

Johan de Kleer. An assumption-based TMS. Artificial Intelligence, 28:127-162, 1986.

Martin Kay. Algorithm schemata and data structures in syntactic processing. Technical Report CSL-80-12, Xerox PARC, 1980.

W. C. Mann and S. A. Thompson. Rhetorical structure theory: Description and construction of text structures. In Natural Language Generation, chapter 7, pages 85-96. Martinus Nijhoff Publishers, 1987.

Yuji Matsumoto. A parallel parsing system for natural language analysis. In Proc. 3rd ICLP, Lecture Notes in Computer Science 225, pages 396-409. Springer-Verlag, 1986.

Chris Mellish. Some chart-based techniques for parsing ill-formed input. In Proc. 27th ACL, pages 102-109, 1989.

Johanna Moore and Cecile Paris. Planning text for advisory dialogues. In Proc. 27th ACL, pages 203-211, 1989.

S. M. Shieber, G. van Noord, R. C. Moore, and F. C. N. Pereira. A semantic-head-driven generation algorithm for unification-based formalisms. In Proc. 27th ACL, pages 7-17, 1989.

K. Ueda. Guarded Horn Clauses. In E. Wada, editor, Logic Programming '85, Lecture Notes in Computer Science 221, pages 168-179. Springer-Verlag, 1986.

G. van Noord. BUG: A directed bottom-up generator for unification-based formalisms. Working Paper 4, Katholieke Universiteit Leuven / Stichting Taaltechnologie Utrecht, the Netherlands, 1989.

J. Wedekind. Generation as structure driven derivation. In Proc. 11th COLING, pages 732-737, 1988.
Communicative Acts for Generating Natural Language Arguments

Mark T. Maybury
The MITRE Corporation, Artificial Intelligence Center
MS K329, Burlington Road, Bedford, MA 01730
maybury@linus.mitre.org

Abstract

The ability to argue to support a conclusion or to encourage some course of action is fundamental to communication. Guided by examination of naturally occurring arguments, this paper classifies the communicative structure and function of several different kinds of arguments and indicates how these can be formalized as plan-based models of communication. The paper describes the use of these communication plans in the context of a prototype which cooperatively interacts with a user to allocate scarce resources. This plan-based approach to argument helps improve the cohesion and coherence of the resulting communication.

Knowledge-based systems are often called upon to support their results or conclusions, or to justify courses of action which they recommend. These systems, therefore, are often placed in the position of attempting to influence the beliefs and/or behavior of their users, and yet they often do not have sophisticated capabilities for doing so. The first step in arguing for some proposition or some course of action is to ensure that it is understood by the addressee. A number of techniques have been developed to automatically generate natural language descriptions or expositions, for example, to describe what is meant by some abstract concept (McKeown 1985) or to explain a complex mechanism in a manner tailored to an individual user (Paris 1987). However, once the addressee understands the point, a system needs to be able to argue for or against that point. A number of researchers have investigated computational models of representation and reasoning for argument. For example, Birnbaum (1982) represented propositions as nodes in an argument graph connected by attack and support relations.
He suggested three ways to attack an argument (called argument tactics): attack the main proposition, attack the supporting evidence, and attack the claim that the evidence supports the main point. In contrast, Cohen (1987) investigated the interpretation of deductive arguments and suggested how clue words (e.g., "therefore", "and", "so") could be used to recognize the structure underlying arguments that are presented in "pre-order" (i.e., claim followed by evidence), "post-order" (evidence before claims), and "hybrid-order" format (using both pre-order and post-order). Reichman (1985), on the other hand, characterized natural dialogues using a number of conversational moves (e.g., support, interrupt, challenge), citing "clue words" such as "because", "but anyway", and "no but" as evidence.

In contrast, this paper extends previous research in plan-based models of communication generation (Bruce 1975, Cohen 1978, Appelt 1985, Hovy 1988, Moore & Paris 1989, Maybury 1991ab) by formalizing a suite of communicative acts a system can use to influence user beliefs or actions. The remainder of this paper first outlines several classes of argumentative actions which are differentiated on the basis of their semantics and purpose. Next, several of these actions are formalized as plan operators. The paper then illustrates their use to improve explanations when advising an operator on a course of action during a scheduling task. The paper concludes by identifying limitations and areas for further research.

Arguments as Communicative Acts

There are many conventional patterns of argument, depending upon the goal of the speaker. Aristotle identified several methods of argument including exemplum (illustration), sententia (maxim), and enthymeme (a syllogism with a premise elided because it is assumed to be inferable by the addressee).
An Aristotelian example of sententia is "No man who is sensible ought to have his children taught to be excessively clever." Contemporary rhetoricians (e.g., Brooks & Hubbard 1905, Dixon 1987) similarly enumerate a number of general techniques which can be used to convince or persuade a hearer (e.g., tell the advantages, then the disadvantages). In addition to discussing general argument forms (e.g., deduction and induction), rhetoricians also indicate presentational strategies, such as giving the argument which will attract attention first and the most persuasive one last. While these ideas are suggestive, they are not formalized precisely enough to form the basis for a computational theory.

This paper formalizes argument as a series of communicative acts that are intended to perform some communicative goal, such as convincing an addressee to change or modify a belief, or persuading them to perform some action. For example, when attempting to change someone's beliefs, humans provide evidence, give explanations, or disprove counter-arguments to convince an addressee to believe a particular proposition. Arguments may employ descriptive or expository techniques, for example to define terms (i.e., entities) or to explain propositions.

Argue
  Deduce
    categorical-syllogism
    disjunctive-syllogism
    hypothetical-syllogism
  Induce
    provide-evidence
    illustrate
    indicate-cause
    give-analogy
  Persuade
    indicate-motivation
    indicate-desirable-consequents
    indicate-desirable-enablements
    indicate-purpose-and-plan

Figure 1. Communicative Acts for Argument
A communicative act is a sequence of physical, linguistic or visual acts used to effect the knowledge, beliefs, or desires of an addressee. Linguistic acts include speech acts (Searle 1969) and surface speech acts (Appelt 1985). More abstract rhetorical acts coordinate linguistic and other acts. Examples include identifying an entity, describing it, dividing it (into subparts or subtypes), narrating events and situations, and arguing to support a conclusion. Figure 1 classifies several different kinds of argumentative communicative acts. These actions can be distinguished by their purpose and semantics and are sometimes signalled by surface cues (e.g., “for example”, “because”). Deductive arguments such as categorical syllogism, disjunctive syllogism and hypothetical syllogism are intended to affect the beliefs of the addressee. The classic example of categorical syllogism is “All men are mortal. Socrates is a man. Therefore, Socrates is mortal.” In contrast to arguments which deduce propositions, inductive arguments also attempt to convince the hearer of some claim but by providing evidence and examples, or, in the broadest sense of the term, by showing cause and using analogy. In contrast to these argumentative actions which attempt to affect the beliefs of the addressee, persuasive argument has the intention of affecting the goals or plans of the addressee. For example, inducing action in the addressee can be accomplished by indicating (Figure 1): e the motivation for an act, or its purpose or goal 0 the desirable consequents of performing an act l the undesirable consequents caused by not performing an act e how the act fits into some more general plan 0 how the act enables some important/desirable act or state Finally, there are many equally effective but perhaps less ethical methods of encouraging action such as threat, coercion, or appeal to authority. 
In the context of simulating human behavior, several implementations have investigated some of these kinds of persuasive techniques. For example, characters in Meehan's (1976) TALE-SPIN simulation could persuade one another to perform acts, for example, using threats. Sycara's (1989) PERSUADER program simulated labor negotiations in which three agents (company, union, and mediator) could select from nine persuasive techniques (e.g., appeal to "status quo", appeal to "authority", threat) to affect other agents' plans and goals. While these coercive techniques may be useful in simulations of human behavior, their use by an advisory system is probably not appropriate except in special cases (e.g., persuading someone to take their prescribed medicine). The next section details a plan-based approach to influencing beliefs and encouraging action in human users.

Communicative Acts as Plans

We represent communicative actions for argumentation as operators or schemata in a plan library of a hierarchical planner (Sacerdoti 1977). Each plan operator defines the constraints and preconditions that must hold before a communicative act applies, its intended effects (also known as postconditions), and the refinement or decomposition of the act into subacts. The decomposition may have optional components. Preconditions and constraints encode conditions regarding the underlying knowledge base (e.g., is there evidence to support a given proposition), the current status of a user model (e.g., does the addressee believe some proposition), and the current status of the discourse (e.g., has a particular piece of evidence already been introduced). Constraints, unlike preconditions, cannot be achieved or planned for if they are false. For example, the uninstantiated argue-for-a-proposition plan operator shown in Figure 2 is one of several methods of performing the communicative action argue.
Plan operators are encoded in an extension of first order predicate calculus with variables italicized (e.g., S, H, and proposition). As defined in the HEADER of the plan operator, the argue action takes three arguments: the speaker (S), the hearer (H), and a proposition. Thus, provided the third argument is indeed a proposition (CONSTRAINTS) and the speaker understands it and wants the hearer to believe it (PRECONDITIONS), the speaker (S) will first claim the proposition, optionally explain it to the hearer (H) if they don't already understand it (as indicated by the user model, examined by the constraints on the explain act), and finally attempt to convince them of its validity (DECOMPOSITION). The intended effect of this action is that the hearer will believe it (EFFECTS).

HEADER: Argue(S, H, proposition)
CONSTRAINTS: Proposition?(proposition)
PRECONDITIONS: ...
DECOMPOSITION: ...
Figure 2. Uninstantiated Argument Plan Operator

Intensional operators, such as WANT, KNOW, and BELIEVE, appear in capitals. KNOW details an agent's specific knowledge of the truth-values of propositions (e.g., KNOW(H, Red(ROBIN-1)) or KNOW(H, ¬Yellow(ROBIN-1))), where truth or falsity is defined by the propositions in the knowledge base. That is, KNOW(H, P) implies P ∧ BELIEVE(H, P). Agents can hold an invalid belief (e.g., BELIEVE(JANE, Yellow(ROBIN-1))). An agent can KNOW-ABOUT an object or event (e.g., KNOW-ABOUT(H, DOG-1) or KNOW-ABOUT(H, MURDER-445)) if they KNOW its characteristics, components, subtypes, or purpose (loosely, if they are "familiar" with it). KNOW-HOW indicates an agent's ability to perform an action. A number of techniques can be used to explain a proposition. One method is to simply define the predicate and terms of the proposition. For example, if the claim is the proposition Dictator(Hitler), then this can be explained by defining dictator and then defining Hitler. As expository techniques are beyond the scope of this paper, see Maybury (1991a) for details.
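The KNOW / BELIEVE distinction above can be illustrated with a small sketch. The class and method names here are ours, chosen only to mirror the semantics stated in the text (KNOW(H, P) holds only when P is true in the knowledge base and H believes P):

```python
# Illustrative model of the paper's KNOW vs. BELIEVE semantics: an agent can
# hold an invalid belief, but only true, believed propositions count as known.
class AgentModel:
    def __init__(self, facts):
        self.facts = set(facts)   # propositions true in the knowledge base
        self.beliefs = {}         # agent name -> set of believed propositions

    def believe(self, agent, prop):
        self.beliefs.setdefault(agent, set()).add(prop)

    def believes(self, agent, prop):
        return prop in self.beliefs.get(agent, set())

    def knows(self, agent, prop):
        # KNOW(H, P) implies P and BELIEVE(H, P)
        return prop in self.facts and self.believes(agent, prop)

m = AgentModel(facts={"Red(ROBIN-1)"})
m.believe("JANE", "Red(ROBIN-1)")
m.believe("JANE", "Yellow(ROBIN-1)")   # an invalid belief, as in the text
```

Under this model JANE knows Red(ROBIN-1) but merely believes Yellow(ROBIN-1), matching the paper's example.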
Even if the hearer understands the proposition, however, they may not believe it is true. To achieve belief, the speaker must convince the hearer of it. As indicated above, two types of reasoning can convince a hearer to believe a proposition: deduction and induction. The former moves top-down, from general truisms to specific conclusions, whereas the latter builds arguments bottom-up, from specific evidence to a general conclusion. A simple deductive technique is to provide evidence for the proposition. Figure 3 illustrates a slightly more sophisticated deductive technique implemented to support a medical diagnostic application. This first explains how a particular situation could be the case (by detailing the preconditions, motivations, and causes of the proposition) and then informs the hearer of any evidence supporting the proposition (optionally convincing them of this). Evidence is ordered according to importance, a metric based on domain-specific knowledge of the relevance and confidence of evidence. In contrast to deductive techniques, inductive approaches can also be effective methods of convincing an addressee to believe a proposition. These include the use of illustration, comparison/contrast and analogy. For example, you can support a claim that American academics are devalued by comparing American and Japanese education to highlight America's low valuation of the teaching profession. In contrast to this comparison, analogy entails comparing the proposition, P (which we are trying to convince the hearer to believe), with a well-known proposition, Q, which has several properties in common with P. By showing that P and Q share properties α and β, we can claim by analogy that if Q has property χ, then so does P. Maybury (1991c) details similarity algorithms that are used to support this.
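The "order according to importance" step might be sketched as follows. The paper says only that importance is a domain-specific metric over the relevance and confidence of evidence; the particular combination below (their product) and the sample data are our illustrative assumptions:

```python
# Hedged sketch of order-by-importance: rank evidence by a score derived
# from domain-specific relevance and confidence (here, simply their product).
def order_by_importance(evidence):
    """evidence: list of (name, relevance, confidence) triples."""
    return sorted(evidence, key=lambda e: e[1] * e[2], reverse=True)

# Hypothetical evidence for a medical diagnostic claim.
ranked = order_by_importance([
    ("lab-test", 0.9, 0.8),
    ("patient-report", 0.7, 0.5),
    ("family-history", 0.6, 0.9),
])
```

The convince operator would then inform the hearer of each item in this ranked order, most important first.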
NAME: convince-by-cause-and-evidence
HEADER: Convince(S, H, proposition)
CONSTRAINTS: Proposition?(proposition) ∧ ∃x | Cause(x, proposition) ∧ ∃x | Evidence(proposition, x)
PRECONDITIONS: ∃x ∈ evidence ¬KNOW-ABOUT(H, Evidence(proposition, x))
EFFECTS: ∀x ∈ evidence KNOW-ABOUT(H, Evidence(proposition, x))
DECOMPOSITION: Explain-How(S, H, proposition); ∀x ∈ evidence Inform(S, H, Evidence(proposition, x)); optional(Convince(S, H, x))
  where evidence = order-by-importance(∀x | Evidence(proposition, x) ∧ BELIEVE(S, Evidence(proposition, x)))
Figure 3. Uninstantiated Convince Plan Operator

Preference Metrics for Argument

Because there may be many methods of achieving a given goal, those operators that satisfy the constraints and essential preconditions are prioritized using preference metrics. For example, operators that utilize both text and graphics are preferred over simply textual operators (Maybury 1991b). Also, those operators with fewer subgoals are preferred (where this does not conflict with the previous preference). The preference metric prefers plan operators with fewer subplans (cognitive economy), with fewer new variables (limiting the introduction of new entities in the focus space of the discourse), those that satisfy all preconditions (to avoid backward chaining for efficiency), and those plan operators that are more common or preferred in naturally-occurring explanations (e.g., rhetoricians prefer deductive arguments over inductive ones). While the first three preferences are explicitly inferred, the last preference is implemented by the sequence in which operators appear in the plan library. Working from this prioritized list of operators, the planner ensures preconditions are satisfied and tries to execute the decomposition of each until one succeeds. This involves processing any special operators (e.g., optionality is allowed in the decomposition) or quantifiers (∀ or ∃) as well as distinguishing between subgoals and primitive acts.
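The preference metric just described can be sketched as a lexicographic sort over candidate operators. This encoding is ours, not the system's code; the field names are illustrative:

```python
# Sketch of the operator preference metric: candidates that already satisfy
# constraints are ranked by multimedia use, fewer subgoals, fewer new
# variables, satisfied preconditions, and finally plan-library order.
def prioritize(operators):
    return sorted(operators, key=lambda op: (
        not op["multimedia"],          # text+graphics preferred over text only
        op["subgoals"],                # cognitive economy: fewer subplans
        op["new_vars"],                # limit new entities in the focus space
        not op["preconds_satisfied"],  # avoid backward chaining
        op["library_index"],           # library order encodes rhetorical preference
    ))

# Two hypothetical candidate operators.
ranked_ops = prioritize([
    {"name": "convince-by-evidence", "multimedia": False, "subgoals": 2,
     "new_vars": 1, "preconds_satisfied": True, "library_index": 0},
    {"name": "convince-with-graphic", "multimedia": True, "subgoals": 3,
     "new_vars": 1, "preconds_satisfied": True, "library_index": 1},
])
```

Because Python compares the key tuples left to right, earlier preferences dominate later ones, which matches the "where this does not conflict with the previous preference" proviso.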
For example, if the planner chooses the plan operator in Figure 2 from those that satisfy its constraints, it first ensures its preconditions hold (i.e., the user knows about or "understands" the proposition), which may require backward chaining.

Communicative Acts to Encourage Action

While different forms of argument such as deduction and induction can be belief- or action-oriented, the previous sections have defined deductive and inductive forms narrowly as primarily affecting hearer beliefs; this section will similarly define persuasive techniques in the narrow sense as primarily affecting hearer actions. (Of course, in the act of convincing someone to believe a proposition using deductive or inductive techniques you can also persuade them to act. Similarly, in the course of persuading someone to act you can change their beliefs.) The following invitation exemplifies arguments that encourage action:

Come to my party tonight. It's at 1904 Park Street. We are serving your favorite munchies and we have plenty of wine and beer. Everybody is going to be there. You'll have a great time.

This text tells the reader what to do, enables them to do it, and indicates why they should do it. This common communicative strategy occurs frequently in ordinary texts intended to get people to do things. It consists of requesting them to do the act (if necessary), enabling them to do it (if they lack the know-how), and finally persuading them that it is a useful activity that will produce some desirable benefit (if they are not inclined to do it). In the above example, the action, coming to the party, is enabled by providing the address. The action is motivated by the desirable attributes of the party (i.e., tasty munchies and an abundant supply of liquor), the innate human desire to belong, and by the desired consequence of coming to it (i.e., having fun). This general strategy corresponds to the request-enable-persuade plan operator shown in Figure 4.
The operator gets the hearer to do some action by requesting, enabling, and then persuading them to do it. Enable, the second communicative act in its decomposition, refers to communicative actions which provide the addressee with enough know-how to perform the action (see Maybury 1991a for details). The plan operator in Figure 4 distinguishes among (1) the hearer's knowledge of how to perform the action (i.e., KNOW-HOW, knowledge of the subactions of the action), (2) the hearer's ability to do it (ABLE), and (3) the hearer's desire to do it (WANT). For example, the hearer may want and know how to get to a party, but they are not able to come because they are sick. If the speaker knows this, then they should not use the plan operator below because its constraints fail. The assumption is that a general user modelling/acquisition component will be able to provide this class of information.

CONSTRAINTS: Action?(action) ∧ ABLE(H, action)
PRECONDITIONS: WANT(S, Do(H, action))
EFFECTS: KNOW(H, WANT(S, Do(H, action))) ∧ KNOW-HOW(H, action) ∧ ...
DECOMPOSITION: ...
Figure 4. request-enable-persuade Plan Operator

The order and constituents of a communication that gets an individual to act, such as that in Figure 4, can be very different indeed depending upon the conversants involved, their knowledge, beliefs, capabilities, desires, and so on. Thus, to successfully get a hearer to do things, a speaker needs to reason about his or her model of the hearer in order to produce an effective text. For example, in an autocratic organization, a request (perhaps in the linguistic form of a command) is sufficient. In other contexts no request need be made because the hearer(s) may share the desired goal, as in the case of the mobilization of the Red Cross for earthquake or other catastrophic assistance. Similarly, if the hearer wants to do some action, is able to do it, and knows how to do it, then the speaker can simply ask them to do it.
Because the hearer is able to do it, the speaker need not enable them. And because the hearer wants or desires the outcome of the action, the speaker need not persuade them to do it. Thus we also represent a simple request plan operator which argues that the hearer perform an action by simply asking them to do it. A variation on this plan operator could model delegation, whereby the speaker may know the hearer is not willing to do or does not know how to perform some task, but the speaker simply asks them because it is expected that they will figure out how to do it. As with the autocratic example above, this would require a model of the interpersonal relations of the speaker and hearer (Hovy 1987). In addition to a request for action, enablement may be necessary if the audience does not know how to perform the task. The following text from the NYS Department of Motor Vehicles Driver's Manual (p. 9) informs the reader of the prerequisites for obtaining a license:

To obtain your driver's license you must know the rules of the road and how to drive a car or other vehicle in traffic.

The writer indicates that being knowledgeable of both road regulations and vehicle operation are necessary preconditions for obtaining a license. In some situations, however, the reader may be physically or mentally unable to perform some action, in which case the writer should seek alternative solutions, eventually perhaps consoling the reader if all else fails. On the other hand, if the user is able but not willing to perform the intended action, then a writer must convince them to do it, perhaps by outlining the benefit(s) of the action. Consider this excerpt from the Driver's Manual:

The ability to drive a car, truck or motorcycle widens your horizons. It helps you do your job, visit friends and relatives and enjoy your leisure time.

Of course it could be that the hearer already wants to do something but does not know how to do it.
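The case analysis walked through above (request alone when the hearer is willing and competent; add enablement when know-how is missing; add persuasion when willingness is missing; seek alternatives or console when the hearer is unable) can be summarized in a small decision sketch. The function and strategy names are ours, not the system's:

```python
# Hedged sketch of strategy selection driven by the user model's
# WANT / ABLE / KNOW-HOW facts about the hearer.
def choose_strategy(wants, able, knows_how):
    if not able:
        # Hearer cannot perform the act: seek alternatives or console.
        return "seek-alternative-or-console"
    acts = ["request"]
    if not knows_how:
        acts.append("enable")    # supply the missing know-how
    if not wants:
        acts.append("persuade")  # motivate the unwilling hearer
    return "-".join(acts)
```

With a willing, able, knowledgeable hearer this reduces to the simple request operator; with an able hearer lacking both know-how and willingness it yields the full request-enable-persuade strategy of Figure 4.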
In this situation a communicator must explain how to perform the action (see, for example, Moore & Paris (1989) and Maybury (1991a)). However, in some cases the addressee must be persuaded to act, and so the next section formalizes several techniques to do so and illustrates their use in an advisory context. First, however, we briefly compare the request-enable-persuade strategy in Figure 4 to Moore & Paris' (1989) "recommend-enable-motivate" strategy. Their plan-based system has three "motivation" strategies: motivating (by telling the purpose and/or means of an action), showing how an action is a step (i.e., subgoal) of some higher-level goal (elaborate-refinement-path), and giving evidence. Some of these techniques are domain specific (e.g., "motivate-replace-act", where "replace" is specific to the Program Enhancement Advisor domain), and others are architecture/knowledge-representation specific (e.g., "elaborate-refinement-path" is a technique based on the Explainable Expert System's architecture (Neches et al. 1985)). In contrast, the strategies presented here are domain and application independent and include persuasion by showing motivation, enablement, cause, and purpose. Furthermore, they distinguish rhetorical acts (e.g., enable, persuade) from illocutionary acts (e.g., request), from surface speech acts (e.g., command, recommend), and from the semantic relations underlying these (e.g., enablement, cause). Finally, this paper formalizes communicative acts for both convincing and persuading (i.e., convincing a hearer to believe a proposition versus persuading them to perform an action), and it is the latter which we now detail.

Persuasive Communicative Acts

When an addressee does not want to perform some action, a speaker must often persuade them to act.
There are a variety of ways to persuade the hearer, including indicating (1) the motivation for the action, (2) how the action can enable some event, (3) how it can cause a desirable outcome, or (4) how the action is part of some overall purpose or higher-level goal. For example, the plan operator named persuade-by-desirable-consequents in Figure 5 gets the hearer to want to do something by telling them all the desirable events or states that the action will cause. An action can either cause a positive result (e.g., approval, commendation, praise) or avoid a negative one (avoid blame, disaster, or loss of self-esteem). Advertising often uses this technique to induce customers to purchase products by appealing to the emotional benefits (actual or anticipated) of possession. An extension of this plan operator could warn the hearer of all the undesirable events or states that would result from their inaction.

NAME: persuade-by-desirable-consequents
HEADER: Persuade(S, H, Do(H, action))
CONSTRAINTS: Act(action) ∧ ∃x | Cause(action, x)
PRECONDITIONS: ¬WANT(H, Do(H, action))
EFFECTS: WANT(H, Do(H, action)) ∧ ∀x ∈ desirable-events-or-states KNOW(H, Cause(action, x))
DECOMPOSITION: ∀x ∈ desirable-events-or-states Inform(S, H, Cause(action, x))
  where desirable-events-or-states = {x | Cause(action, x) ∧ WANT(H, x)}
Figure 5. persuade-by-desirable-consequents Plan Operator

Some actions may not cause a desirable state or event but may enable some other desirable action (that the hearer or someone else may want to perform). For example, in the NYS driving example, obtaining a license is a precondition of driving a car, which enables you to visit friends, go shopping, etc. This plan could also be extended to warn the hearer of all the undesirable events or states that would be enabled by their inaction.
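The decomposition of persuade-by-desirable-consequents can be sketched directly from its definition: collect the states the action causes that the hearer wants, and emit one Inform act per member. The data and names below are illustrative, not taken from the system:

```python
# Sketch of the Figure 5 decomposition: desirable-events-or-states is the
# set of x such that Cause(action, x) and WANT(H, x); one Inform act is
# generated for each member.
def persuade_by_desirable_consequents(action, causes, hearer_wants):
    desirable = [x for x in causes.get(action, []) if x in hearer_wants]
    if not desirable:
        return []   # nothing desirable to cite: the strategy does not apply
    return [("Inform", "S", "H", ("Cause", action, x)) for x in desirable]

# Hypothetical instantiation for the party invitation example.
acts = persuade_by_desirable_consequents(
    "come-to-party",
    {"come-to-party": ["have-fun", "meet-everybody", "miss-bus"]},
    {"have-fun", "meet-everybody"},
)
```

Note that a caused but unwanted consequence ("miss-bus" here) is filtered out, exactly as the set definition in Figure 5 requires.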
NAME: persuade-by-purpose-and-plan
HEADER: Persuade(S, H, Do(H, action))
CONSTRAINTS: Act(action)
PRECONDITIONS: ¬WANT(H, Do(H, action))
EFFECTS: WANT(H, Do(H, action)) ∧ KNOW(H, Purpose(action, goal))
DECOMPOSITION: Inform(S, H, Purpose(action, goal)); Inform(S, H, Constituent(plan, action))
  where goal = g | Purpose(action, g) ∧ WANT(H, g) and plan = p | Constituent(p, action) ∧ WANT(H, Do(H, p))
Figure 6. persuade-by-purpose-and-plan Plan Operator

One final form of persuasion, persuade-by-purpose-and-plan, shown in Figure 6, gets the hearer to perform some action by indicating its purpose or goal(s) and how it is part of some more general plan(s) that the hearer wants to achieve. For example, one subgoal of shopping is writing a check, an action which has the effect or purpose of increasing your liquid assets. These operators give a range of persuasive possibilities. The next section illustrates their application in the context of a cooperative problem solver.

Persuasion in Planning

The persuasive plan operators described above (i.e., indicating motivation, enablement, cause, or purpose) were tested using the cooperative Knowledge-based Replanning System, KRS (Dawson et al. 1987), a resource allocation and scheduling system. KRS is implemented in a hybrid of rules and hierarchical frames. KRS employs meta-planning (Wilensky 1983) whereby high-level problem-solving strategies govern lower-level planning activities. Therefore, it can justify its actions by referring to the higher-level strategy it is employing. Figure 7 illustrates these strategies (e.g., plan an air tasking order, replan an air tasking order, replan an attack mission, and so on) which govern lower-level planning activities (e.g., prescan a package of missions, plan a package of missions, plan an individual mission, and so on).
Associated with each meta-plan shown in Figure 7 are several types of information, including its name, type, purpose, subgoals, relations among subgoals (e.g., enablement, sequence, etc.), planning history, associated entities (e.g., the name of the mission being replanned), and failure handlers. Therefore, when the actions encoded by the plans are executed, the meta-planner knows why particular actions occur when they do. For example, if the user is not persuaded that scanning a plan is a useful activity, they may ask "Why is scanning the plan necessary?" (simulated by posting the action PERSUADE(#<SYSTEM>, #<USER>, Do(#<SYSTEM>, #<PRESCAN-ATO>))). To achieve this action, our explanation planner uses the persuade-by-purpose-and-plan operator of Figure 6. This examines the meta-plan structure and produces the response shown in Figure 8 (the surface speech acts, assert, which realize the inform acts, are elided). Maybury (1991d) details the linguistic realizer.

[Figure 7. Structure of Plans and Meta-Plans in KRS: nodes include #<STRATEGY>, #<REPLAN-ATO>, #<PLAN-ATO>, #<REPLAN-ATTACK>, #<PRESCAN-ATO>, and #<PLAN-PACKAGE>, linked by abstraction.]

ARGUMENT PLAN:
Persuade(#<SYSTEM>, #<USER>, Do(#<SYSTEM>, #<PRESCAN-ATO>))
Inform(#<SYSTEM>, #<USER>, Purpose(#<PRESCAN-ATO>, #<TEST-VALIDITY>))
Inform(#<SYSTEM>, #<USER>, Constituent(#<PRESCAN-ATO>, #<PLAN-ATO>))
SURFACE FORM: The purpose of prescanning the Air Tasking Order is to test the validity of the Air Tasking Order. Prescanning the Air Tasking Order is part of planning an Air Tasking Order.
Figure 8. Argument in support of a Domain Action

Just as showing how an action supports some more general plan or goal can support that action, another way to persuade an addressee to perform (or support) an action is to indicate the cause and/or motivation for the action.
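The meta-plan lookup that persuade-by-purpose-and-plan performs can be sketched as follows. The slot names mirror the meta-plan fields described above, but the table contents and code are our illustration, not KRS internals:

```python
# Sketch of how persuade-by-purpose-and-plan could read the meta-plan
# structure to answer "Why is scanning the plan necessary?".
meta_plans = {
    "PRESCAN-ATO": {"purpose": "TEST-VALIDITY", "part-of": "PLAN-ATO"},
}

def persuade_by_purpose_and_plan(action):
    mp = meta_plans[action]
    return [
        ("Inform", ("Purpose", action, mp["purpose"])),
        ("Inform", ("Constituent", action, mp["part-of"])),
    ]
```

Realizing these two inform acts as assertions yields text of the shape shown in Figure 8: the purpose of the action, then its place in the enclosing plan.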
Because KRS is a mixed-initiative planner, it cooperates with the user to produce an Air Tasking Order, a package of air missions (e.g., reconnaissance, refueling, escort) that achieve some desired goal (e.g., defend friendly territory). Because of this multi-agent problem solving, the system and user can make choices which result in an ill-formed mission plan. If directed by the user, KRS can replan such an ill-formed mission plan using dependency-directed backtracking (e.g., making changes in the plan by reasoning about temporal and spatial relationships). KRS initially attempts to retract system-supplied choices. As a last resort, KRS suggests to the user that they remove user-supplied choices to recover from the ill-formed plan. In this case the system tries to justify its recommendation on the basis of some underlying rule governing legal plans. For example, assume the user has interacted with the system to produce the mission shown in Figure 9 (simplified for readability). The frame, OCA1002, is an offensive counter air mission, an instance of (AIO) the class offensive counter air (OCA), with attributes such as a type and number of aircraft, a home airbase, and a target. Each attribute encodes actual and possible values as well as a STATUS slot which indicates who supplied the value (e.g., user, planner, meta-planner). Frames also record interactional information; for example, in Figure 9 the HISTORY slot records that the user just selected a target and the WINDOW slot indicates where the mission plan is visually displayed. KRS represents domain-dependent relations among slots so that values for some of the slots can be automatically calculated by daemons in reaction to user input (e.g., when the UNIT and ACNUMBER slots of a mission are filled in, the CALL-SIGN slot can be automatically generated). During planning the system monitors and detects ill-formed mission plans by running rule-based diagnostic tests on the mission plan.
For example, in Figure 9 the offensive counter air mission has an incompatible aircraft and target. KRS signals the constraint violation by highlighting the conflicting slots (e.g., AIRCRAFT and TARGET) of the mission frame, which is represented visually in a mission window to the user. The built-in explanation component would then simply state the rule-based constraint which detected the error in the mission plan, and then list some of the supporting knowledge (see Figure 10). The first two sentences of the explanation in Figure 10 are produced using simple templates (canned text plus variables for the mission (OCA1002), rule name (TARGET-AIRCRAFT-1), and conflicting slots (TARGET and AIRCRAFT)). The list 1-4 is simply a sequence of information supporting the constraint violation, although there is no indication as to how these relate to each other or to the rule.

[Figure 9. Simplified Mission Plan in FRL: the OCA1002 frame, with slots AIRCRAFT (possible values F-4C F-4D F-4E F-4G F-111E F-111F; value F-111E; status USER), TARGET (value BE30703; status USER), ACNUM (status USER), AIRBASE (possible value ALCONBURY), ORDNANCE (possible values A1 A2 ... A14), HISTORY (#<EVENT INSERT TARGET BE30703 USER>), and DISPLAY (#<MISSION-WINDOW ...>).]

The choice for AIRCRAFT is in question because:
BY TARGET-AIRCRAFT-1: THERE IS A SEVERE CONFLICT BETWEEN TARGET AND AIRCRAFT FOR OCA1002
1. THE TARGET OF OCA1002 IS BE30703
2. BE30703 RADIATES
3. THE AIRCRAFT OF OCA1002 IS F-111E
4. F-111E IS NOT A F-4G
Figure 10. Current Explanation of Rule Violation

The fact that BE30703 (Battle Element 30703) is radiating indicates that it is an operational radar. KRS expects domain users (i.e., Air Force mission planners) to know that only anti-radar F-4G ("Wild Weasel") aircraft fly against these targets. Rather than achieving organization from some model of naturally occurring discourse, the presentation in Figure 10 is isomorphic to the underlying inference chain. Because the relationships among entities are implicit, this text lacks cohesion. More important, it is not clear what the system wants the user to do and why they should do it. In contrast, our text planner was interfaced to KRS by relating rhetorical predicates (e.g., cause, motivation, attribution) to the underlying semantic relations of the domain, embodied both in rules justifying constraint violations and in frames representing the mission plan and other domain entities (e.g., aircraft and target frames). Unlike the template and code translation approach used to produce the text in Figure 10, now KRS posts the action ARGUE(#<SYSTEM>, #<USER>, Do(#<USER>, #<REPLACE OCA1002 AIRCRAFT F-111E F-4G>)) to the text planner. The text planner then instantiates, selects and decomposes plan operators similar to the one in Figure 5 to generate the argument plan and corresponding surface form shown in Figure 11.

ARGUMENT PLAN:
Argue(#<SYSTEM>, #<USER>, Do(#<USER>, #<REPLACE OCA1002 AIRCRAFT F-111E F-4G>))
Recommend(#<SYSTEM>, #<USER>, Do(#<USER>, #<REPLACE...>))
Inform(#<SYSTEM>, #<USER>, Motivation(#<CONFLICT TARGET-AIRCRAFT-1>, #<REPLACE...>))
Inform(#<SYSTEM>, #<USER>, Cause(#<EVENT INSERT TARGET...>, #<CONFLICT TARGET...>))
Inform(#<SYSTEM>, #<USER>, Cause(#<STATE RADIATE BE30703>, #<CONFLICT TARGET...>))
SURFACE FORM: You should replace F-111E aircraft with F-4G aircraft in Offensive Counter Air Mission 1002. A conflict between the aircraft and the target in Offensive Counter Air Mission 1002 motivates replacing F-111E aircraft with F-4G aircraft. You inserted BE30703 in the target slot and BE30703 was radiating, which caused a conflict between the aircraft and the target in Offensive Counter Air Mission 1002.
Figure 11. Argument to Encourage User to Act -- Initiated by Rule Violation

The output is improved not only by composing the text using communicative acts, but also by linguistic devices, such as the lexical realization "Offensive Counter Air Mission 1002" instead of OCA1002, as well as verb choice, tense and aspect (e.g., "should replace", "inserted"). For example, the recommended surface speech act is realized using the obligation modal, "should". As above, assertions, the surface speech acts for inform actions, are elided from the argument plan. Finally, note how the surface realizer joins the final two informs into one utterance because they have similar propositional content (i.e., causation of the same conflict).

Conclusion and Future Directions

Argument, perhaps the most important form of communication, enables us to change others' beliefs and influence their actions. This paper characterizes argument as a purposeful communicative activity and formalizes argumentative actions (e.g., deduce, induce, persuade) as plans, indicating their preconditions, constraints, effects, and decomposition into more primitive actions. We illustrate how these plans have been used to improve a communicative interface to a cooperative problem solver. As the focus of the paper is on the presentation of arguments (i.e., their form), we make no claims regarding their representation, including associated inference or reasoning strategies. Furthermore, no claims are made concerning the representation of intentions and beliefs. Indeed, an important issue that remains to be investigated is how content and context modify the effect of different communicative plans (e.g., deduction can both change beliefs and move to action depending upon context). This seems analogous to the alteration of the force of illocutionary speech acts by variation in syntactic form or intonation. Another unresolved issue concerns the multi-functional nature of communicative acts and their interaction.
This is more complicated than representing multiple effects of actions, as do our plans. For example, the advertisement below compels the reader to action using a variety of techniques, including description, comparison, and persuasion.

Buy Pontiac. We build excitement. The new Pontiacs have power brakes, power steering, AM/FM stereos, and anti-lock brakes. And if you buy now, you will save $500. An independent study shows that Pontiacs are better than Chevrolet. See your Pontiac dealer today!

In this example, the initial request for action (i.e., purchase) is supported by indicating the desirable attributes of the product, the desirable consequences of the purchase, comparing the action with alternative courses of action/competing products, and finally imploring the hearer to act again. While some of these techniques may be implemented as plan operators in a straightforward manner (e.g., describe desirable attributes), the interaction of various text types remains a complex issue. For example, how is it that some texts can persuade by description, narration or exposition, and entertain by persuasion? What also remains to be investigated is the relationship of linguistic and visual acts to influence beliefs or actions, as in advertisement.

Acknowledgements

I want to thank Karen Sparck Jones and John Levine for detailed discussions on explanation and communication as action.

References

Aristotle. 1926. The 'Art' of Rhetoric. trans. J. H. Freese. Cambridge, MA: Loeb Classical Library series.
Appelt, D. 1985. Planning English Sentences. England: Cambridge University Press.
Birnbaum, L. 1982. Argument Molecules: A Functional Representation of Argument Structure. In Proceedings of the Third National Conference on Artificial Intelligence, 63-65. Pittsburgh, PA: AAAI.
Brooks, S. D. and Hubbard, M. 1905. Composition Rhetoric. New York: American Book Company.
Bruce, B. C. 1975. Generation as a Social Action.
In Proceedings of Theoretical Issues on Natural Language Processing-1, 64-67. Urbana-Champaign: ACL.
Cohen, P. R. 1978. On Knowing What to Say: Planning Speech Acts. University of Toronto TR-118.
Cohen, R. 1987. Analyzing the Structure of Argumentative Discourse. Computational Linguistics 13(1-2):11-23.
Dawson, B.; Brown, R.; Kalish, C.; & Goldkind, S. 1987. Knowledge-based Replanning System. RADC TR-87-60.
Dixon, P. 1987. Rhetoric. London: Methuen.
Hovy, E. 1987. Generating Natural Language Under Pragmatic Constraints. Ph.D. diss., Dept. of Computer Science, Yale University TR-521.
Hovy, E. 1988. Planning Coherent Multisentential Text. In Proceedings of the 26th Annual Meeting of the ACL, 163-169. Buffalo, NY: ACL.
Maybury, M. T. 1991a. Generating Multisentential English Text Using Communicative Acts. Ph.D. diss., Computer Laboratory, Cambridge University, England, TR-239. Also available as RADC TR 90-411.
Maybury, M. T. 1991b. Planning Multimedia Explanations Using Communicative Acts. In Proceedings of the Ninth National Conference on Artificial Intelligence, 61-66. Anaheim, CA: AAAI.
Maybury, M. T. 1991c. Generating Natural Language Definitions from Classification Hierarchies. In Advances in Classification Research and Application: Proceedings of the 1st ASIS Classification Research Workshop, ed. S. Humphrey. ASIS Monographs Series. Medford, NJ: Learned Information.
Maybury, M. T. 1991d. Topical, Temporal, and Spatial Constraints on Linguistic Realization. Computational Intelligence 7(4):266-275.
McKeown, K. 1985. Text Generation. Cambridge University Press.
Moore, J. D. and C. L. Paris. 1989. Planning Text for Advisory Dialogues. In Proceedings of the Twenty-Seventh Annual Meeting of the ACL, 203-211. Vancouver.
Neches, R.; Moore, J.; & Swartout, W. November 1985. Enhanced Maintenance and Explanation of Expert Systems Through Explicit Models of Their Development. IEEE Transactions on Software Engineering SE-11(11):1337-1351.
Paris, C. 1987.
The Use of Explicit User Models in Text Generation: Tailoring to a User's Level of Expertise. Ph.D. diss., Dept. of Computer Science, Columbia University. Reichman, R. 1985. Getting Computers to Talk Like You and Me. Cambridge, MA: MIT Press. Sacerdoti, E. D. 1977. A Structure for Plans and Behavior. New York: Elsevier North-Holland. (Originally SRI TN-109, 1975.) Searle, J. R. 1969. Speech Acts. Cambridge University Press. Toulmin, S.; Rieke, R.; & Janik, A. 1979. An Introduction to Reasoning. New York: Macmillan. Wilensky, R. 1983. Planning and Understanding. Reading, MA: Addison-Wesley. Meehan, J. R. 1977. TALE-SPIN, an Interactive Program that Writes Stories. In Proceedings of Fifth International Joint Conference on Artificial Intelligence, 91-98. Sycara, K. 1989. Argumentation: Planning Other Agents' Plans. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 517-523. 364 Maybury
Jacques Robin and Kathleen McKeown Department of Computer Science Columbia University New York, NY 10027 {robin,kathy}@cs.columbia.edu Abstract The complex sentences of newswire reports contain floating content units that appear to be opportunistically placed where the form of the surrounding text allows. We present a corpus analysis that identified precise semantic and syntactic constraints on where and how such information is realized. The result is a set of revision tools that form the rule base for a report generation system, allowing incremental generation of complex sentences. Introduction Generating reports that summarize quantitative data raises several challenges for language generation systems. First, sentences in such reports are very complex (e.g., in newswire basketball game summaries the lead sentence ranges from 21 to 46 words in length). Second, while some content units consistently appear in fixed locations across reports (e.g., game results are always conveyed in the lead sentence), others float, appearing anywhere in a report and at different linguistic ranks within a given sentence. Floating content units appear to be opportunistically placed where the form of the surrounding text allows. For example, in Fig. 1, sentences 2 and 3 result from adding the same streak information (i.e., data about a series of similar outcomes) to sentence 1 using different syntactic categories at distinct structural levels. Although optional in any given sentence, floating content units cannot be ignored. In our domain, they account for over 40% of lead sentence content, with some content types only conveyed as floating structures. One such type is historical information (e.g., maximums, minimums, or trends over periods of time). Its presence in all reports and a majority of lead sentences is not surprising, since the relevance of any game fact is often largely determined by its historical significance.
However, report generators to date [Kukich, 1983], [Bourbeau et al., 1990] are not capable of including this type of information due to its floating nature. The issue of optional, floating content is prevalent in many domains and is receiving growing attention (cf. [Rubinoff, 1990], [Elhadad and Robin, 1992], [Elhadad, 1993]). 1. Draft sentence: "San Antonio, TX - David Robinson scored 32 points Friday night lifting the San Antonio Spurs to a 127-111 victory over the Denver Nuggets." 2. Clause coordination with reference adjustment: "San Antonio, TX - David Robinson scored 32 points Friday night LIFTING THE SAN ANTONIO SPURS TO A 127-111 VICTORY OVER DENVER and handing the Nuggets their seventh straight loss". 3. Embedded nominal apposition: "San Antonio, TX - David Robinson scored 32 points Friday night lifting the San Antonio Spurs to a 127-111 victory over THE DENVER NUGGETS, losers of seven in a row". Figure 1: Attaching a floating content unit onto different draft sentence subconstituents. These observations suggest a generator design where a draft incorporating fixed content units is produced first and then any floating content units that can be accommodated by the surrounding text are added. Experiments by [Pavard, 1985] provide evidence that only such a revision-based model of complex sentence generation can be cognitively plausible.¹ To determine how floating content units can be incorporated in a draft, we analyzed a corpus of basketball reports, pairing sentences that differ semantically by a single floating content unit and identifying the minimal syntactic transformation between them. The result is a set of revision tools, specifying precise semantic and syntactic constraints on (1) where a particular type of floating content can be added in a draft and (2) what linguistic constructs can be used for the addition. The corpus analysis presented here serves as the basis for the development of the report generation system STREAK (Surface Text Revision Expressing Additional Knowledge). ¹ Cf. [Robin, 1992] for discussion. From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved. [Figure 2: Similarity of basic and complex sentence structures. Basic sentence example: "Patrick Ewing scored 41 points Tuesday night to lead the New York Knicks to a 97-79 win over the Hornets". Complex sentence example: "Karl Malone scored 28 points Saturday and John Stockton added a season-high 27 points and a league-high 23 assists, leading the Utah Jazz to its fourth straight victory, a 105-95 win over the Los Angeles Clippers".] The analysis provides not only the knowledge sources for the system and motivations for its novel architecture (discussed in [Robin, 1993]), but also the means for ultimately evaluating its output. While this paper focuses on the analysis, the on-going system implementation based on functional unification is discussed in [Robin, 1992]. After describing our corpus analysis methodology, we present the resulting revision tools and how they can be used to incrementally generate complex sentences. We conclude by previewing our planned use of the corpus for evaluation and testing. Corpus analysis methodology We analyzed the lead sentences of over 800 basketball game summaries from the UPI newswire. We focused on the first sentence after observing that all reports followed the inverted pyramid structure with summary lead [Fensch, 1988] where the most crucial facts are packed in the lead sentence. The lead sentence is thus a self-contained mini-report. We first noted that all 800 lead sentences contained the game result (e.g., "Utah beat Miami 105-95"), its location, date and at least one final game statistic: the most notable statistic of a winning team player.
We then semantically restricted our corpus to about 300 lead sentences which contained only these four fixed content units and zero or more floating content units of the most common types, namely: - Other final game statistics (e.g., "Stockton finished with 27 points"). - Streaks of similar results (e.g., "Utah recorded its fourth straight win"). - Record performances (e.g., "Stockton scored a season-high 27 points"). Complex Sentence Structure We noted that basic corpus sentences, containing only the four fixed content units, and complex corpus sentences, which in addition contain up to five floating content units, share a common top-level structure. This structure consists of two main constituents, one containing the notable statistic (the notable statistic cluster) and the other containing the game result (the game result cluster), which are related either paratactically or hypotactically with the notable statistic cluster as head. Hence, the only structural difference is that in the complex sentences additional floating content units are clustered around the notable statistic and/or the game result. For example, the complex sentence at the bottom of Fig. 2 has the same top-level structure as the basic sentence at the top, but four floating content units are clustered around its notable statistic and a fifth one with its game result. Furthermore, we found that when floating elements appear in the lead sentence, their semantics almost always determines in which of the two clusters they appear (e.g., streaks are always in the game result cluster). These corpus observations show that any complex sentence can indeed be generated in two steps: (1) produce a basic sentence realizing the fixed content units, (2) incrementally revise it to incorporate floating content units.
Furthermore, they indicate that floating content units can be attached within a cluster, based on local constraints, thus simplifying both generation and our corpus analysis. When we shifted our attention from whole sentence structure to internal cluster structures, we split the whole sentence corpus into two subsets: one containing notable statistic clusters and the other, game result clusters. Cluster structure To identify syntactic and lexical constraints on the attachment of floating content units within each cluster, we analyzed the syntactic form of each cluster in each corpus lead sentence to derive realization patterns. Realization patterns abstract from lexical and syntactic features (e.g., connectives, mood) to represent the different mappings from semantic structure to syntactic structure. Examples of realization patterns are given in Fig. 3. Each column corresponds to a syntactic constituent and each entry provides information about this constituent: (1) semantic content², (2) grammatical function, (3) structural status (i.e., head, argument, adjunct etc.) and (4-5) syntactic category³. Below each pattern a corpus example is given. Realization patterns represent the structure of entire clusters, whether basic or complex. To discover how complex clusters can be derived from basic ones through incremental revision, we carried out a differential analysis of the realization patterns based on the notions of semantic decrement and surface decrement illustrated in Fig. 4. A cluster Cd is a semantic decrement of cluster Ci if Cd contains all but one of Ci's content unit types. Each cluster has a set of realization patterns. The surface decrement of a realization pattern of Ci is the realization pattern of Cd that is structurally closest. Figure 3 shows a semantic decrement pairing Cd, a single content unit, with Ci, which contains two content units.
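The semantic decrement relation defined above is just set containment over content-unit types, so it can be stated precisely in a few lines. The following is a toy illustration only, not the authors' implementation; all names are ours:

```python
def is_semantic_decrement(cd, ci):
    """Cd is a semantic decrement of Ci iff Cd contains all but one
    of Ci's content-unit types (both given as sets of type names)."""
    return cd < ci and len(ci - cd) == 1

def decrement_pairs(clusters):
    """Enumerate all <Cd, Ci> semantic decrement pairs among a
    collection of clusters, each a frozenset of content-unit types."""
    return [(cd, ci) for cd in clusters for ci in clusters
            if is_semantic_decrement(cd, ci)]

# The Fig. 3 pairing: Ci has game-result and streak, Cd only game-result.
ci = frozenset({"game-result", "streak"})
cd = frozenset({"game-result"})
print(is_semantic_decrement(cd, ci))  # True: Cd lacks exactly "streak"
```

The surface-decrement step (picking the structurally closest realization pattern of Cd) is the hard, linguistic part and is not modeled here.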
Both clusters have two realization patterns associated with them as they each can be realized by two different syntactic structures. These four syntactic structure patterns must be compared to find the surface decrements. Since Rd1 is entirely included in Ri1⁴, it is the surface decrement of Ri1. To identify the surface decrement of Ri2, we need to compare it to Rd1 and Rd2 in turn. All the content units common to Ri2 and Rd2 are realized by identical syntactic categories in both patterns. In particular, the semantic head (game-result) is mapped onto a noun ("victory" in Rd2, "triumph" in Ri2). In contrast, this same semantic head is mapped onto a verb ("to defeat") in Rd1. Rd2, rather than Rd1, is thus the surface decrement of Ri2. We identified 270 surface decrement pairs in the corpus. For each such pair, we then determined the structural transformations necessary to produce the more complex pattern from the simpler base pattern. We grouped these transformations into classes that we call revision tools. Revisions for incremental generation We distinguished two kinds of revision tools. Simple revisions consist of a single transformation which preserves the base pattern and adds in a new constituent. Complex revisions are in contrast non-monotonic; an introductory transformation breaks up the integrity of the base pattern in adding in new content. Subsequent restructuring transformations are then necessary to restore grammaticality. Simple revisions can be viewed as elaborations while complex revisions require true revision. Simple revisions We identified four main types of simple revisions: Adjoin⁵, Append, Conjoin and Absorb. ² An empty box corresponds to a syntactic constituent required by English grammar but not in itself conveying any semantic element of the domain representation. ³ The particular functions and categories are based on [Quirk et al., 1985], [Halliday, 1985] and [Fawcett, 1987]. ⁴ Remember that we compare patterns, not sentences.
Each is characterized by the type of base structure to which it applies and the type of revised structure it produces. For example, Adjoin applies only to hypotactic base patterns. It adds an adjunct An under the base pattern head Bh as shown in Fig. 5. Adjoin is a versatile tool that can be used to insert additional constituents of various syntactic categories at various syntactic ranks. The surface decrement pair <Rd1, Ri1> in Fig. 3 is an example of clause rank PP adjoin. In Fig. 6, the revision of sentence 5 into sentence 6 is an example of nominal rank pre-modifier adjoin: "franchise record" is adjoined to the nominal "sixth straight home defeat". In the same figure, the revision of sentence 2 into sentence 3 is an example of another versatile tool, Conjoin: an additional clause, "Jay Humphries added 24", is coordinated with the draft clause "Karl Malone tied a season high with 39 points". In general, Conjoin groups a new constituent An with a base constituent Bc1 in a new paratactic⁶ complex. The new complex is then inserted where Bc1 alone was previously located (cf. Fig. 5). Note how in Fig. 1 paraphrases are obtained by applying Conjoin at different levels in the base sentence structure. Instead of creating a new complex, Absorb relates the new constituent to the base constituent Bc1 by demoting Bc1 under the new constituent's head Ah, which is inserted in the sentence structure in place of Bc1 as ⁵ Our Adjoin differs from the adjoin of Tree-Adjoining Grammars (TAGs). Although TAGs could implement three of our revision tools, Adjoin, Conjoin and Append, it could not directly implement non-monotonic revisions. ⁶ Either coordinated or appositive. Ci: <game-result(winner,loser,score), streak(winner,aspect,type,length)>. Cd: <game-result(winner,loser,score)>.
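As a toy illustration of the Conjoin schema just described (not the STREAK implementation; the nested-dict encoding and all names are our own assumptions), the tool can be viewed as replacing a base constituent anywhere in a draft structure with a paratactic complex grouping it with the new constituent:

```python
# Toy clause structures as nested dicts; the encoding is illustrative only.
def conjoin(node, base, new):
    """Replace constituent `base` wherever it occurs in `node` by a
    paratactic complex coordinating `base` with `new` (cf. the Conjoin
    schema: the complex is inserted where the base alone stood)."""
    if node == base:
        return {"coord": "and", "conjuncts": [base, new]}
    if isinstance(node, dict):
        return {k: conjoin(v, base, new) for k, v in node.items()}
    if isinstance(node, list):
        return [conjoin(v, base, new) for v in node]
    return node

draft = {"main": "Karl Malone tied a season high with 39 points",
         "subord": "as the Utah Jazz defeated the Boston Celtics 118-94"}
revised = conjoin(draft, draft["main"], "Jay Humphries added 24")
print(revised["main"]["conjuncts"])
```

Because the replacement is purely local, the surrounding structure (here the subordinate clause) is untouched, which is exactly what makes simple revisions monotonic.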
[Figure 3: Realization pattern examples. Four patterns are shown, each as a table mapping semantic content to grammatical function, structural status and syntactic category, with a corpus example below each: Ri1(Ci): "Chicago beat Phoenix 99-91 for its third straight win"; Rd1(Cd), surface decrement of Ri1(Ci): "Seattle defeated Sacramento 121-93"; Ri2(Ci): "Utah extended its winning streak to six games with a 118-94 triumph over Denver"; Rd2(Cd), surface decrement of Ri2(Ci): "Chicago claimed a 128-94 victory over New Jersey".] [Figure 4: Differential analysis of realization patterns - Ci = {U1 ... Un-1, Un} and its semantic decrement Cd = {U1 ... Un-1}, each mapped by linguistic realization to its space of realization patterns.] [Figure 5: Structural schemas of five revision tools - Adjoin, Conjoin, Absorb, Adjunctization and Nominalization - showing base structures and revised structures.] 1. Initial draft (basic sentence pattern): "Hartford, CT - Karl Malone scored 39 points Friday night as the Utah Jazz defeated the Boston Celtics 118-94." 2. adjunctization: "Hartford, CT - Karl Malone tied a season high with 39 points Friday night as the Utah Jazz defeated the Boston Celtics 118-94." 3. conjoin: "Hartford, CT - Karl Malone tied a season high with 39 points and Jay Humphries added 24 Friday night as the Utah Jazz defeated the Boston Celtics 118-94." 4.
absorb: "Hartford, CT - Karl Malone tied a season high with 39 points and Jay Humphries came off the bench to add 24 Friday night as the Utah Jazz defeated the Boston Celtics 118-94." 5. nominalization: "Hartford, CT - Karl Malone tied a season high with 39 points and Jay Humphries came off the bench to add 24 Friday night as the Utah Jazz handed the Boston Celtics their sixth straight home defeat 118-94." 6. adjoin: "Hartford, CT - Karl Malone tied a season high with 39 points and Jay Humphries came off the bench to add 24 Friday night as the Utah Jazz handed the Boston Celtics their franchise record sixth straight home defeat 118-94." Figure 6: Incremental generation of a complex sentence using various revision tools shown in Fig. 5. For example, in the revision of sentence 3 into sentence 4 in Fig. 6, the base VP "added 24" gets subordinated under the new VP "came off the bench", taking its place in the sentence structure. See [Robin, 1992] for a presentation of Append. Complex revisions We identified six main types of complex revisions: Recast, Argument Demotion, Nominalization, Adjunctization, Constituent Promotion and Coordination Promotion. Each is characterized by different changes to the base which displace constituents, alter the argument structure or change the lexical head. Complex revisions tend to be more specialized tools than simple revisions. For example, Adjunctization applies only to clausal base patterns headed by a support verb Vs. A support verb [Gross, 1984] does not carry meaning by itself, but primarily serves to support one of its meaning bearing arguments. Adjunctization introduces new content by replacing the support verb by a full verb Vf with a new argument An. Deprived of its verbal support, the original support verb argument Bc2 migrates into adjunct position, as shown in Fig. 5. The surface decrement pair <Rd2, Ri2> of Fig.
3 is an example of Adjunctization: the RANGE argument of Rd2 migrates to become a MEANS adjunct in Ri2 when the head verb is replaced. The revision of sentence 1 into sentence 2 in Fig. 6 is a specific Adjunctization example: "to score" is replaced by "to tie", forcing the NP "39 points" (initially argument of "to score") to migrate as a PP adjunct "with 39 points". In the same figure, the revision of sentence 4 into sentence 5 is an example of another complex revision tool, Nominalization. As opposed to Adjunctization, Nominalization replaces a full verb Vf by a synonymous <support-verb, noun> collocation <Vs, Nf> where Nf is a nominalization of Vf. A new constituent An can then be attached to Nf as a pre-modifier as shown in Fig. 5. For example, in revising sentence 4 into sentence 5 (Fig. 6), the full-verb pattern "X defeated Y" is first replaced by the collocation pattern "X handed Y a defeat". Once nominalized, defeat can then be pre-modified by the constituents "their", "sixth straight" and "home", providing historical background. See [Robin, 1992] for presentation of the four remaining types of complex revisions. Side transformations Restructuring transformations are not the only type of transformations following the introduction of new content in a revision. Both simple and complex revisions are also sometimes accompanied by side transformations. Orthogonal to restructuring transformations which affect grammaticality, side transformations make the revised pattern more concise, less ambiguous, or better in use of collocations. We identified six types of side transformations in the corpus: Reference Adjustment, Ellipsis, Argument Control, Ordering Adjustment, Scope Marking and Lexical Adjustment. The revision of sentence 1 into sentence 2 in Fig. 1 is an example of simple revision with Reference Adjustment. Following the introduction of a second reference to the losing team "the Nuggets . .
,", the initial reference is abridged to simply "Denver" to avoid the repetitive form "a 127-111 victory over the Denver Nuggets, handing the Denver Nuggets their seventh straight loss". See [Robin, 1992] for a presentation of the other types of side transformations. Revision tool usage Table 1 quantifies the usage of each tool in the corpus. The total usage is broken down by linguistic rank and by class of floating content units (e.g., Adjoin was used 88 times in the corpus, 23 times to attach a streak at the nominal rank in the base sentence). Occurrences of side transformations are also given. [Table 1: Revision tool usage in the corpus - per-tool counts (e.g., adjoin 88, append 10, conjoin 127) for the simple revisions (adjoin, append, conjoin, absorb) and the complex revisions (recast, argument demotion, nominalization, adjunctization, constituent promotion, coordination promotion), broken down by linguistic rank and by class of floating content unit, with side-transformation occurrences; 270 revisions in total.] Figure 6 illustrates how revision tools can be used to incrementally generate very complex sentences. Starting from the draft sentence 1 which realizes only four fixed content units, five revision tools are applied in sequence, each one adding a new floating content unit. Structural transformations undergone by the draft at each increment are highlighted: deleted constituents are underlined, added constituents boldfaced and displaced constituents italicized. Note how displaced constituents sometimes need to change grammatical form (e.g., the finite VP "added 24" of (3) becomes infinitive "to add 24" in (4) after being demoted). Conclusion and future work The detailed corpus analysis reported here resulted in a list of revision tools to incrementally incorporate additional content into draft sentences.
These tools constitute a new type of linguistic resource which improves on the realization patterns traditionally used in generation systems (e.g., [Kukich, 1983], [Jacobs, 1985], [Hovy, 1988]) due to three distinctive properties: - They are compositional (concerned with atomic content additions local to sentence subconstituents). - They incorporate a wide range of contextual constraints (semantic, lexical, syntactic, stylistic). - They are abstract (capturing common structural relations over sets of sentence pairs). These properties allow revision tools to opportunistically express floating content units under surface form constraints and to model a sublanguage's structural complexity and diversity with maximal economy and flexibility. Our analysis methodology based on surface decrement pairs can be used with any textual corpus. Revision tools also bring together incremental generation and revision in a novel way, extending both lines of research. The complex revisions and side transformations we identified show that accommodating new content cannot always be done without modifying the draft content realization. They therefore extend previous work on incremental generation [Joshi, 1987], [De Smedt, 1990] that was restricted to elaborations preserving the linguistic form of the draft content. As content-adding revisions, the tools we identify also extend previous work on revision [Meteer, 1991], [Inui et al., 1992] that was restricted to content-preserving revisions for text editing. In addition to completing the implementation of the tools we identified as revision rules for the STREAK generator, our plans for future work include the evaluation of these tools. The corpus described in this paper was used for acquisition. For testing, we will use two other corpora.
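One simple way to quantify how well a fixed set of acquired tools covers a fresh corpus is a coverage ratio over realization patterns. The sketch below is a hypothetical simplification (a real test would match full realization patterns, not bare labels, and the pattern names are invented):

```python
def coverage(test_patterns, producible):
    """Proportion of test-corpus realization patterns that a given
    predicate (pattern -> bool) can reproduce from acquired tools."""
    if not test_patterns:
        return 0.0
    hits = sum(1 for p in test_patterns if producible(p))
    return hits / len(test_patterns)

# Toy data: tool labels acquired from the initial corpus, and the
# patterns observed in a held-out test corpus.
acquired = {"pp-adjoin", "conjoin", "adjunctization"}
patterns = ["conjoin", "conjoin", "nominalization", "pp-adjoin"]
print(coverage(patterns, acquired.__contains__))  # 0.75
```

The same function, with the roles of the two corpora swapped, would give the domain-independence proportion.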
To evaluate completeness, we will look at another season of basketball reports and compute the proportion of sentences in this test corpus whose realization pattern can be produced by applying the tools acquired in the initial corpus. Conversely, to evaluate domain-independence, we will compute, among the tools acquired in the initial corpus, the proportion of those resulting in realization patterns also used in a test corpus of stock market reports. The examples below suggest that the same floating constructions are used across different quantitative domains: - "Los Angeles - John Paxson hit 12 of 16 shots Friday night to score a season high 26 points helping the Chicago Bulls snap a two game losing streak with a 105-97 victory over the Los Angeles Clippers." - "New York - Stocks closed higher in heavy trading Thursday, as a late round of computer-guided buy programs tied to triple-witching hour helped the market snap a five session losing streak." Although the analysis reported here was carried out manually for the most part, we hope to automate most of the evaluation phase using the software tool CREP [Duford, 1993]. CREP retrieves corpus sentences matching an input realization pattern encoded as a regular expression of words and part-of-speech tags. Acknowledgments Many thanks to Tony Weida and Judith Klavans for their comments on an early draft of this paper. This research was partially supported by a joint grant from the Office of Naval Research and the Defense Advanced Research Projects Agency under contract N00014-89-J-1782, by National Science Foundation Grants IRI-84-51438 and GER-90-2406, and by New York State Center for Advanced Technology Contracts NYSSTF-CAT(92)-053 and NYSSTF-CAT(91)-053. References Bourbeau, L.; Carcagno, D.; Goldberg, E.; Kittredge, R.; and Polguere, A. 1990. Bilingual generation of weather forecasts in an operations environment.
In Proceedings of the 13th International Conference on Computational Linguistics. COLING. De Smedt, K.J.M.J. 1990. IPF: an incremental parallel formulator. In Dale, R.; Mellish, C.S.; and Zock, M., editors 1990, Current Research in Natural Language Generation. Academic Press. Duford, D. 1993. CREP: a regular expression-matching textual corpus tool. Technical Report CUCS-005-93, Columbia University. Elhadad, M. and Robin, J. 1992. Controlling content realization with functional unification grammars. In Dale, R.; Hovy, E.; Roesner, D.; and Stock, O., editors 1992, Aspects of Automated Natural Language Generation. Springer Verlag. 89-104. Elhadad, M. 1993. Using argumentation to control lexical choice: a unification-based implementation. Ph.D. Dissertation, Computer Science Department, Columbia University. Fawcett, R.P. 1987. The semantics of clause and verb for relational processes in English. In Halliday, M.A.K. and Fawcett, R.P., editors 1987, New developments in systemic linguistics. Frances Pinter, London and New York. Fensch, T. 1988. The sports writing handbook. Lawrence Erlbaum Associates, Hillsdale, NJ. Gross, M. 1984. Lexicon-grammar and the syntactic analysis of French. In Proceedings of the 10th International Conference on Computational Linguistics. COLING. 275-282. Halliday, M.A.K. 1985. An introduction to functional grammar. Edward Arnold, London. Hovy, E. 1988. Generating natural language under pragmatic constraints. L. Erlbaum Associates, Hillsdale, NJ. Inui, K.; Tokunaga, T.; and Tanaka, H. 1992. Text revision: a model and its implementation. In Dale, R.; Hovy, E.; Roesner, D.; and Stock, O., editors 1992, Aspects of Automated Natural Language Generation. Springer-Verlag. 215-230. Jacobs, P. 1985. PHRED: a generator for natural language interfaces. Computational Linguistics 11(4):219-242. Joshi, A.K. 1987. The relevance of tree-adjoining grammar to generation.
In Kempen, Gerard, editor 1987, Natural Language Generation: New Results in Artificial Intelligence, Psychology and Linguistics. Martinus Nijhoff Publishers. Kukich, K. 1983. The design of a knowledge-based report generator. In Proceedings of the 21st Conference of the ACL. ACL. Meteer, M. 1991. The implications of revisions for natural language generation. In Paris, C.; Swartout, W.; and Mann, W.C., editors 1991, Natural Language Generation in Artificial Intelligence and Computational Linguistics. Kluwer Academic Publishers. Pavard, B. 1985. La conception de systemes de traitement de texte. Intellectica 1(1):37-67. Quirk, R.; Greenbaum, S.; Leech, G.; and Svartvik, J. 1985. A comprehensive grammar of the English language. Longman. Robin, J. 1992. Generating newswire report leads with historical information: a draft and revision approach. Technical Report CUCS-042-92, Computer Science Department, Columbia University, New York, NY. Ph.D. Thesis Proposal. Robin, J. 1993. A revision-based generation architecture for reporting facts in their historical context. In Horacek, H. and Zock, M., editors 1993, New Concepts in Natural Language Generation: Planning, Realization and Systems. Frances Pinter, London and New York. Rubinoff, R. 1990. Natural language generation as an intelligent activity. Ph.D. Thesis Proposal, Computer Science Department, University of Pennsylvania. 372 Robin
Machine Translation of Spatial Expressions: Defining the Relation between an Interlingua and a Knowledge Representation System* Bonnie J. Dorr and Clare R. Voss Department of Computer Science A.V. Williams Building University of Maryland College Park, MD 20742 {bonnie,voss}@cs.umd.edu Abstract: In this paper we present one aspect of our research on machine translation (MT): defining the relation between the interlingua (IL) and a knowledge representation (KR) within an MT system. Our interest lies in the translation of natural language (NL) sentences where the "message" contains a spatial relation - in particular, where the sentence conveys information about the location or path of physical entities in the real, physical world. We explore several arguments for clarifying the source of constraints on the particular IL structures needed to translate these sentences. This paper develops one approach to defining these constraints and building an MT system where the IL structures designed to satisfy these constraints may be tested. In this way, we have begun to address one of the basic issues in MT research, providing independent justification for the IL itself. Keywords: Natural Language Processing, Knowledge Representation, Machine Translation, Lexical Knowledge, Spatial Knowledge 1 Introduction In this paper we present one aspect of our research on machine translation (MT): defining the relation between the interlingua (IL) and a knowledge representation (KR) system within an MT system called LEXITRAN. Our interest lies in the translation of natural language (NL) sentences where the "message" contains a spatial relation - in particular, where the sentence conveys information about the location or path of physical entities in the real, physical world.
We will be looking at sentences such as: (1) Die Kirche liegt im Siiden der Stadt which may have either of the following interpretations: (2) (i) The church lies in the south of the city (ii) The church lies to the south of the city Here the location of a church is ambiguous with respect to the city: the church may lie in the southern part of the city, within the city limits, or it may lie south of the city. The need to translate such sentences accurately *This research has been partially supported by the Na- tional Science Foundation under grant IRI-9120788, by the DARPA Basic Research Program under grant N00014-92- J- 1929, by the Army Research Office under contract DAAL03- 91-C-0034 through Batclle Corporation, and by the Army Re- search Inst,itut,r: under contract MDA-903-92-R-0035 through Microelectronics and Design, IIIC. 374 Dsrr presents a clear case of where general as well as specific real world knowledge should assist in eliminating inap- propriate translations. For example, had the sentence above been about a mountain lying im Siiden of the city, the MT system should be able to use the default knowledge in a KR sys- tem that mountains typically are physical entities dis- tinct and external to cities, to produce only the second, “outside the city” translation of the sentence.’ The MT system should also be able, when the information is avail- able, to take advantage of specific facts to override the default reasoning. For example, if the RR system con- tains a fact, or is able to infer from other facts, that the particular mountain named in the sentence is in a city, the MT system should only produce the first (i.e., “inside the city”) translation of the sentence.2 What is intriguing about this translation is that the ambiguity concerns such a conceptually clear distinction ( i.e., lying inside of, vs. 
outside of, a geographical region), yet this conceptual distinction is not "lexicalized", i.e., it is not readily noticeable in the words of the sentence. This observation has led us to ask how the encoding of spatial relations - such as being lexicalized or not - should result in different formal representations for these relations in the components of an MT system, and what the interdependencies of these encodings should be.

In producing the correct translation of a sentence, an MT system may need to have access to information about a spatial relation that is only logically implicit, i.e., not lexicalized - as was the case with the mountain/city sentence above. We will argue in this paper that, for a particular sentence, its logically implicit relations should be kept distinct from its lexicalized relations. In the sections to follow, we will explain how this position is reflected in our system design by maintaining the main components - the syntax, the interlingua, and the KR system - as separate modules.

In this context we address the question of what the relation between the interlingua (IL) and KR means. In general terms, our discussion will focus on the specific ways a KR system can assist the MT system in filtering out incorrect translations. In particular, of all the IL structures built during the analysis phase, where each IL structure represents a distinct interpretation of the input sentence, we ask how the KR system eliminates those interpretations that are incorrect or highly unlikely before the generation phase begins.

1We are assuming a non-interactive MT system here, not a system that has recourse to asking a person on-line during the translation process which of the possible meanings was intended or is most likely.
2One of our reviewers noted, for example, that towns like Edinburgh have mountains in them.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.
The issues we will be examining are:

• What primitives in the domain of spatial relations must be in both the IL and KR components?
• What structures are passed between the IL and KR?
• What relations will the KR need to infer, i.e., that are not in the IL?

The system design of LEXITRAN reflects two research issues. First, we wish to capture the insights of Dorr (1993) and those of Nirenburg et al. (1992) in the same model because they are complementary: Dorr has streamlined the syntax-IL mapping and Nirenburg et al. (1992) have demonstrated the advantages of including a taxonomic, or ontological, knowledge base (KB) in an MT system. No MT system currently exists that combines these two approaches or is able to make the claims of both Dorr and Nirenburg et al.3

Second, MT theory has not yet defined the issues surrounding how IL and KR formalisms should be related, either in terms of primitives, structures, or overall MT system computational issues, such as efficiency. In the development of grammatical theory, for example, the "points of contact" between the syntax and the real world knowledge4 have been addressed in natural language processing (NLP) systems (e.g. Winograd (1973), and others in Grosz, Sparck-Jones, and Webber (1987)). However, with respect to a theory of the IL, these issues are more complex because no consensus exists yet on the criteria for evaluating ILs.

It is this second concern for defining the relation between the IL and the KR components of an MT system that we focus on in this paper.

2 Background

This section first describes our system, focusing on certain issues relevant to defining an interlingua, and then introduces the formalism we are using as an interlingua for our system.

2.1 From UNITRAN to LEXITRAN

In translating from the input, i.e., a source language (SL) sentence, to the output, i.e., one or more target language (TL) sentences, an IL-based MT system5 proceeds through two phases:
3For an introduction to the various approaches used in MT systems, see Hutchins and Somers (1992), chapter 4. Reference is made to other IL-based MT systems in section 2.2 below.
4For example, the notion of selectional restrictions - such as the requirement that the verb sleep have an animate subject to explain the anomaly of The ideas are sleeping - hinges on one's definition of terms that are taxonomic (or ontological) and syntactic - "animate" and "subject" in the case of sleep.
5Another type of MT system is the transfer-based model. For examples of this approach, see Abeillé, Schabes, and Joshi (1990), Alonso (1990), Arnold and Sadler (1990), Boitet (1987), Colmerauer et al. (1971), Kaplan et al. (1989), McCord (1989), van Noord et al. (1990), Thurmair (1990), among others. Simplifying somewhat, the following pyramid diagram is often used to illustrate a range of levels at which transfer is possible in an MT system, suggesting that as more of the source text is analyzed, the transfer becomes simpler. [Pyramid diagram: direct transfer at the base, then syntactic transfer, then semantic transfer, with the interlingua at the apex.] See Hutchins and Somers (1992), chapter 6, for a critical discussion of different MT models.

• An analysis phase: the SL sentence is translated into the IL formalism.
• A generation phase: the IL structures are translated into TL sentences.

MT systems vary widely with respect to the processing components that are used in these two phases and the manner in which these components exchange intermediate representations during the translation. A syntactic processor is required both for analysis of the SL sentence and synthesis of the final TL sentence. Also in both phases, an IL processor is needed to compose the IL representation (during the interpretation of the SL input) and to decompose the IL representation (during the production of the TL output).

In our current system, LEXITRAN, we have adopted the syntactic and IL processors from UNITRAN (see Dorr (1987, 1990, 1993)). However, our system differs from the UNITRAN design in two ways. First, in terms of the translation steps, LEXITRAN has an intermediary "filter" phase between the analysis of the SL sentence and the generation of the TL sentences. Second, in terms of system components, LEXITRAN has a component containing a KR system, which is separate from the syntactic and IL processors.

The filter phase makes use of a KR component and an MT system IL-KR interface program. During this phase, each structure that is output by the semantic analysis/composition phase is passed separately via the interface program to the KR component.6 This component in turn filters out, or discards, those structures containing spatial relations that are incompatible with the system's facts; the resulting representations comprise the interlingua. The entire translation process is illustrated in Dorr and Voss (1993).

We should note that an MT system that includes a filter phase, by taking the extra step of having the KR system check its interpretations of a SL sentence (the IL set produced during the analysis phase), tackles two significant problems efficiently: (1) the MT system may be scaled up in terms of the number of natural languages it handles, without requiring changes to the KR system which is isolated from the syntax; and (2) the MT system

6For the KR component we are using PARKA (Spector, Hendler, and Evett (1990) and Spector et al. (1992)), a frame-based KR system which was designed to provide a principled approach to multiple inheritance and the representation of part-whole relations. (Also, see Woods and Schmolze (1992) for an overview of the KL-ONE family of KR systems that PARKA belongs to.) Since at this time our KR needs are quite narrow, we have opted to use this adequate and readily available resource.
Natural Language Sentence Analysis 375

may be scaled up in terms of the number of words in a language,7 without requiring changes to the syntactic component, similarly because that component is isolated from the KR.

2.2 Defining an Interlingua

As mentioned in section 1, the field of MT research lacks a consensus on what evaluation criteria should be applied to IL formalisms. Other IL-based MT systems have drawn on a variety of semantic formalisms as the basis for their IL. For example, the project Rosetta (Appelo and Landsbergen (1986)) uses an IL based on M-grammar, a representation derived from Montague grammar (Dowty, Wall, and Peters (1981)). Barnett et al. (1991) at MCC have taken Discourse Representation Theory (DRT) (Heim (1982), Kamp (1984)) as their starting point for an IL. In both these cases, the original representations were developed as a theoretical formalism and then were adapted for an MT system. This raises the question of how the IL in an MT system relates to a theory of semantics.

The semantic or IL structures of MT systems must meet two different types of criteria. First, IL structures must map somehow to KR structures and we must justify what differentiates the representations in the IL and KR components since neither is constrained by perceptual data. Defining an IL-KR mapping is a precondition to building an MT system that can take advantage of the KR capabilities to filter out incorrect translations. Second, the IL structures in an MT system must map to and from syntactic structures of all the languages in the system - not just one language.

Our approach in LEXITRAN has been to assume that the "languages" of the IL and KR systems share many of the same predicates, but are not identical. Instead, the IL predicates are a proper subset of those in the KR system because we wish to allow, in principle, for KR concepts that are not needed for language-to-language translations.
This avoids the problem of trying to represent a "full" meaning for each word in a sentence being translated.

Another advantage to making this distinction is that we wish to have the predicates of the IL system be driven by the demands of the syntax-to-IL mapping, rather than by the KR system. This design consideration protects our system from becoming unnecessarily brittle as the KR system grows or changes with the domain of translation. It also reflects our bias toward maintaining the advantages of assumptions made by Dorr (1993) over those of Nirenburg et al. (1992) when the two have different consequences for LEXITRAN.

2.3 Lexical Conceptual Structures

LEXITRAN bases its IL formalism on the theory of semantic structures developed by Jackendoff (1983, 1990). The representation he developed, referred to as lexical conceptual structure (LCS) and later conceptual structure (CS), is defined at the word level. That is, for each word, there exists one or more CSs that defines its meaning as a structure. For the meaning of a sentence, simplifying somewhat, the CSs for the words in that sentence are composed into one CS. The resulting CS then represents the meaning of the sentence.

When a word has multiple meanings, it has, for each of those meanings, a separate CS associated with it in the lexicon. This occurs, for example, in the case of the English word under, which is 'overloaded' and can convey several distinct interpretations. For these same interpretations German uses its word unter and then relies on the grammatical mechanism of case markings and an additional word to make further distinctions.
Consider the translation of the English sentence The mouse ran under the table to its three German equivalents:

(3) (i) Die Maus ist unter dem Tisch gelaufen
'The mouse ran (about in the area) under the table'
(ii) Die Maus ist unter den Tisch gelaufen
'The mouse ran (to a place somewhere) under the table'
(iii) Die Maus ist unter dem Tisch durch gelaufen
'The mouse ran (past a place somewhere) under the table'

The English preposition under together with the verb run conveys ambiguously three possible spatial relations, i.e. three different paths that the mouse may take. In German, two of these paths are distinguished from the third by explicit case markings: the accusative and dative cases show up on the determiners den and dem of the noun Tisch, and distinguish between the path having an endpoint (as when the mouse stops under the table) and the path being open-ended (as when the mouse continues to move either past or about under the table). The mechanism of a verbal prefix is then also available in German for conveying additional information about the path.8 Here the prefix durch is needed to convey that the path is not only under the table, but that it also continues 'past' or beyond being under the table.

Note that these two ways of explicitly distinguishing the path types - namely, the presence of the word durch and the different case marking options in the German translations - give us evidence that we do indeed need to have enough information in our CSs for the English word under to differentiate among these path types. Without the path details being stored in the CSs, the information needed to generate the German translations correctly would not appear in the IL and hence would be lost in the analysis phase.

Now consider a change of table to door in the sentence: The mouse ran under the door. This change does not affect the IL composition process in the translation.
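The three readings in (3) above and the German surface cues that distinguish them can be tabulated in a few lines of code; the path-type labels below are our own illustrative names, not the paper's CS notation.

```python
# Illustrative mapping from the three path types conveyed by English
# "run under" to the German surface cues that distinguish them
# (path-type labels are ours, not the paper's CS notation).
PATH_TYPES = {
    "about-under": {"case": "dative",     "prefix": None,     # (3i) unter dem Tisch
                    "gloss": "ran about in the area under"},
    "to-under":    {"case": "accusative", "prefix": None,     # (3ii) unter den Tisch
                    "gloss": "ran to a place under"},
    "past-under":  {"case": "dative",     "prefix": "durch",  # (3iii) unter dem Tisch durch
                    "gloss": "ran past a place under"},
}

def german_cues(path_type):
    """Return the (case marking, verbal prefix) pair needed to
    realize the given path type in the German translation."""
    entry = PATH_TYPES[path_type]
    return entry["case"], entry["prefix"]

print(german_cues("past-under"))  # ('dative', 'durch')
```

Without a distinct path-type value stored in the CS for under, the lookup above would have nothing to key on, which is exactly the information-loss point made in the text.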
We would expect however that a KR system would be able to filter out 2 of the 3 interpretations - namely, those corresponding to running to a place under the door and those for running about in a place under the door. These should follow from the assumption that a typical door is upright on its hinges and so has inadequate space for a mouse to run 'to a place under' or 'about in an area under' and yet still be understood as having run under the door.

3 Analysis

The aim of the last section was to provide a brief introduction to LEXITRAN and the issues of defining an interlingua as a level of representation that is distinct from the syntax and the KR components of the system. Now we will examine more carefully the domain of the sentences being translated and the evidence for the representations in the different components.

7We assume here that the syntactic words have already been included in the categories of the syntax component.
8The grammatical status of these prefixes is subject to debate within linguistic theory (van Riemsdijk (1990)).

3.1 Spatial Domain and Spatial Predicates

We have been using the phrase "spatial relation" to refer to the relative positions of objects in 3-dimensional physical space. Thus, when referring to the "spatial relation" of a cup being on a table, we are locating one object, a cup, in terms of the top surface of another object, a table. This phrase is meant to capture a conceptual level of representing such relations. We could describe spatial relations in a mathematical notation, such as with Cartesian coordinates. However the symmetry of mathematical formalizations for spatial relations does not extend to the natural language expressions of spatial relations (Talmy (1978)).

In contrast to "spatial relation" we use the term "spatial expression"9 to mean the linguistic surface structures that express spatial relations.
Not all natural languages have or use the same set of linguistic forms to convey the location or path of motion of objects in physical space. For example, the spatial relation expressed in a preposition in English may appear as a verbal prefix in a Russian translation, or as a postfix on the head noun of an NP in a language such as Korean. Or the equivalent of the English preposition may not actually appear as a distinct surface element in a French translation, but instead be incorporated into the meaning of a verb.

In order to identify more narrowly the parts of a spatial expression that we will be discussing, we will use the term spatial predicate. A predicate is a structure (composed of a predicate-relation and arguments) that exists at the IL or semantic level of representation: it is a theoretical construct.

Modifying the categories of Talmy (1985) and adapting work on PLACES and PATHS by Jackendoff (1983, 1990) to our MT framework, we identify the following components within spatial predicates:

T1: The type of spatial predicate being conveyed, one of two high-level characterizations we will be examining: a PLACE or a PATH. Example: he stood on the boat contains a PLACE, whereas the cargo was loaded onto the boat contains a PATH.
T2: The "target" object or event, the item being located. Example: the boy in the phrase the boy is on the boat.
T3: One or more "reference" objects, items whose locations are known. Example: the boat in the phrase the boy is on the boat.
T4: The spatial operator, one of a few high-level characterizations, including a LOCATION or a DIRECTION. Example: a direction such as south, left, down, or away; a location in a physical relation such as against or a geometric configuration such as around.
T5: A "perspective" location or frame of reference. Example: German hin/her distinction; English here/there distinction; also the distinction between come and go as in he came into the room and he went into the room.
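The five components T1-T5 suggest a record-like structure for spatial predicates. The dataclass below is our sketch of such a record, not LEXITRAN's internal format (and, as discussed later, the actual system treats T2 as an external argument and does not yet implement T5).

```python
# Sketch of a spatial predicate record with the T1-T5 components.
# This is an illustrative data structure, not LEXITRAN's IL format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpatialPredicate:
    pred_type: str              # T1: "PLACE" or "PATH"
    target: str                 # T2: the item being located
    reference: str              # T3: reference object with known location
    operator: str               # T4: a LOCATION such as "on", or a DIRECTION such as "south"
    perspective: Optional[str]  # T5: frame of reference; often absent

# "the cup is on the table": a PLACE, viewer-independent, so T5 is None.
p = SpatialPredicate("PLACE", "cup", "table", "on", None)
print(p.pred_type, p.operator)  # PLACE on
```

The `Optional` perspective field mirrors the observation that many spatial relations, like the cup-on-table example, carry no value for T5.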
9Many other similar, less inclusive terms exist in the literature. (See Dorr and Voss (1993).)

In a simple sentence such as the cup is on the table, the spatial predicate T1 corresponds to a PLACE, meaning that location on the table where the cup is. T2, the target or object being located, and T3, the reference object, correspond to the phrases the cup and the table, respectively. The spatial operator T4 corresponds to the word on. This sentence conveys a spatial relation that is independent of the viewer's perspective and so T5 has no value.

The mapping from T1-T5 to the parts of a sentence is not always one-to-one however. Here are a few examples where the mapping is not so obvious: In (a), T4 corresponds to a lexically implicit value (the word up does not appear in the sentence). Similarly, in (b) and (c) there are other non-explicit values for the components, T3, T4, and T5.

Currently our IL structures contain spatial predicates corresponding to relations for the T1, T3, and T4 components; we have not yet implemented the T5 relation and have chosen to treat the T2 part of the predicate as an "external argument" (i.e., it is outside the IL spatial predicate constituent structure).10

We should note here that the components in our predicates will need to be refined as we develop a richer model of spatial expressions. For example, some languages make fine-grained distinctions with respect to distances in their frames of reference (our T5). We have not dealt with the structure of measurement and quantity, so we have not formalized phrases like under many tables. And, in order to extend our work to an intersentential, or discourse level of analysis, our predicates may need additional components for tracking spatial focus (Maybury (1991)).

Dorr and Voss (1993) present a discussion of the changes made to the CSs that were adapted from Jackendoff's framework for LEXITRAN.
One critical argument made there concerns the need for within-language synonymy tests, as well as cross-linguistic evaluations, in an iterative approach to developing the lexical-semantics for PATH and PLACE predicates. The results of such tests provide the first step in establishing evidence for the particular structures being hypothesized as IL predicates. The set of structures developed in this way can then be tested across languages. Furthermore, as noted in Dorr and Voss (1993), since there is a finite set of "lexicalization classes" that enumerate where spatial predicates may appear in the spatial expressions of a language, research can proceed by testing structures that fall into each of the relevant lexicalization classes.

3.2 Evidence for Encodings

In order to talk about the encoding of spatial relations in the various parts of the MT system and examine the role of the KR system in filtering out incorrect interpretations during the translation process, we need to clarify which encodings appear in which part of the MT system. The following terms are used to classify the encoding of spatial relations on the basis of the "evidence" we have for them:

lexically explicit: a spatial relation encoded explicitly in a word.
lexically implicit: a spatial relation encoded implicitly, or internal to the structure representing the meaning of a word.
logically inferable: a spatial relation logically inferred from lexically explicit or implicit relations, but not itself part of the structure representing the meaning of a word.

In the first two cases, the relation appears in the lexical entry for the relevant word; in the third case, the relation does not appear in the lexical entry.

10This is analogous to the syntactic treatment of a sentence subject which is generally considered to be external to the verb phrase.
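These three encoding categories can be sketched as a lookup from encoding type to the system components in which a relation may appear; the component names follow the paper, but the encoding of the lookup itself is ours.

```python
# Which LEXITRAN components may contain a spatial relation,
# keyed by how the relation is encoded (a sketch of the paper's
# classification; the dictionary encoding is ours).
COMPONENTS_BY_ENCODING = {
    "lexically-explicit":  {"syntax", "IL", "KR"},  # e.g. SOUTH in "south"
    "lexically-implicit":  {"IL", "KR"},            # e.g. UP in "lift"
    "logically-inferable": {"KR"},                  # e.g. FROM in "John arrived home"
}

def appears_in(encoding, component):
    """True if a relation with this encoding may appear in the component."""
    return component in COMPONENTS_BY_ENCODING[encoding]

print(appears_in("lexically-implicit", "syntax"))  # False
print(appears_in("lexically-implicit", "IL"))      # True
```

The asymmetry of the lookup captures the key point of this section: as less of a relation is visible on the surface, it survives in fewer components, until only the KR can recover it.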
An example of the first case is the direction SOUTH as an abstract concept, which is lexically explicit in the word south.11 An example of the second case is the direction UP as a lexically implicit component of the word lift. The implicit presence of this constituent is apparent in tests for synonymy: he lifted the baby, he lifted the baby up, and he lifted up the baby. Finally, as an example of this last category, the direction FROM is logically inferable in the sentence John arrived home, where the lexically implicit relation PATH contains the explicit PLACE from home, and where we can infer logically that in a PATH ending at home, there was also a DIRECTION from which the arriver, John, came.

The definition of these categories is tied to the way we have modularized LEXITRAN into components. In the chart below, the X's mark which types of encoding of a spatial relation may appear in which of the components in our MT system.

Spatial Relation        Component of LEXITRAN
                        Syntax   IL   KR
lexically explicit        X      X    X
lexically implicit               X    X
logically inferable12                 X

Following up on the examples above, the relation SOUTH in south will be represented at all levels in LEXITRAN, whereas UP in lift will only be represented at the IL and KR levels, and FROM in John arrived home will only be represented at the KR level (as the result of inferencing).

We can readily see that the Syntax-IL mapping requires tracking which components in the spatial predicates (at the IL level) appear in the surface SL and TL sentences and where in the sentence syntax they will be positioned. The IL-KR mapping involves no such transformation of structures. Instead, the IL structures are passed to the KR component for the checking of its spatial predicates; thus, the term spatial predicate extends to these structures once transferred into the KR component as well. However, one must not confuse the spatial

11The words in capital letters refer to the spatial relation, the abstract term.
12The logically inferable relations can be broken out into the "logically explicit" facts explicitly encoded in the KB of the KR system and the "logically implicit" facts that are derived from other facts and inference rules in the system.

relations that were inferred in the KR system from those brought in by the IL representation. To clarify this last point, consider the following English sentences:

(4) (i) He took the book to Tanya's table
(ii) He took the book from Florence's floor

If the sentences are translated into German, the take-to component of the first sentence translates to bringen whereas the take-from component in the second sentence translates to nehmen. In both sentences there is an implicit PATH relation where a book moves from one location to another. The FROM direction is logically inferable in the first sentence but lexically explicit in the second sentence. The situation is reversed with a TO direction: the TO is lexically explicit in the first sentence, but only logically inferable in the second sentence. If our IL representation of the first sentence were to include the FROM relation - and similarly if our IL representation of the second sentence were to include the TO relation - then at the point in translation where the system must generate a German sentence, we would have lost the information from the lexicalization and could no longer use it to select between the two German verbs.

This last example and the chart above help illustrate the double set of justifications that are required in a theory of the interlingua. In one direction the syntax-IL mapping provides one set of constraints on the IL, and in the other direction, the IL-KR mapping provides another set. Currently no theory of the interlingua defines these constraints and addresses the criteria to be used in evaluating them.
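The bringen/nehmen choice can be sketched as a selection keyed on which direction is lexicalized in the source sentence; the verb pairing follows the paper's examples in (4), while the rule encoding is our own illustration.

```python
# Sketch: selecting the German verb for English "take" based on
# which direction (TO or FROM) is lexicalized in the source sentence.
# The rule encoding is ours; the verb choice follows examples (4i)/(4ii).
def german_take_verb(lexicalized_direction):
    if lexicalized_direction == "TO":    # (4i) "took the book to ..."
        return "bringen"
    if lexicalized_direction == "FROM":  # (4ii) "took the book from ..."
        return "nehmen"
    raise ValueError("direction must be lexicalized as TO or FROM")

print(german_take_verb("TO"))    # bringen
print(german_take_verb("FROM"))  # nehmen
```

Note that the selector keys on the *lexicalized* direction only: if the IL were to also carry the logically inferable direction, both values would be present in both sentences and the selector would have nothing to distinguish them by, which is the information-loss problem described above.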
4 Results and Discussion

If we consider the status of the sentence Daniel drove to the south of Colorado, we quickly determine that the phrase the south of Colorado is ambiguous. One interpretation of this sentence is that Daniel drove to southern Colorado. That is, the phrase the south of Colorado refers to the region inside of Colorado that is considered its south. The IL structure for this part-to-whole relation, where the "part" is the meaning of the entire phrase and the "whole" is Colorado, is viewed as a "place-place" relation by the KR component, which checks for a part-whole interpretation when it encounters two "place" predicates in an IL structure.

Now consider the following examples:

(5) (i) Maria drove to the south of Florida
(ii) The skipper navigated to the south of Gibraltar

In the first case, the south of Florida refers to a part-whole spatial relation, with an "inside of" Florida interpretation. In the second case, the south of Gibraltar does not mean a part-whole relation - it refers to a region "outside of" Gibraltar. This distinction is captured in the translation into Spanish:

(6) (i) Maria manejó hacia el sur de la Florida
(ii) El capitán navegó al sur de Gibraltar

In other words, what appears as a conceptual distinction must be detected in the MT system in order to appropriately select the proper translation into Spanish.

Syntactically the English sentences are identical. At the IL level they are ambiguous. The IL processor will create the "inside of" as well as the "outside of" IL interpretation for both of these sentences since it has no knowledge of which interpretation makes sense for which sentence. In other words, it does not check the T1-T4 values in predicates against real world knowledge. Rather it is the KR processor that performs this checking in the filter phase of the translation: it must allow for the part-whole IL structure for the first sentence and discard that structure for the second.
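The checking just described can be sketched in a few lines; the geographic facts, region names, and filter function below are our illustrative stand-ins for the KR component, not PARKA's actual interface.

```python
# Illustrative sketch of the KR filter on sentences (5)/(6): each IL
# reading pairs a vehicle's travel medium with a path region, and
# readings whose region does not lie on that medium are discarded.
MEDIUM_OF_REGION = {                  # toy geographic facts (ours)
    "inside-Florida": "land",
    "outside-Florida": "water",       # the sea south of the peninsula
    "inside-Gibraltar": "land",
    "outside-Gibraltar": "water",
}

def kr_filter(readings):
    """Keep only readings whose path region lies on the vehicle's medium."""
    return [r for r in readings
            if MEDIUM_OF_REGION[r["region"]] == r["medium"]]

# (5i) "Maria drove to the south of Florida": both IL readings, filtered.
drove = [{"medium": "land", "region": "inside-Florida"},
         {"medium": "land", "region": "outside-Florida"}]
print(kr_filter(drove))      # only the "inside" reading survives

# (5ii) "The skipper navigated to the south of Gibraltar":
navigated = [{"medium": "water", "region": "inside-Gibraltar"},
             {"medium": "water", "region": "outside-Gibraltar"}]
print(kr_filter(navigated))  # only the "outside" reading survives
```

The filter discards exactly the two anomalous pairings, leaving one interpretation per sentence to pass on to the generation phase.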
The KR processor makes use of the information in the IL structure that, in this case, was contributed by the verbs. Sentences (5)(i) and (6)(i) contain the equivalent of "go by land-vehicle" in their IL structures, whereas sentences (5)(ii) and (6)(ii) contain the equivalent of "go by water-vehicle" in theirs.13 The KR, using inference rules that disallow a "go by X-vehicle" event composed with a path not on X (X would be "land" or "water" here), would rule out the two anomalous cases that concern us: (1) the IL interpretation of "driving to the south of Florida" as going by car to the outside of Florida, and (2) the IL interpretation of "navigating to the south of Gibraltar" as going by boat to the inside of Gibraltar. This result - enabling the KR to filter out anomalous IL interpretations by virtue of IL structures where it can readily identify the arguments within spatial predicates - also extends to other internal-external distinctions among spatial entities.

Our approach has been to assume that, in general, the syntactic properties of phrases reflect the underlying predicate-argument structural meaning of the words that head those phrases. Since we can "see" and test properties of phrasal structure, but not those of semantic structure, we must take advantage of what information we can glean from phrasal structures. The idea here is to use the differences in word meaning that correlate with syntactic distribution patterns to refine the hypotheses we have for the meaning structure - rather than developing lexical semantic structures solely based on our intuitions.14

We have described our approach to translating spatial expressions in an IL-based MT system and presented several arguments for the next steps in developing a theory of the interlingua. Such a theory must specify what can count as a constraint on the IL structures and thus provide independent justification for the particular structures being used. Our approach combines promising syntactic and semantic aspects of existing translation
Our approach combines promis- ing syntactic and semantic aspects of existing translation 13See Dorr (1993) for a more complete discussion of verbs’ CS structures. 14Two related issues should be addressed here. First, there is the notion that current lexical semantics basically draws on the well-known linguistic work on case (e.g. Fillmore (1968)) and so seems uninteresting. The second issue, is that the cur- rent computational work done in lexical semantics basically does not go beyond the insights achieved in the 70s work in AI (e.g. Schank (1973)), and so again seems uninteresting. However, we argue that this earlier work does not place any constraints on (1) what can be represented as a predicate, (2) the number of arguments, (3) the obligatory or optional nature of those arguments, or (4) the definitions of what con- stitutes a valid argument, each theory must provide indepen- dent justification for its hypothesized structures. It is at the lower level of how the ideas about case are integrated with the rest of modern linguistic theory that the current research challenges exist. systems; we see this as the most appropriate framework for addressing some of the tough issues in MT, including the development of criteria for evaluating IL representa- tions. eferences Abe&?, A., Schabes, Y., and Joshi, A. K. 1990. Using Lexicalized Tags for Machine Translation. In Proceedings of Thirteenth International Conference on Computational Linguistics, l-6. Helsinki, Finland. Appelo, L. and Landsbergen, J. 1986. The Machine Translation Project Rosetta. First International Conference on State of the Art in Machine Translation. Saarbruecken, Germany. Barnett, J., Mani, I., Rich, E., Aone, C., Knight, K., and Mar- tinez, J. C. 1991. Capturing Language-specific Semantic Distinctions in Interlingua-Based MT. In Proceedings of the Machine Translation Summit, 25-32. Washington, DC. Dorr, Bonnie J. 1987. UNITRAN: An Interlingual Approach to Ma- chine Translation. 
In Proceedings of the Sixth Conference of the American Association of Artificial Intelligence, 534-539. Seattle, Washington.
Dorr, Bonnie J. 1990. Solving Thematic Divergences in Machine Translation. In Proceedings of the 28th Annual Conference of the Association for Computational Linguistics, 127-134. University of Pittsburgh, Pittsburgh, PA.
Dorr, Bonnie J. 1993. Machine Translation: A View from the Lexicon. Cambridge, MA: MIT Press.
Dorr, Bonnie J. and Voss, Clare. 1993. Constraints on the Space of MT Divergences. Working Notes for the AAAI 1993 Spring Symposium, Building Lexicons for Machine Translation, Technical Report SS-93-02, Stanford University, CA.
Dowty, D. R., Wall, R. E. and Peters, S. 1981. Introduction to Montague Semantics. Dordrecht, Holland: Reidel.
Fillmore, C. 1968. The Case for Case. In Universals in Linguistic Theory, Bach, E. and R. Harms, eds. New York, NY: Holt, Rinehart, and Winston, 1-88.
Grosz, B., Sparck-Jones, K., and Webber, B. eds. 1987. Readings in Natural Language Processing. Los Altos, CA: Morgan Kaufmann.
Heim, I. 1982. The Semantics of Definite and Indefinite Noun Phrases. Ph.D. diss., Dept. of Linguistics, University of Massachusetts, Amherst, MA.
Hutchins, J. W. and Somers, H. L. 1992. An Introduction to Machine Translation. London, England: Academic Press.
Jackendoff, Ray S. 1983. Semantics and Cognition. Cambridge, MA: MIT Press.
Jackendoff, Ray S. 1990. Semantic Structures. Cambridge, MA: MIT Press.
Kamp, H. 1984. A Theory of Truth and Semantic Representation. In Formal Methods in the Study of Language. Dordrecht, Holland: Foris Publications.
Maybury, M. T. 1991. Topical, Temporal, and Spatial Constraints on Linguistic Realization. Computational Intelligence 7(4): 266-275.
Nirenburg, S., Carbonell, J., Tomita, M., and Goodman, K. 1992. Machine Translation: A Knowledge-Based Approach. San Mateo, CA: Morgan Kaufmann.
Riemsdijk, H. van 1990. Functional Prepositions. In Unity in Diversity, Pinkster, H.
and Genee, I., eds. Dordrecht, Holland: Foris Publications, 229-241.

Schank, R. 1973. Identification of Conceptualizations Underlying Natural Language. In Computer Models of Thought and Language, R. Schank and K. Colby, eds. San Francisco, CA: W. H. Freeman, 187-247.

Spector, L., Hendler, J. and Evett, M. 1990. Knowledge Representation in PARKA, UMIACS TR 90-23, CS TR 2410, Dept. of Computer Science, University of Maryland, College Park, MD.

Spector, L., Anderson, W., Hendler, J., Kettler, B., Schwartzman, E., Woods, C. and Evett, M. 1992. Knowledge Representation in PARKA - Part 2: Experiments, Analysis, and Enhancements, CS-TR-2837, UMIACS-TR-92-16, Dept. of Computer Science, University of Maryland, College Park, MD.

Talmy, L. 1978. Figure and Ground in Complex Sentences. In Universals of Human Language, Vol. 4: Syntax, J. H. Greenberg, C. Ferguson, and E. Moravcsik, eds. Palo Alto, CA: Stanford University Press, 625-49.

Talmy, L. 1985. Lexicalization Patterns: Semantic Structure in Lexical Forms. In Grammatical Categories and the Lexicon, Timothy Shopen, ed. Cambridge, England: Cambridge University Press, 57-149.

Winograd, T. 1973. A Procedural Model of Language Understanding. In Computer Models of Thought and Language, R. Schank and K. Colby, eds. San Francisco, CA: W. H. Freeman.

Woods, W. A. and Schmolze, J. A. 1992. The KL-ONE Family. Computers and Mathematics with Applications, 23(2-5): 133-177.

Natural Language Sentence Analysis 379
Having Your Cake and Eating It Too: Autonomy and Interaction in a Model of Sentence Processing

Kurt P. Eiselt*
College of Computing
Georgia Institute of Technology
Atlanta, Georgia 30332-0280
eiselt@cc.gatech.edu

Kavi Mahesh*
College of Computing
Georgia Institute of Technology
Atlanta, Georgia 30332-0280
mahesh@cc.gatech.edu

Jennifer K. Holbrook
Department of Psychology
Albion College
Albion, Michigan 49224
jen@cedar.cic.net

Abstract

Is the human language understander a collection of modular processes operating with relative autonomy, or is it a single integrated process? This ongoing debate has polarized the language processing community, with two fundamentally different types of model posited, and with each camp concluding that the other is wrong. One camp puts forth a model with separate processors and distinct knowledge sources to explain one body of data, and the other proposes a model with a single processor and a homogeneous, monolithic knowledge source to explain the other body of data. In this paper we argue that a hybrid approach which combines a unified processor with separate knowledge sources provides an explanation of both bodies of data, and we demonstrate the feasibility of this approach with the computational model called COMPERE. We believe that this approach brings the language processing community significantly closer to offering human-like language processing systems.

The Big Questions

Years of research by linguists, psychologists, and artificial intelligence specialists have provided significant insight into the workings of the human language processor. Still, fundamental questions remain unanswered. In particular, the debate over modular processing versus integrated processing rages on, and experimental data and computational models exist to support both positions. Furthermore, if the integrated processing position is correct, just what exactly is integrated? And if the modular position is the right one, what are the different modules?
Do they interact, and if so, to what extent and when? Or are those modules entirely autonomous?

Wrestling with these questions induces considerable frustration in researchers. This frustration stems not only from the research community's apparent inability to answer them satisfactorily, but also from the overwhelming importance of the answers themselves: these answers, once uncovered, undoubtedly will impact thinking in all areas of artificial intelligence and cognitive science research, including visual processing, reasoning, and problem solving, to name just a few.

*During the course of this work, these authors were supported in part by a research grant from Northern Telecom.

In this paper, we intend to provide the reader with answers to some of these questions, answers based on nearly ten years of our own interdisciplinary research in sentence processing, and built upon the work of many others who went before us. In brief, we propose a model of language understanding (or, more specifically, sentence processing) in which all linguistic processing is performed by a single unified process, but the different types of linguistic knowledge necessary for processing are separate and distinct. This model accounts for conflicting experimental data, some of which suggests an autonomous, modular approach to language processing, and some of which indicates an integrated approach. Because it is a closer fit to the experimental data than any model which has gone before, this model consequently points the way to more human-like performance from language processing systems.

Background

Our new model of sentence processing has its roots in work begun nearly ten years ago.
That research effort started as an attempt to explain how the human language understander selected the most context-appropriate meaning of an ambiguous word, and then was able to correct both the choice of word meaning and the surrounding sentence interpretation, without reprocessing the input, when later processing showed that the initial choice of word meaning was erroneous. The resulting computational model, ATLAST (Eiselt, 1987; Eiselt, 1989), resolved word sense ambiguities by activating multiple word meanings in parallel, selecting the meaning which matched the previous context, and deactivating but retaining the unchosen meanings for as long as resources were available for retaining them. If later context proved the initial decision to be incorrect, the retained meanings could be reactivated without reaccessing the lexicon or reprocessing the text. ATLAST proved to have great psychological validity for lexical processing: its use of multiple access was well grounded in the psychological literature (e.g., Seidenberg, Tanenhaus, Leiman, & Bienkowski, 1982), and, more importantly, it made psychological predictions about the retention of unselected meanings that were experimentally validated (Eiselt & Holbrook, 1991; Holbrook, 1989). ATLAST provided an architecture of sentence processing which was also used to explain recovery from erroneous decisions in making pragmatic inferences as well as explaining individual differences in pragmatic inferences (Eiselt, 1989; cf. Granger, Eiselt, & Holbrook, 1983).

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Error recovery in semantic processing had occasionally aroused the attention of researchers in conceptually-based natural language understanding, but the questions that arose were usually dismissed as unimportant or something which could be resolved as an afterthought (Birnbaum & Selfridge, 1981; Lebowitz, 1980; Lytinen, 1984).
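The retain-rather-than-discard strategy described above can be sketched in a few lines. This is an illustrative toy, not ATLAST itself: the sense inventory, the context-matching metric, and all names below are invented for the example.

```python
# Toy sketch of ATLAST-style lexical ambiguity resolution: all senses of
# an ambiguous word are activated, the best contextual match is chosen,
# and the losers are retained rather than discarded, so later evidence
# can reactivate one without re-accessing the lexicon.

SENSES = {"bug": ["insect", "microphone"]}   # invented sense inventory

class Interpreter:
    def __init__(self):
        self.retained = {}   # word -> unchosen senses kept for recovery

    def resolve(self, word, context):
        candidates = SENSES[word]
        # Pick the sense best supported by the context words (toy metric:
        # how often the sense name itself appears in the context).
        chosen = max(candidates, key=lambda s: context.count(s))
        self.retained[word] = [s for s in candidates if s != chosen]
        return chosen

    def recover(self, word):
        # Error recovery: switch to a retained sense; no reprocessing.
        return self.retained[word].pop(0)

interp = Interpreter()
print(interp.resolve("bug", ["spies", "planted", "a", "microphone"]))  # microphone
print(interp.recover("bug"))                                           # insect
```

The point of the sketch is the `retained` table: unchosen meanings stay available at low cost, which is what lets the model repair an erroneous commitment without re-reading the input.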
These researchers were content to assume that the first inference decision made was the correct one. Meanwhile, other researchers investigating syntactically-based approaches had long since concluded that the means by which erroneous syntactic decisions were accommodated had a dramatic impact on the architecture of the syntactic processor being proposed. For example, the backtracking models embodied the theory that only a single syntactic interpretation need be maintained at any given time, so long as the processor could keep track of its decisions, undo them when an erroneous decision was discovered, and then reinterpret the input (e.g., Woods, 1973). The lookahead parsers tried to sidestep the problems inherent in backtracking by postponing any decision until enough input had been processed to guarantee a correct decision, thereby avoiding erroneous decisions to some extent (e.g., Marcus, 1980). Another approach to avoiding erroneous decisions was offered by parallel parsers which maintained all plausible syntactic interpretations at the same time (Kurtzman, 1985). ATLAST, however, was a model of semantic processing and did not address the issue of recovery from erroneous syntactic decisions, nor did it substantially address the issue of syntactic processing at all.

Recently, Stowe (1991) presented experimental evidence showing that in dealing with syntactic ambiguity, the sentence processor accesses all possible syntactic structures simultaneously and, if the structure preferred for syntactic reasons conflicts with the structure favored by the current semantic bias, the competing structures are maintained and the decision is delayed. Furthermore, the work suggests an interaction of the various knowledge types, as in some cases semantic information influences structure assignment or triggers reactivation of unselected structures.
This model of limited delayed decision in syntactic ambiguity resolution had much in common with the ATLAST model of semantic ambiguity resolution. Both models proposed an early commitment where possible. Both models had the capability to pursue multiple interpretations in parallel when ambiguity made it necessary. Both models explained error recovery as an operation of switching to another interpretation maintained in parallel by the sentence processor. Finally, both models made decisions by integrating the preferences from syntax and semantics.

One explanation for this high degree of similarity between the syntactic and semantic error recovery mechanisms is that there are two separate processors, one for syntax and one for semantics, each with its corresponding source of linguistic knowledge, and each doing exactly the same thing. A more economical explanation, however, is that there is only one process which deals with syntactic and semantic information in the same manner. We have chosen to explore the latter explanation, as others have done, but we have also chosen to maintain the separate knowledge sources for reasons which will be explained below. (See also Holbrook, Eiselt, & Mahesh, 1992.)

Overview

Our new model of sentence processing, called COMPERE (Cognitive Model of Parsing and Error Recovery), consists of a single unified process operating on independent sources of syntactic and semantic knowledge. This is made possible by a uniform representation of both types of knowledge. The unified process applies the same operations to the different types of knowledge, and has a single control structure which performs the operations on syntactic and semantic knowledge in tandem. This permits a rich interaction between the two sources of knowledge, both through transfer of control and through a shared representation of the interpretations of the input text being built by the unified process.
An advantage of representing the different kinds of knowledge in the same form is that the boundaries between the different types of knowledge can be ill-defined. Often it is difficult to classify a piece of knowledge as belonging to a particular class such as syntactic or semantic. With a uniform representation, such knowledge lies in between and can be treated as belonging to either class.

Syntactic and semantic knowledge are represented in separate networks in which each node is a structured representation of all the information pertaining to a syntactic or semantic category or concept. A link, represented as a slot-filler pair in the node, specifies a parent category or concept of which the node can be a part, together with the conditions under which it can be bound to the parent, and the expectations that are certain to be fulfilled should the node be bound to the parent. In addition, nodes in either network are linked to corresponding nodes in the other network so that the unified process can build on-line interpretations of the input sentence in which each syntactic unit has a corresponding representation of its thematic role and its meaning. In addition, there is a lexicon as well as certain other minor heuristic and control knowledge that is part of the process. (COMPERE's architecture and knowledge representation are displayed graphically in Figures 1 and 2.)

The unified process is a bottom-up, early-commitment parsing mechanism integrated with top-down guidance through expectations. The operators and the control structure that constitute the unified process are described briefly in the algorithm shown in Figure 3.

The COMPERE prototype has been implemented in Common LISP on a Symbolics LISP Machine. At this time, its unified process can perform on-line interpretations of its input, and can recover from erroneous syntactic decisions when necessary.
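The node-and-link representation just described might be rendered as follows. This is a hypothetical Python sketch, not COMPERE's actual Common LISP structures; the slot names, conditions, and expectations are invented for illustration.

```python
# Hypothetical rendering of a network node: each node lists its possible
# parents, the condition for binding to each parent, the expectations the
# binding generates, and cross-links to its counterpart in the other
# (syntactic or semantic) network.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                                        # "syntactic" or "semantic"
    parents: dict = field(default_factory=dict)      # parent -> (condition, expectations)
    cross_links: list = field(default_factory=list)  # counterpart nodes in the other network

np = Node("NP", "syntactic")
# An NP may bind to S as its subject if it precedes the verb; the binding
# raises the expectation that a VP will complete the sentence.
np.parents["S"] = ("precedes-verb", ["expect-VP"])

agent = Node("Agent", "semantic",
             parents={"Event": ("satisfies-filler-constraints", [])})
np.cross_links.append(agent.name)   # an NP instance can realize an Agent role

print(np.parents["S"])              # ('precedes-verb', ['expect-VP'])
```

The cross-link slot is what lets a single control loop build the syntactic attachment and the thematic-role assignment for the same word in tandem.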
COMPERE is able to process relatively complex syntactic structures, including relative clauses, and can resolve the associated structural ambiguities.

[Figure 1: Architecture of COMPERE. Words feed a parser that draws on separate syntactic, thematic-role, and conceptual-meaning knowledge sources.]

[Figure 2 (partial): Knowledge Representation in COMPERE.1
syntactic-node:
  NP: ...
  VP: (must-precede V)
  S: (must-precede NIL) (expect VP)
  PP: (must-precede Prep)
semantic-node:
  Active-SUBJ: Agent: ((satisfies-event-role agent) (satisfies-filler-constraints agent))
  Non-Agent-SUBJ: ...]

1. Access lexical entries of next word.
2. Create instance nodes for syntactic category, meaning, and (primitive) thematic role.
3. Compute feasible bindings to parents for syntactic instance node and role instance node. (This operation checks any conditions to be satisfied to make the binding feasible; it also takes existing expectations into account.)
4. Rank syntactic and semantic feasible bindings by their respective preference criteria. Combine feasible bindings and select the most preferred binding.
5. Make the binding by creating parent node instances and appropriate links, and generating any expectations. Create links between corresponding instances in syntax and their thematic roles and meanings.
6. Retain alternative bindings for possible error recovery.
7. If there is no feasible binding for a node, explore previously retained alternatives to recover from errors.
8. Continue to bind the parent nodes to nodes further up as far as possible (such as until the S node in syntax or the Event node in semantics).

Figure 3: Unified Process: Algorithm.

Autonomy and interaction effects from one process

COMPERE is able to exhibit seemingly modular processing behavior that matches the results of experiments showing the autonomy of different levels of language processing (e.g., Forster, 1979; Frazier, 1987).
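The commit, retain, and recover steps of the Figure 3 algorithm can be compressed into a toy sketch. This is not the COMPERE implementation: the candidate bindings, preference scores, and the clash test are all invented to show only the control flow.

```python
# Compressed sketch of the Figure 3 loop: for each word, rank the
# feasible bindings, commit to the best one, and retain the rest so an
# incompatibility later on can be repaired by switching (steps 4-7).

FEASIBLE = {            # word -> candidate bindings with toy scores
    "moved": [("main-verb", 2), ("reduced-relative", 1)],
    "were":  [("passive-aux", 1)],
}

def parse(words):
    chosen, retained = [], []
    for w in words:
        options = sorted(FEASIBLE[w], key=lambda o: -o[1])
        best, rest = options[0], options[1:]
        # Step 7 (toy clash test): "were" after a committed main verb has
        # no feasible binding, so switch to a retained alternative.
        if w == "were" and chosen and chosen[-1] == "main-verb" and retained:
            chosen[-1] = retained.pop()[0]      # garden-path repair
        chosen.append(best[0])
        retained.extend(rest)
    return chosen

print(parse(["moved", "were"]))   # ['reduced-relative', 'passive-aux']
```

The retained list plays the same role here as in the lexical case: recovery is a switch among alternatives already computed, not a reparse.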
It is also able to display seemingly integrated behavior that matches the results of experiments showing semantic influences on syntactic structure assignment (e.g., Crain & Steedman, 1985; Tyler & Marslen-Wilson, 1977). For example, consider the processing of the following sentence:

(1) The bugs moved into the new lounge were found quickly.

This sentence has a lexical semantic ambiguity at the subject noun bugs, which could mean either insects or electronic microphones. In addition, it is also syntactically ambiguous locally at the verb moved, since there is no distinction between its past-tense form and its past-participle form. In the simple past reading of moved, it would be the main verb, with the corresponding interpretation that "the bugs moved themselves into the new lounge." On the other hand, if moved is read as a verb in its past-participle form, it would be the verb in a reduced relative clause corresponding to the meaning "the bugs which were moved by somebody else into the new lounge...." Parse trees for the two structural interpretations and the corresponding thematic-role assignments are shown in Figures 4 and 5.2

1The arrows in Figure 2 simply indicate which types of knowledge point to which other types; they do not mean that the specific nodes shown point to the other nodes.

[Figure 4: Garden Path: Main-Clause Interpretation.]

Null Context: When sentence (1) is presented to COMPERE in a null semantic context, one where there is no bias for either meaning of the noun bugs, COMPERE reads ahead without resolving the lexical ambiguity at the word bugs. When it encounters the structural ambiguity at the verb moved, COMPERE does not have the necessary information to decide which of the two structures in Figures 4 and 5 is the appropriate one to pursue. However, COMPERE has a syntactic preference for the main-verb interpretation over the relative clause one.
Though this preference can be explained by the minimal attachment principle (Frazier, 1987), COMPERE offers a more general explanation. Extrapolating from Stowe's model, we have endowed COMPERE with the pervasive goal of completing an incomplete item at any level of processing. In syntactic processing, it has a goal to complete the syntactic structure of a unit such as a phrase, clause, or a sentence. COMPERE prefers the alternative which helps complete the current structure (called the Syntactic Default) over one that adds an optional constituent, leaving the incompleteness intact. For instance, in (1), a VP is required to complete the sentence after seeing The bugs. Since the main-clause interpretation helps complete this requirement and the relative-clause interpretation does not, the main-clause structure gets selected. In other words, COMPERE would rather use the verb to begin the VP that is required to complete the sentence structure than treat it as the verb in a reduced relative clause, which would leave the expectation of the VP unsatisfied. This behavior is the same as the one explained by the "first analysis" models of Frazier and colleagues (Frazier, 1987) using a minimal-attachment preference. COMPERE can produce this behavior by applying structural preferences independently since it maintains separate representations of syntactic and semantic knowledge.

As a consequence of choosing the main-clause interpretation, the lexical ambiguity is also resolved. The electronic bug meaning is now ruled out since there is a selectional restriction on the verb moved that is not satisfied by electronic bugs (namely, they cannot move by themselves).3

2For simplicity, these figures show the parse trees and the thematic roles separate from each other. In COMPERE's actual output, the parse trees and thematic roles are interlinked.

[Figure 5: Garden-Path: Reduced Relative Clause. Thematic roles: Move: Agent: ..., Theme: bug, To-Loc: lounge.]
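The selectional-restriction pruning described above can be illustrated with a toy predicate. The feature table, structure names, and compatibility test are invented for the example; COMPERE's actual conceptual knowledge is richer.

```python
# Toy illustration: the main-clause reading makes "bug" the Agent of
# "moved", and agents must be able to move themselves, so committing to
# that structure also prunes the electronic-microphone sense.

ANIMATE = {"insect": True, "microphone": False}   # invented feature table

def compatible(sense, structure):
    if structure == "main-clause":   # bug is the Agent of "moved"
        return ANIMATE[sense]        # agents must move by themselves
    return True                      # reduced relative: bug is the Theme

senses = ["insect", "microphone"]
print([s for s in senses if compatible(s, "main-clause")])       # ['insect']
print([s for s in senses if compatible(s, "reduced-relative")])  # both survive
```

This also shows why switching back to the reduced-relative structure later "unresolves" the lexical ambiguity: under that structure the restriction no longer eliminates either sense.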
Thus, until seeing the word were, the verb moved is treated as the main verb since it satisfies the expectation of a VP that is required to complete the sentence. However, at this point, the structure is incompatible with the remaining input. COMPERE recognizes the error and now tries the alternative of attaching the VP as a reduced relative clause so that there is still a place for a main verb. This results in a garden-path effect upon reading this sentence. That is, the sentence processor is led up a garden path and has to backtrack when later information shows that it was the wrong path to take. This behavior is not influenced by semantic or conceptual preferences and can be perceived as a modular behavior. COMPERE's error recovery method was first developed in the ATLAST model (Eiselt, 1987). It was also experimentally validated (Eiselt & Holbrook, 1991).

As a consequence of switching to the new syntactic interpretation, COMPERE makes corresponding changes to thematic role assignments and also "unresolves" the lexical ambiguity. There is no longer any reason to eliminate the electronic bug meaning since either kind of bugs can be moved by others.

Biasing Context: Now consider sentence (1) in a semantically biasing context such as the one in (2).4

3COMPERE's program does not resolve lexical semantic ambiguities at this time. We are currently rectifying this by incorporating lexical ambiguity resolution strategies from our earlier model ATLAST (Eiselt, 1989) in COMPERE.

4At present, COMPERE is not capable of using context effects in its ambiguity resolution process. However, its architecture supports the inclusion of such effects and we are working on providing context information to the unified process.

(2) The Americans built a new wing to the embassy. The Russian spies quickly transferred the microphones to the new wing. The bugs moved into the new lounge were found quickly.
The semantic context in (2) resolves the lexical ambiguity by choosing the electronic bug meaning. This decision helps COMPERE resolve the structural ambiguity at the verb moved. Using its conceptual knowledge, represented as a selectional restriction, that only animate agents can move by themselves, COMPERE decides that moved cannot be a main verb and goes directly to the reduced relative clause interpretation (Fig. 5), thereby avoiding the garden path. This shows how the same unified process that previously exhibited modular processing behavior can also produce interactive processing behavior when semantic information is available. Syntax and semantics interact in COMPERE to help resolve ambiguities in each other.

COMPERE can also use independent syntactic preferences in other types of sentences, such as those with prepositional attachment ambiguities. The COMPERE prototype thus demonstrates that the range of behaviors that the interactive models account for (Crain & Steedman, 1985; Tyler & Marslen-Wilson, 1977), and the behaviors that the "first analysis" models account for (Frazier, 1987), can be explained by a unified model with a single processor operating on multiple independent sources of knowledge.

Comparative evaluation

There is certainly nothing unique about a unified process model of language understanding; the integrated processing hypothesis has been visited and revisited many times, for good reason, and with significant results (e.g., Jurafsky, 1992; Lebowitz, 1980; Riesbeck & Martin, 1986). Yet each of these models labors under the assumption that the integration of processing necessarily goes hand in hand with the integration of the knowledge sources. While this design decision may make construction of the corresponding computational model easier, it also makes the model incapable of easily explaining the autonomy effects demonstrated by Forster (1979), Frazier (1987), and others.
As shown above, COMPERE's unified processing mechanism combined with its separate sources of linguistic knowledge offers an explanation for observed autonomy effects as well as the interaction effects reported by Marslen-Wilson and Tyler (Tyler & Marslen-Wilson, 1977). Furthermore, the integrated models noted above cannot capture syntactic generalizations.

Another form of the modularity debate concerns the effect of context on syntactic decisions: does context affect structure assignment, or are context effects absent until later in language processing (Taraban & McClelland, 1988)? Though we do not have a model of context effects in COMPERE, we believe that contextual information can be incorporated as an additional source of preferences in COMPERE's architecture.

An added benefit of COMPERE's sentence processing architecture is that it offers an explanation for the effects of linguistic aphasias. In reviewing the aphasia literature, Caramazza and Berndt (1978) concluded that the evidence pointed strongly to the functional independence of syntactic and semantic processing. COMPERE suggests an alternate explanation: the different aphasic behaviors are not due to damage to the individual processors, but are instead due to damage to the individual knowledge sources or, perhaps, to the communications pathways between the knowledge sources and the unified processor.

We believe that COMPERE's architecture accounts for the wide variety of seemingly conflicting data on linguistic behavior better than any previously proposed model of sentence processing. Yet COMPERE is not the first sentence processing model to be configured as a single process interacting with independent knowledge sources.
The localist or punctate connectionist models of Pollack (1987; Waltz and Pollack, 1985) and Cottrell (1985; Cottrell and Small, 1983) resemble COMPERE at a gross architectural level, but these models did not offer the range of explanation of different behaviors that COMPERE does; for example, these models do not recover from errors, nor can they deal with complex syntactic structures such as relative clauses.

Despite all its theoretical advantages over other models, the prototype implementation of COMPERE is not yet fully developed and suffers from some weaknesses. Its role knowledge is fairly limited, and its conceptual knowledge is even more so. Also, the implementation currently diverges slightly from theory. The divergence appears in the process itself: the theoretical model has a single unified process, while the prototype computational model consists of two nearly-identical processes, one for syntax and one for semantics. These two processes share identical control structures, but they are duplicated because we have not yet completed the task of representing the different types of information in a uniform format. Some readers may take this as an indication that we are doomed to failure, but the connectionist models mentioned earlier serve as existence proofs that finding a uniform format for representing different types of linguistic knowledge is by no means an impossible task.

Conclusion

Is the human language understander a collection of modular processes operating with relative autonomy, or is it a single integrated process? This ongoing debate has polarized the language processing community, with two fundamentally different types of model posited, and with each camp concluding that the other is wrong.
One camp puts forth a model with separate processors and distinct knowledge sources to explain one body of data, and the other proposes a model with a single processor and a homogeneous, monolithic knowledge source to explain the other body of data. In this paper we have argued that a hybrid approach which combines a unified processor with separate knowledge sources provides an explanation of both bodies of data, and we have demonstrated the feasibility of this approach with the computational model called COMPERE. We believe that this approach brings the language processing community significantly closer to offering human-like language processing systems.

Acknowledgement: We would like to thank Justin Peterson for his comments on this work and his help in finding good examples.

References

Birnbaum, L., and Selfridge, M. 1981. Conceptual analysis of natural language. In Schank, R. C., and Riesbeck, C. K. eds. Inside computer understanding: Five programs plus miniatures, 318-353. Hillsdale, NJ: Lawrence Erlbaum.

Caramazza, A., and Berndt, R. S. 1978. Semantic and syntactic processes in aphasia: A review of the literature. Psychological Bulletin 85:898-918.

Cottrell, G. W. 1985. A connectionist approach to word sense disambiguation, Technical Report, 154, Computer Science Department, University of Rochester.

Cottrell, G. W., and Small, S. L. 1983. A connectionist scheme for modelling word sense disambiguation. Cognition and Brain Theory 6:89-120.

Crain, S., and Steedman, M. 1985. On not being led up the garden path: The use of context by the psychological syntax processor. In Dowty, D. R., Kartunnen, L., and Zwicky, A. M. eds. Natural language parsing: Psychological, computational, and theoretical perspectives, 320-358. Cambridge, England: Cambridge University Press.

Eiselt, K. P. 1987. Recovering from erroneous inferences. In Proc. AAAI-87 Sixth National Conference on Artificial Intelligence, 540-544. San Mateo, CA: Morgan Kaufmann.

Eiselt, K. P.
1989. Inference processing and error recovery in sentence understanding, Technical Report, 89-24, Ph.D. diss., Dept. of Computer Science, University of California, Irvine.

Eiselt, K. P., and Holbrook, J. K. 1991. Toward a unified theory of lexical error recovery. In Proc. of the Thirteenth Annual Conference of the Cognitive Science Society, 239-244. Hillsdale, NJ: Lawrence Erlbaum.

Forster, K. I. 1979. Levels of processing and the structure of the language processor. In Cooper, W. E., and Walker, E. C. T. eds. Sentence processing: Psycholinguistic studies presented to Merrill Garrett, 27-85. Hillsdale, NJ: Lawrence Erlbaum.

Frazier, L. 1987. Theories of sentence processing. In Garfield, J. L. ed. Modularity in knowledge representation and natural-language understanding. Cambridge, MA: MIT Press.

Granger, R. H., Eiselt, K. P., and Holbrook, J. K. 1983. STRATEGIST: A program that models strategy-driven and content-driven inference behavior. In Proc. of the National Conference on Artificial Intelligence, 139-147. San Mateo, CA: Morgan Kaufmann.

Holbrook, J. K. 1989. Studies of inference retention in lexical ambiguity resolution. Ph.D. diss., School of Social Sciences, University of California, Irvine.

Holbrook, J. K., Eiselt, K. P., and Mahesh, K. 1992. A unified process model of syntactic and semantic error recovery in sentence understanding. In Proc. of the Fourteenth Annual Conference of the Cognitive Science Society, 195-200. Hillsdale, NJ: Lawrence Erlbaum.

Jurafsky, D. 1992. An on-line computational model of human sentence interpretation. In Proc. of the Tenth National Conference on Artificial Intelligence, 302-308. San Mateo, CA: Morgan Kaufmann.

Kurtzman, H. S. 1985. Studies in syntactic ambiguity resolution. Ph.D. diss., Dept. of Psychology, Massachusetts Institute of Technology.

Lebowitz, M. 1980. Generalization and memory in an integrated understanding system, Research Report, 186, Dept. of Computer Science, Yale University.

Lytinen, S. L. 1984.
The organization of knowledge in a multi-lingual, integrated parser, Research Report, YALEU/CSD/RR 340, Dept. of Computer Science, Yale University.

Marcus, M. P. 1980. A theory of syntactic recognition for natural language. Cambridge, MA: MIT Press.

Pollack, J. B. 1987. On connectionist models of natural language processing, Technical Report, MCCS-87-100, Computing Research Laboratory, New Mexico State University.

Riesbeck, C. K., and Martin, C. E. 1986. Towards completely integrated parsing and inferencing. In Proc. of the Eighth Annual Conference of the Cognitive Science Society, 381-387. Hillsdale, NJ: Lawrence Erlbaum.

Seidenberg, M. S., Tanenhaus, M. K., Leiman, J. M., and Bienkowski, M. 1982. Automatic access of the meanings of ambiguous words in context: Some limitations of knowledge-based processing. Cognitive Psychology 14:489-537.

Stowe, L. A. 1991. Ambiguity resolution: Behavioral evidence for a delay. In Proc. of the Thirteenth Annual Conference of the Cognitive Science Society, 257-262. Hillsdale, NJ: Lawrence Erlbaum.

Taraban, R., and McClelland, J. L. 1988. Constituent attachment and thematic role assignment in sentence processing: Influences of content-based expectations. Journal of Memory and Language 27:597-632.

Tyler, L. K., and Marslen-Wilson, W. D. 1977. The on-line effects of semantic context on syntactic processing. Journal of Verbal Learning and Verbal Behavior 16:683-692.

Waltz, D. L., and Pollack, J. B. 1985. Massively parallel parsing: A strongly interactive model of natural language interpretation. Cognitive Science 9:51-74.

Woods, W. A. 1973. An experimental parsing system for transition network grammars. In Rustin, R. ed. Natural language processing. New York: Algorithmics Press.
Efficient Heuristic Natural Language Parsing

Christian R.
Artificial Intelligence Laboratory
The University of Michigan
1101 Beal Ave.
Ann Arbor, MI 48109
(313) 936-3667
e-mail: chris@engin.umich.edu

Abstract

Most artificial natural language processing (NLP) systems make use of some simple algorithm for parsing. These algorithms overlook the inextricable link between parsing natural language and understanding it. Humans parse language in a linear fashion. Our goal is to develop an NLP system that parses in a linear and psychologically valid fashion. When this goal is achieved, our NLP system will be efficient, and it will generate the correct interpretation in ambiguous situations.

In this paper, we describe two NLP systems, whose parsing is driven by several heuristics. The first is a bottom-up system which is based on the work of (Ford, Bresnan & Kaplan 1982). The second system is a more expansive attempt, incorporating the initial heuristics and several more. This system runs on a much larger domain and incorporates several new syntactic forms. It has its weaknesses, but it shows good progress toward the goal of linearity.

Keywords: Natural Language Processing, Parsing, Heuristic Reasoning

Introduction

Natural language is inherently ambiguous. Even unambiguous sentences often contain local ambiguities, which cannot be resolved without taking into account the overall context. However, despite the prevalence of ambiguity, humans are able to process natural language in linear time in the average case. How can we develop a natural language processing (NLP) system that is as efficient as humans?

One common area of NLP research is in grammar formalisms. Unfortunately, there seems to be a tradeoff between the expressiveness of formalisms and the efficiency of the parsing algorithms that they yield.
While there is disagreement as to the expressiveness required to describe the syntax of English and other natural languages, most computational linguists seem to agree that grammar formalisms which are expressive enough to capture the full syntax of natural languages are too powerful to yield efficient parsing algorithms. Even context-free grammars, one of the simpler formalisms widely used, can only be parsed in O(n³) time. Other, more powerful formalisms yield even more inefficient parsing algorithms.

A common approach to improving parsing efficiency is to devise an algorithm which cannot parse all possible sentences, but which efficiently parses those sentences which it can parse. Let us call this the restricted parser approach to efficiency. The hope is that the parsable sentences correspond to those constructions which people use most commonly, and are able to successfully understand; while sentences that cannot be parsed, although perhaps technically in the language, correspond to pathological examples (for people) such as garden paths. For example, Marcus achieved determinism in his PARSIFAL system (Marcus 1980) by limiting the parser's lookahead to at most three constituents, and argued that many English garden paths require a larger lookahead window. Similarly, Blank's register vector grammar parser (Blank 1989) used a finite (and small) number of registers to store previous parser states, thereby restricting the parser's backtracking capabilities.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

While it is beyond the scope of this paper to argue in detail against the restricted parser approach, we feel it suffers from two difficulties. First, it is not clear that the sentences that these parsers can successfully process really do correspond to those sentences which are commonly used by people.
In particular, psycholinguistic evidence indicates that some sentences predicted to be garden paths by theories of limited lookahead or limited backtracking are in fact easily understood by people (e.g., Crain & Steedman 1985). Second, these systems do not gracefully degrade: either a sentence is parsable in linear time, or it cannot be parsed at all.¹ NLP systems should be capable of parsing unusual syntactic constructions, even if they cannot be parsed as efficiently as more commonly used constructions.

As an alternative to the restricted grammar approach, we offer an approach based on the use of parsing heuristics, which are used to select which grammar rules to apply at each step in the parse. The heuristics utilize a combination of syntactic and semantic information to determine which rule to apply next. While these heuristics do not improve our parser's worst-case complexity, we have found that they do improve the system's average-case performance. Empirical testing has indicated that the system parses sentences in linear time on average. In addition to achieving improved average-case complexity, our system also degrades gracefully. Those sentences which mislead the heuristics are still successfully parsed, though not in linear time. Another major benefit is that when our heuristics succeed, the correct interpretation is generated; i.e., the one that a human would generate.

Our heuristic approach has been implemented as part of the LINK system (Lytinen 1992), which is one of several unification-based grammar systems (Shieber 1986). This system uses a bottom-up parser with a unification-based grammar and a chart mechanism. The particular heuristics we have encoded have been inspired by the work of Ford, Bresnan, and Kaplan (1982), although we are using an extension of the set of heuristics proposed by them.

In the rest of this paper, we first review some prior work on parsing natural language.
Then we describe an initial system which we implemented, that uses heuristics to parse a small subset of English in linear time. Next, we describe initial attempts at heuristic parsing in a 'real' domain. Finally, we conclude with a discussion of some problems we have encountered and future directions.

Prior Work

The knowledge of grammars and the ability to parse input using these grammars has advanced rapidly since the advent of the computer. Work in parsing formal languages advanced rapidly in the 1960s, and this advancement aided the development of programming languages and compilers for these languages. These formal methods have been applied to natural languages, but they have met with less success.

¹Marcus (1980) discusses possible techniques for parsing garden path sentences, but these were not implemented in PARSIFAL.

In this section we first discuss some algorithmic approaches to parsing, including some results from formal language theory. After this, we discuss some other work that has been done in heuristic approaches to natural language parsing.

Algorithmic Approaches

Early work on formal languages was done by (Chomsky 1956), where he introduced context-free grammars. Backus adapted context-free grammars into BNF notation, which was used for formally describing Algol 60 (Naur 1963). Efficient algorithms were designed to parse regular languages in linear time, significant subsets of context-free languages in linear time, and all context-free languages in cubic time. However, natural language is not easily understood in formal language terms. The position of natural languages on the scale of complexity of formal languages is under debate, though most current systems assume that natural language is at least context-free. This assumption is implied by the use of context-free grammar rules. Others, e.g. (Blank 1989), feel that natural language is a regular language.
Blank's argument is that humans are finite; therefore, our language can be defined by a finite automaton. We are indeed finite, but our language may be more easily described by context-free rules.

There has also been work in fast algorithms for parsing natural language. Tomita (1987) has developed a parser which in the average case behaves in O(n log n) time, though it is an O(n²) worst-case algorithm. This work is based on Earley's (Earley 1970) fast algorithm work. Its weakness is threefold: first, it functions on a restricted context-free grammar. Second, while more efficient than many parsers, it is not linear. Third and most important, it is not psychologically valid. It generates all grammatically valid parses, and humans do not do this. Similarly, it has no means of choosing the correct interpretation (the interpretation humans produce) from all of the grammatical interpretations.

Heuristic Approaches

Early studies of heuristics focused on a small number of heuristics which specified parsing decisions. Kimball (1973) was a pioneer in this field and introduced the principles of right association, early closure, and others. These heuristics are quite useful, but nothing is said about how they are related to each other.

Another approach was to build a heuristic parser to account for linear parsing. Marcus (1980) built PARSIFAL, Ford, Bresnan and Kaplan (1982) built a system which exploited their lexical preference heuristic, and Frazier and Fodor (1978) built the sausage machine. The main drawback of all of these systems is that they functioned only in small domains. It is not clear that they would scale up to a larger domain.

Blank (1989) designed a system with linearity in mind. His system used a regular grammar with registers to account for embedded constructs. This approach has certain limitations since some constructs can be indefinitely embedded.
Furthermore, the number of rewrite rules needed for any significant subset of English would be enormous. Blank's heuristics are embedded into his parsing mechanism, so he really has nothing to say about heuristics. His work does, however, provide an excellent statement on linearity. Blank's results are impressive, but again his system only works on a small subset of English.

A System for a Small Domain

Our first attempt at an efficient heuristic parser was based on the work of (Ford, Bresnan & Kaplan 1982). The heuristics they implemented included the final arguments, syntactic preference, and lexical preference heuristics. The final arguments heuristic has been mentioned by others including Kimball (1973). It states that the attachment of the final argument of a verb phrase should have low priority; this delayed attachment allows the final argument to have extra modifiers attached to it. The syntactic preference heuristic states that a word which is in multiple lexical categories should be expanded to the strongest category first. For instance, the word that can be a pronoun or a complementizer; the pronoun is the stronger category, so its interpretation as a pronoun should be preferred.

The most important heuristic is the lexical preference heuristic; it states that lexical items will have a preference for certain semantic items. For instance, in the sentence The woman positioned the dress on the rack, the lexical item positioned will prefer a location phrase. For this reason, on the rack should be attached to the verb phrase positioned, instead of being attached to the noun phrase the dress. It is important to note that this heuristic utilizes both syntactic and semantic information.

We implemented Ford et al.'s heuristics in the LINK system, enhanced with several others of our own. One of our heuristics, the phrase creation heuristic, is a special case of Kimball's (1973) right attachment rule.
The heuristic prefers to attach incoming lexical items to the most recent open constituent, provided that constituent is an 'elementary' one (e.g., noun phrase, prepositional phrase, ...) rather than a more complex constituent (e.g., subordinate clause). Thus, in The air force pilots, although force and pilots could be interpreted as verbs, this heuristic prefers to interpret them as nouns, since then the string can be parsed as a single noun phrase. Note that this heuristic applies only to simple phrases and would not apply to a phrase like the house on the hill, which contains two simple phrases. This heuristic is not perfect, because it is strictly syntactic. For example, in the sentence The document will reiterate, will is initially read as part of the noun phrase The document will. While this is valid, a more robust method would read will as a modal verb. Eventually the correct interpretation is made, but in this example heuristic parsing breaks down.

388 Huyck

Another of our heuristics, the left to right heuristic, causes the parser to build constituents further left in the sentence before those further right. These heuristics were simply implemented as a Lisp subroutine. The routine was passed a list of the possible rules that could be applied at each point in the parse, and returned the rule which should apply.

Figure 1: Near-Optimal Heuristic Parses (x-axis: length of sentence; y-axis: number of rules applied)

Figure 1 shows the performance of this simple system on a set of sentences from (Ford, Bresnan & Kaplan 1982). The horizontal axis of Figure 1 shows the length of the sentence, and the vertical axis shows the number of rules applied in parsing the sentence. The dotted line signifies the number of rules that the system applies, and the solid line signifies the minimum number of rules that can be applied to get the correct parse.
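The rule-selection subroutine described above (handed the candidate rules applicable at the current point in the parse, and returning the one to try first) can be sketched as a scoring dispatcher. The rule names, state fields, and weights below are illustrative assumptions, not the system's actual Lisp encoding:

```python
# Sketch of a heuristic rule selector. Each heuristic maps a candidate
# grammar rule to a preference score; the rule with the highest combined
# score is tried first. All names and weights here are hypothetical.

def left_to_right(rule):
    # Prefer rules over constituents that start earlier in the sentence.
    return -rule["start"]

def phrase_creation(rule):
    # Prefer attachments into simple ('elementary') phrases such as NP, PP.
    return 2 if rule["parent"] in ("NP", "PP") else 0

def lexical_preference(rule):
    # Prefer attachments the head word is known to expect (e.g. a location
    # phrase after "positioned").
    return 3 if rule.get("preferred_by_head") else 0

HEURISTICS = [left_to_right, phrase_creation, lexical_preference]

def select_rule(candidate_rules):
    """Return the candidate rule with the highest combined heuristic score."""
    return max(candidate_rules, key=lambda r: sum(h(r) for h in HEURISTICS))

# Toy PP-attachment choice, echoing "positioned the dress on the rack":
candidates = [
    {"name": "attach-PP-to-NP", "start": 3, "parent": "NP",
     "preferred_by_head": False},
    {"name": "attach-PP-to-VP", "start": 3, "parent": "VP",
     "preferred_by_head": True},
]
print(select_rule(candidates)["name"])  # attach-PP-to-VP
```

With these toy weights, lexical preference outweighs simple phrase creation, mirroring the attachment of on the rack to the verb phrase in the example above.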
In general, the minimum number of rules increases linearly with the length of the sentence. For more than half of the sentences, our system generates the correct parse in the minimum number of rule applications. The system does no worse than 1.5 times the minimum number of rule applications needed. All of these unnecessary rule applications are due to the nature of the phrase creation heuristic.

The success of this system is not surprising. Bresnan implemented these heuristics on a top-down system and came up with the appropriate results. It simply shows that these heuristics can be implemented in a bottom-up parser, using a unification-based grammar.

A System for an Open Domain

After completing this simple implementation, we started scaling up our system to a more complex set of sentences. We chose the corpus from the Fourth Message Understanding Conference (MUC-4) (DARPA 1992), a collection of 1600 articles from newspapers and other sources that describe terrorist events which occurred in Latin America during 1989-1991. Here is a typical sentence from this corpus for which our heuristics successfully guide the parse:

Farabundo Marti National Liberation Front detachments have conducted the largest military operation in the entire history of the Salvadoran conflict in the country's capital.

To see if our simple heuristic parsing system would scale up, we randomly chose a set of 100 sentences from the MUC-4 corpus. Using our simple rule selection heuristics in this test resulted in a miserable failure. Unlike Ford et al.'s examples, the MUC-4 sentences tend to be much longer, and contain a much wider variety of syntactic phenomena not encountered in the earlier examples. In fact, not one sentence was completely parsed by the above heuristics; instead the parser had to fall back into general search to arrive at a complete interpretation.
It was readily apparent that the few heuristics that we had used would be insufficient to handle all of these new sentences.

Expanding the System

The new system consisted of many parsing heuristics. All heuristics from the earlier system were ported to this system. This included the left to right mechanism, the simple phrase creation heuristic, the minimal attachment heuristic, the syntactic preference heuristic, and the lexical preference heuristic.

We also augmented the system with several new heuristics. The least complex heuristic was the particle grabbing heuristic, which is a method to handle compound verbs. For instance, in sentences containing the phrasal verb blow up, often the particle up is separated from the verb blow. Thus, in the sentence The terrorists will blow the car up, there is an ambiguity when up is encountered as to whether this word should be interpreted as a particle (attached to blow) or a preposition (in which case a noun phrase should follow). In this situation, the particle grabbing heuristic prefers to interpret up as a particle.

We also added several heuristics for determining when verbs should be interpreted as part of a subordinate clause, as opposed to the main verb of the sentence. For example, if a noun phrase was followed by a comma and a verb, the verb was interpreted as beginning a passive relative clause. For example, in two soldiers, shot..., the passive interpretation of shot would be preferred. On the other hand, without an explicit marker such as a comma or a relativizer ('who' or 'that'), the active interpretation of the verb was preferred. Similar heuristics were added for other types of clauses, such as infinitive clauses.

In summary, the heuristics that were implemented in the extended system were:

• Psychological Parsing Mechanism Heuristics
  1. Left-to-Right Parsing
  2. Delay Attaching a Phrase Until it is Complete
• Syntactic Combination Heuristics
  1. Simple Phrase Creation
  2. Particle Grabbing
  3. Conjunctions of Noun Phrases
• Semantic-Syntactic Phrase Combination Heuristics
  1. Create a Pre-Subject
  2. Create a Relative Clause
  3. Create a Sentential Complement
  4. Create an Infinitival Complement
  5. Combine a Verb Phrase with the Subject
• Search Control Heuristics
  1. Combine Large Phrases
  2. Apply a Rule Which Will Complete the Parse

Another heuristic that we added was a search control heuristic. It was only used after standard heuristic parsing broke down. It did not function in a left to right manner, but combined large adjacent phrases. In this context, large phrases are anything that is a simple phrase or larger. This counterbalanced the effects of syntactic constructs that were not explicitly accounted for by the parsing heuristics. This type of heuristic can be very useful in a natural language system, but it should only be used as a last resort; since it is not psychologically valid, it may give the wrong answer among ambiguous interpretations.

Test Results

Figure 2 shows the results of our test on the MUC-4 corpus using our enhanced heuristics. The horizontal axis shows the length of the sentence, and the vertical axis shows the number of rules applied in parsing the sentence. The solid line shows the minimum number of rules needed to generate the correct parse. Our system's performance is broken down into two parts. First, the dotted line represents the number of rules applied when the heuristics succeed in generating a correct parse. On other sentences, the heuristics fail to generate a complete parse, and the system falls back into undirected bottom-up chart parsing. These sentences are represented by the bold dotted line. In these cases, there is still an improvement over parsing without heuristics, because some structure has been built by the heuristics before the system reverts to its undirected processing.
The result of a completely undirected parsing (i.e., parsing without our heuristics) is shown by the dashed and dotted line. Note that there is a ceiling on the chart, and this is represented by the top line labeled 500. As sentences get longer, depth-first search can lead to thousands of rules being generated.

The heuristics perform quite well, particularly when they succeed in generating a complete parse. The system with heuristics performs better than without them on all of the sentences. More importantly, when the heuristics succeed, statistical analysis shows that the number of rules applied increases linearly with sentence length. On the 35 successfully parsed sentences, using a weighted R² analysis, the best fit polynomial was 4.16 + 2.15x (x = sentence length); R² = 0.929. That is, the number of rules applied grows at roughly 2.15 times the length of the sentence. As in the first system, most of these extra rules are due to the nature of the simple phrase creation heuristic.

Figure 2: Heuristic Parsing Results from an Open Domain (x-axis: length of sentence; y-axis: number of rules applied; curves: Success, Partial, Unrestricted)

Even when the heuristics fail, the number of rules applied is significantly less than without heuristics. The reason for this is that the heuristics have built up some partial structure. When this is the correct partial structure, search from this point is much shorter than general search. The heuristics have, in effect, reduced the size of the search space.

Analysis of Test Results

Although our test data shows several successes, it also shows a number of failures. These failures are an improvement upon prior systems, but they do not achieve the desired linear parsing time. There are two reasons for the failures. First, our heuristics still fail at several points, indicating that they are incomplete. Second, many constructs are not accounted for by the heuristics.
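The linearity claim above rests on a least-squares fit of rule applications against sentence length. The reported fit (4.16 + 2.15x, R² = 0.929) used a weighted analysis; the basic computation can be sketched with an ordinary (unweighted) least-squares fit using only the standard library. The length/count pairs below are made up for illustration and are not the paper's data:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y ≈ a + b*x; returns (a, b, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot           # coefficient of determination
    return a, b, r2

# Illustrative (made-up) data: sentence length -> rules applied.
lengths = [6, 12, 14, 17, 20, 23, 26]
rules = [17, 30, 34, 41, 47, 54, 60]
a, b, r2 = linear_fit(lengths, rules)
print(round(a, 2), round(b, 2), round(r2, 3))
```

A slope near 2 rules per word with R² close to 1, as here, is the kind of evidence the paper offers for average-case linearity.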
Virtually all of the heuristics occasionally fail. For instance, the relative clause heuristic may not complete a relative clause. In the sentence fragment the two people who were killed in the attack, the heuristics guide the parser to find an interpretation in which the relative clause is finished after killed and in the attack becomes the location of the people. Similarly, the simple phrase creation heuristic occasionally fails; it sometimes incorrectly combines two adjacent noun phrases into one phrase. For instance, in the fragment alleged terrorists today killed, alleged terrorists is grouped with today. This eventually leads to a missing subject. In our current system, this type of failure is particularly harmful, because the phrase combination heuristics depend on the correct simple phrases; all phrase combinations are tried before the original phrases are broken into smaller phrases.

The reason for the failures of these heuristics is their lack of semantic knowledge. Except for the lexical preference heuristic and the filling of verb frames, all of the heuristics we used are purely syntactic. To make them more effective, they must use more semantics. We explained earlier how the phrase creation heuristic fails because of its strictly syntactic basis. A given rule should be preferred because it is syntactically and semantically the strongest rule; in this preliminary study our heuristics were mostly syntactic.

The largest problem is that several constructs are not handled by the heuristic parser. For instance, only the simplest types of conjunction are handled. This leads to a deterioration in performance when other constructs are encountered. When a conjunctive verb phrase is encountered, the system has no choice but to fall into general search. The solution for this problem is simply encoding more heuristics.

Finally, it must be noted that this test was run on what is still a relatively small domain.
The lexicon and the set of grammatical rules are insufficient for a larger sample of sentences, because lexical, syntactic, and semantic ambiguity would all increase. As the system is made more general, its performance will degrade. Occasionally we can fall back on cheap heuristics and general search, but, in the long run, we will need a much broader range of heuristics to be successful.

Conclusion

We have provided evidence that parsing guided by heuristics can be performed on natural language in linear time in the average case. While we found many of the heuristics suggested in the psychological literature to be useful, these heuristics by themselves were inadequate for the range of constructions found in our limited domain. Our work indicates that a complete system would require a great number of heuristics. These include very broad heuristics, such as the left to right heuristic; heuristics which apply to several constructs, such as the lexical preference heuristics; and narrow heuristics, such as particle grabbing. The left to right heuristic is so intimately related to the serial nature of human natural language processing that it could actually be built into the parsing algorithm. On the other hand, the particle grabbing heuristic is so dependent on a given lexical item that a special heuristic may be needed for each case.

Although our current set of heuristics is still not adequate to handle all English constructions, those that it does handle are parsed in linear time by our system. We expect that additional heuristics will extend this result to a wider variety of constructions. Heuristic parsing mechanisms have proven successful in small domains, but in larger domains, many more heuristics will be required. The problem then becomes one of software engineering. How do we encode all of these parsing heuristics, and how do we handle the potential interactions among them?
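One conceivable answer, sketched here purely as an illustration and not as the authors' proposal, is to encode each heuristic declaratively with an explicit priority, so that interactions are resolved by a single global ordering rather than by ad hoc control code. All names, priorities, and state fields below are hypothetical:

```python
# A declarative registry: each heuristic is a (priority, function) pair.
# Higher priority wins when several heuristics match the same parse state.
HEURISTICS = []

def heuristic(priority):
    """Decorator that registers a heuristic with an explicit priority."""
    def register(fn):
        HEURISTICS.append((priority, fn))
        HEURISTICS.sort(key=lambda pair: -pair[0])  # keep highest first
        return fn
    return register

@heuristic(priority=10)
def particle_grabbing(state):
    # e.g. in "blow the car up", prefer reading "up" as a particle of "blow"
    if state.get("phrasal_verb_open") and state.get("next_word") == "up":
        return "attach-particle"
    return None

@heuristic(priority=1)
def simple_phrase_creation(state):
    # extend an open elementary phrase (NP, PP, ...) when one exists
    if state.get("open_simple_phrase"):
        return "extend-simple-phrase"
    return None

def choose_action(state):
    """Return the action of the highest-priority heuristic that matches."""
    for _, fn in HEURISTICS:
        action = fn(state)
        if action is not None:
            return action
    return "fall-back-to-chart-search"

print(choose_action({"phrasal_verb_open": True, "next_word": "up"}))
```

A registry like this also makes the fallback to undirected chart search explicit: it is simply the action chosen when no heuristic matches.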
The need for a useful method of encoding parsing heuristics becomes even more apparent in real domains. In addition to the large variety of syntactic and semantic phenomena that are encountered in such domains, there is a greater likelihood of encountering ungrammatical texts. From the system's point of view, a sentence may be ungrammatical for two reasons. First, the input may simply be improper English. Second, the grammar that was provided may be insufficient. In either case, there will be a mismatch between the grammar rules and the structure of the input, and in either case, the system will have to use some kind of search techniques to find a reasonable match between grammar and input. Another problem is encountering unknown words. An intelligent parser can aid or solve all of these problems.

A final note must be made on the encoding of preferences in the form of heuristics. If a sentence is not parsed by an NLP system the way humans parse it, it is not parsed correctly. In an ambiguous grammar, it may be that several grammatically valid interpretations can be generated, but if people only generate one, then the NLP system must only generate one. Since the reason for interpreting sentences is to interpret them the way that humans do, our systems must parse like humans. Thus, any biases in interpretation by people must be reflected in search heuristics used in an NLP system.

References

Blank, Glenn D. 1989. A Finite and Real-Time Processor for Natural Language. Comm. ACM 32(10):1174-1189.
Chomsky, Noam. 1956. Three models for the description of language. IRE Trans. on Information Theory, pp. 113-124.
Crain, S. and Steedman, M. 1985. On not being led up the garden path: the use of context by the psychological syntax processor. In Dowty, D., Karttunen, L., and Zwicky, A. (eds.), Natural Language Parsing: Psychological, Computational, and Theoretical Perspectives. New York: Cambridge University Press, pp. 320-358.
Defense Advanced Research Projects Agency. 1992. Proceedings of the Fourth Message Understanding Conference (MUC-4), McLean VA. San Mateo, CA: Morgan Kaufmann Publishers.
Earley, J. 1970. An efficient context-free parsing algorithm. Comm. ACM 13(2):94-102.
Ford, Marilyn, Joan Bresnan, and Ronald Kaplan. 1982. A competence-based theory of syntactic closure. In The mental representation of grammatical relations, ed. by Joan Bresnan. Cambridge, MA: MIT Press.
Frazier, Lyn, and Janet Dean Fodor. 1978. The sausage machine: A new two-stage parsing model. Cognition 6:291-325.
Kimball, John P. 1973. Seven principles of surface structure parsing in natural language. Cognition 2(1):15-47.
Lytinen, S. 1992. A unification-based, integrated natural language processing system. Computers and Mathematics with Applications 23(6-9):403-418.
Marcus, Mitchell P. 1980. A theory of syntactic recognition for natural language. Cambridge, MA: MIT Press.
Naur, P. ed. 1963. Revised report on the algorithmic language Algol 60. Comm. ACM 6(1):1-17.
Shieber, Stuart M. 1986. An Introduction to Unification-Based Approaches to Grammar. Stanford, CA: Center for the Study of Language and Information.
Tomita, M. 1987. Efficient Parsing for Natural Language. Norwell, MA: Kluwer Academic Publishers.
Towards a Reading Coach that Listens: Automated Detection of Oral Reading Errors

Jack Mostow, Alexander G. Hauptmann, Lin Lawrence Chase, and Steven Roth
Project LISTEN, CMT-UCC 215, Carnegie Mellon University
5000 Forbes Avenue, Pittsburgh, PA 15213-3891
mostow@cs.cmu.edu

Abstract¹

What skill is more important to teach than reading? Unfortunately, millions of Americans cannot read. Although a large body of educational software exists to help teach reading, its inability to hear the student limits what it can do. This paper reports a significant step toward using automatic speech recognition to help children learn to read: an implemented system that displays a text, follows as a student reads it aloud, and automatically identifies which words he or she missed. We describe how the system works, and evaluate its performance on a corpus of second graders' oral reading that we have recorded and transcribed.

1. Introduction

Deficiency in reading comprehension has become a critical national problem; workplace illiteracy costs over $225 billion a year (Herrick, 1990) in corporate retraining, industrial accidents, and reduced competitiveness. Although intelligent tutoring systems might help, their inability to see or hear students limits their effectiveness in diagnosing and remediating deficits in comprehension. In an attempt to address this fundamental limitation, we are building on recent advances in automated speech processing, reading research, and high-speed computing. We have dubbed this effort Project LISTEN (for "Language Instruction that Speech Technology ENables"). This paper reports our initial results.

To place these results in context, imagine how automated speech recognition may eventually be used in an interactive system for assisting oral reading. The system displays text on the screen and listens while the student (a child, illiterate adult, or foreign speaker) reads it aloud.
When the student gets stuck or makes a serious mistake, the system intervenes with the sort of assistance a parent or teacher might provide, such as saying the word, giving a hint, or explaining unfamiliar vocabulary. Afterwards, it reviews the passages where the student had difficulty, giving an opportunity to reread them, and providing appropriate feedback. This paper reports on a scaled-down version of such a system. Along the way it points out some current limitations and directions for future improvement.

¹This research was supported in part by the National Science Foundation under Grant Number MDR-9154059; in part by the Defense Advanced Research Projects Agency, DOD, through DARPA Order 5167, monitored by the Air Force Avionics Laboratory under contract N00039-85-C-0163; in part by a grant from the Microelectronics and Computer Technology Corporation (MCC); in part by the Rutgers Center for Computer Aids to Industrial Productivity, an Advanced Technology Center of the New Jersey Commission on Science and Technology, at Rutgers University; and in part by a Howard Hughes Doctoral Fellowship to the third author from the Hughes Research Laboratory, where she is a Member of Technical Staff. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the sponsors or of the United States Government.

392 Mostow

2. What Evelyn does (and how)

Our implemented prototype, named Evelyn, displays a page of text on a screen, and listens while someone reads it. While the user is reading, Evelyn dynamically displays what it thinks is the reader's current position in the text, by highlighting the next word to read. This position does not necessarily progress linearly through the text, since the reader may repeat, misread, sound out, insert, or skip words.
Due to the nature of the speech recognition process, the display lags behind the reader; it is intended to show us what the system is doing, rather than to be of pedagogical benefit. When the reader finishes, Evelyn identifies substitutions, deletions, and insertions relative to the original text. Evelyn treats these phenomena as follows:

• Substitutions: Evelyn provides contrastive feedback. To focus the reader's attention, it visually highlights the misread passage of the text on the screen. It plays back what the reader said, and then speaks the same passage correctly using synthesized or pre-digitized speech.

• Deletions: Evelyn provides corrective feedback. It highlights and speaks the text it thinks the reader skipped. For this case there is nothing to play back.

• Insertions: Evelyn deliberately ignores them. Insertions are usually hesitations, sounding out, self-corrections, repetitions, or interjections, rather than genuine misreadings of the text.

Notice that Evelyn uses "missed words" -- words in the text that were not read correctly at least once -- as its criterion for what to give feedback on. This criterion is based on the pedagogical assumption that whether the reader eventually succeeded in reading the word matters more than whether he or she got it right on the first try. Although word misses are a reasonable first-order approximation, finer-grained criteria will be needed to
To identify more effective forms of feedback to implement, we are currently performing "Wizard of Oz" studies in which a human experimenter controls the display by hand to simulate possible system behavior -- including interrupting the reader to provide assistance in the context where it is needed.

3. Relation to previous work

There have been some uses of automated speech recognition in language learning. For example, the Indiana Speech Training Aid (Watson et al, 1989) helps hearing-impaired people improve their speech, by comparing their pronunciation of problem words to that of fluent speakers. (Newton, 1990) describes a commercial product that uses automated speech recognition for a similar purpose in foreign language training. However, these systems are based on isolated word recognition technology, which requires as input a single word or phrase chosen from a fixed vocabulary. The techniques used in isolated word recognition do not handle the continuous speech that occurs in reading connected text. Some more recent work has used continuous speech recognition. (Bernstein et al, 1990) automatically estimated the intelligibility of foreign speakers based on how well their readings of a few sentences matched models trained on native speakers. A system developed at MIT uses a connected speech recognizer to follow the reading of a known text, providing verbal feedback via DECtalk (McCandless, 1992, Phillips et al, 1992). However, systematic evaluation of its accuracy has been limited by the lack of a corpus of disfluent reading. Instead, fluent utterances were used to simulate disfluent reading by pretending that one word in each utterance should have been some other word selected randomly from the lexicon -- a methodology that admittedly fails to capture important characteristics of disfluent reading (McCandless, 1992, p. 12).
There has been more use of speech in the system-to-student direction, thanks to the availability of synthesized or digitized speech. In particular, previous research has documented the benefits of making speech feedback on demand available to children with reading difficulties (Wise et al, 1989, Roth & Beck, 1987, McConkie & Zola, 1987, Reitsma, 1988), and this capability is now available in some commercial educational software (e.g., (Discis, 1991)). Pronouncing a word on demand supports students' reading comprehension -- both directly, by relaxing the bottleneck caused by their deficits in word recognition, and indirectly, by freeing them to devote more of their attentional resources to comprehension processes. Although such assistance can therefore be very useful, its utility is limited by the students' ability and willingness to ask for help when they need it; struggling readers often misread words without realizing it (McConkie, 1990). Alternatives to the approach reported here include using an eyetracker or a user-controlled pointing device to track the reader's position in the text. These alternatives might indeed facilitate text tracking, but at best they could only indicate what the reader was trying to read -- not whether the outcome was successful. Our project differs from previous efforts by drawing on the best available technology, in the form of Bellcore's ORATOR™ speech synthesizer² (Spiegel, 1992) and CMU's Sphinx-II speech recognizer (Huang et al, 1993). ORATOR produces high-quality speech, and is especially good at pronouncing names. Sphinx-II represents the current state of the art in speaker-independent connected speech recognizers, insofar as it was ranked at the top in DARPA's November 1992 evaluations of such systems. However, analysis of oral reading differs from speech recognition in an important way. In speech recognition, the problem is to reconstruct from the speech signal what sequence of words the speaker said.
In contrast, the problem addressed in this paper is to figure out, given the text, where the speaker departed from it.

4. How Evelyn works

In this section we explain how the Evelyn system works. Since Evelyn is built on top of the Sphinx-II speech recognizer, we start with a minimal description of Sphinx-II to distinguish what it already provides from what Evelyn contributes. (For a more detailed description of Sphinx-II, please see (Huang et al, 1993).) Then we explain how we use the Sphinx-II recognizer within the Evelyn system -- that is, how we generate Sphinx-II's knowledge sources from a given text, and how we process Sphinx-II's output.

4.1. Speech recognition with the Sphinx-II system

Sphinx-II's input consists of digitized speech in the form of 16,000 16-bit samples per second from a microphone via an analog-to-digital converter. Sphinx-II's output consists of a segmentation of the input signal into a string of words, noises, and silences. Sphinx-II uses three primary knowledge sources: a database of phonetic Hidden Markov Models, a dictionary of pronunciations, and a language model of word pair transition probabilities. The Hidden Markov Models use weighted transitions between a series of states to specify the acoustic probability of a given phone or noise. The pronunciation dictionary represents the pronunciation of each word as a sequence of phonemes. The language model specifies the linguistic probability that the second word in each pair will follow the first. Sphinx-II operates as follows. The digitized speech is compressed to produce four 16-bit numbers every 10 msecs. This stream of values is matched against the Hidden Markov Models to compute the acoustic probability of each phone at each point in the speech signal. Hypothesized phones are concatenated into words

²ORATOR is a registered trademark of Bellcore.
Natural Language Sentence Analysis 393

using the pronunciation dictionary, and hypothesized words are concatenated into word strings using the language model. Beam search is used to pursue the most likely hypotheses, while unlikely paths are pruned. At the end of the utterance, Sphinx-II outputs the highest-rated word string as its best guess.

4.2. How Evelyn applies Sphinx-II to oral reading

In order to use Sphinx-II, we must supply the phonetic, lexical, and linguistic knowledge it requires to recognize oral reading of a given text. Evelyn's phonetic knowledge currently consists of Sphinx-II's standard 7000 Hidden Markov Models trained from 7200 utterances produced by 84 adult speakers (42 male and 42 female). Sphinx-II uses separate models for male and female speakers. The results reported here used the female models, which we assume work better for children's higher-pitched speech. Evelyn's lexical knowledge is created by a combination of lookup and computation. First the given ASCII text is segmented into individual words, and the words are sequentially numbered in order to distinguish multiple occurrences of the same word. The phonetic pronunciation of each word is then looked up in a pronunciation lexicon of about 32,000 words. However, some words may not be found in this lexicon, such as idiosyncratic words or proper names. Pronunciations for these missing words are generated by the ORATOR speech synthesizer. Finally, the pronunciation lexicon is augmented with individual phonemes to allow recognition of non-word phonetic events that occur when readers sound out difficult words. Evelyn's linguistic knowledge consists of a probabilistic word pair transition grammar. This grammar is generated automatically from the numbered word sequence. It consists of pairs of (numbered) words and the likelihood of a transition from one word to the next. There are currently three kinds of transitions in our language model.
The highest-probability transition is from one word to the next one in the text, which models correct reading. Second, a transition to an arbitrary other word in the text models a repetition or skip. Third, a transition to a phoneme models a non-word acoustic event. The constraint provided by this grammar is critical to the accuracy of the speech recognition. If the grammar weights transitions to correct words too strongly, then word misses will not be detected when they occur. However, if it weights them too weakly, recognition accuracy for correct words will be low. To represent a particular tradeoff, Evelyn uses a linear combination of the three kinds of transitions. As Figure 4-1 shows, the recognized string is Sphinx-II's best guess as to what the reader actually said into the microphone, given the phonetic, lexical, and linguistic knowledge provided. Evelyn compares this string against the original text, and segments the input utterance into correctly spoken words from the text, substitutions, deletions, and insertions. Based on this analysis, Evelyn provides the feedback described earlier in Section 2.

4.3. Text following

Although Evelyn provides corrective feedback only after the reader finishes the page, in the future we would like to interrupt with help when appropriate. Evelyn does provide one prerequisite capability for making such interruptions, namely text following. At present this capability is used merely to provide a visible dynamic indication of where Evelyn thinks the reader is. However, experience with this capability has exposed some challenging technical issues. In order to track the reader's position in the text, Evelyn obtains partial recognition results from Sphinx-II four times a second in the form of the currently highest rated word string. However, these partial results are subject to change. For example, suppose the word "alter" was spoken. The first partial hypothesis may be the determiner "a".
After more speech has been processed, the word "all" might be a candidate. It is not until the whole spoken word has been processed that Sphinx-II will return the correct hypothesis. Moreover, if the subsequent word is "candle", the hypothesis may be revised to reflect the high probability of the phrase "altar candle". In contrast, the phrase "alter ego" would require no modification of the earlier hypothesis of "alter". The point of this discussion is that one cannot merely look at the last word in the current partial recognition result for a reliable estimate. Our initial text following algorithm, which did just that, caused the cursor that indicates the current location in the text to skip around wildly as it tried to follow the reader. The problem is that in order to select reliably among competing hypotheses, the recognizer needs to know the context that follows. This "hindsight dependency" suggests that for some time to come, a real-time speech recognizer will have to lag behind the speaker by a word or two to reduce inaccuracy -- no matter how fast the machine it runs on. Thus pedagogical interruptions triggered by recognition will face a tradeoff between interrupting promptly and waiting for more reliable recognition results. To attack this problem, we developed a more sophisticated heuristic text-following algorithm. It exploits the expectation that the reader will step through the text in a sequential fashion, and yet allows for reading errors without considering short-lived spurious candidates. The revised algorithm has a certain amount of inertia. As it digests partial results, it refrains from moving the cursor until at least n (currently n = 2) consecutive words in the text have been recognized. This heuristic gives us reasonable confidence that a portion of the text has been read correctly and that recognition is stable.
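The inertia heuristic just described can be sketched as follows. This is a simplified reconstruction under our own assumptions (for instance, it scans the whole text for the matching tail and does not disambiguate repeated phrases), not Evelyn's code:

```python
def follow(partial, text, cursor, n=2):
    """Inertia heuristic for text following (simplified sketch):
    move the cursor only when the last n words of the current
    partial hypothesis match n consecutive words somewhere in the
    text; otherwise stay put. Searching the whole text lets the
    cursor catch up after the reader skips to another place."""
    if len(partial) < n:
        return cursor          # not enough evidence yet
    tail = partial[-n:]
    for start in range(len(text) - n + 1):
        if text[start:start + n] == tail:
            return start + n   # highlight the next word to read
    return cursor              # spurious or unstable hypothesis

text = "bob has a brown and white dog".split()
cursor = follow(["bob"], text, 0)              # stays at 0
cursor = follow(["bob", "has"], text, cursor)  # advances to word 2
```

Note how a single unexpected word leaves the cursor in place, while two consecutive recognized words from a new location let it jump there, mirroring the behavior described in the text.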
If Sphinx-II recognizes a word other than the expected next one, this method prevents the cursor from immediately jumping to a new location in the text, since it is likely to represent either a misreading by the reader or a misrecognition by the recognizer. However, when the reader really does skip to another place in the text, the method allows the cursor to catch up, after a short delay during which consecutive words from the new location in the text are recognized.

[Figure 4-1 (diagram, not reproduced): the knowledge sources -- phonetic models, dictionary of pronunciations, and transition probabilities over the numbered text "bob.1 has.2 a.3 brown.4 and.5 white.6 dog.7 named.8 ..." -- feed recognition, followed by selective playback of the missed word. In this example, the reader self-corrected "had" to "has", but misread "Spotty" as "spot". Sphinx-II's actual recognition was much less accurate than the ideal result shown.]
Figure 4-1: Example of how Evelyn should detect a word miss

5. How well Evelyn works

Since Evelyn is only a precursor of an educational system, a true pedagogical evaluation of it would be premature. However, we did measure its performance in a way that would help guide our work. In particular, we needed a suitable evaluation scheme to help us develop, assess, and improve the language models described in Section 4.2 by allowing us to test alternative language models against the same data. We now describe the data we collected for this purpose, the methodology we used to measure performance, and the results we obtained.

5.1. Corpus of oral reading

To evaluate Evelyn's performance, we collected and transcribed a corpus of oral reading, which we are continuing to expand. This paper is based on readings by second graders at two schools in Pennsylvania. Of the 27 speakers, 17 are from Turner School, a public school in Wilkinsburg with a predominantly minority student body. The other 10 are from the Winchester Thurston School, a private school in Pittsburgh.
We made the recordings at the schools, using special-purpose software running on a NeXT workstation to display the texts and digitally record the speech. We used a Sennheiser close-talking headset microphone to reduce the amount of non-task information in our acoustic signal, but did not by any means eliminate it. Our corpus contains many sounds, both speech and non-speech, that are not examples of a reader decoding a text. (Teachers and children are talking in the hallway, doors are slamming, and readers are shuffling and bumping the microphone with their chins.) We selected the reading materials from the Spache graded reading tests (Spache, 1981), since their levels of difficulty are well calibrated and their accompanying comprehension probes have been carefully validated. To accommodate a large display font, we split each text into two or three one-page passages, advancing to each "page" when the child finished the previous one, and administering a comprehension probe when the child completed the text. To obtain examples of reading errors, we chose texts for each subject somewhat above his or her independent reading level, which we estimated by administering Spache's word list test. We wanted text challenging enough to cause errors, but easy enough to produce complete readings. The evaluation corpus used for this paper consists of 99 spoken passages, totalling 4624 text words. The passages average about 47 words in length and 45 seconds in duration. The pace -- about one text word per second -- reflects the slow reading speed typical of early readers. The number of actual spoken words is higher, due to repetitions and insertions. We carefully transcribed each spoken passage to capture such phenomena as hesitations, "sounding out" behaviors, restarts, mispronunciations, substitutions, deletions, insertions, and background noises.
The transcripts contain correctly spoken words, phonetic transcriptions of non-words, and noise symbols such as [breath].

5.2. Accuracy of recognition and missed-word detection

Using the original texts, transcripts, and recognizer outputs over the corpus, we measured both "raw" recognition accuracy and missed-word detection accuracy. To compute recognition accuracy, we compared the string of symbols output by the recognizer against the string of symbols obtained from the transcript. We used a standard dynamic programming algorithm to align symbols from these two strings and count substitutions, deletions, and insertions. These "raw" scores appear mediocre: 4.2% substitutions, 23.9% deletions, and 0.5% insertions. Thus 28.1% of the transcribed symbols are misrecognized, and the total error rate, including insertions, is 28.6%. However, since Evelyn's purpose is to detect missed words, a more useful criterion in this domain is the accuracy of this detection. (Recall that a word that is misread or "sounded out", but eventually produced correctly, is not considered a missed word.) To measure the accuracy of missed-word detection, we counted the words in the text that Evelyn misclassified, either as correct or as missed. Measuring the accuracy of missed-word detection requires a three-way comparison between the original text (what the reader was supposed to say), a transcript of the utterance (what the reader actually said), and the recognizer's output (what the recognizer thinks the reader said). First, the actual word misses in the corpus are identified by comparing the transcripts against the original text, using the alignment routine described earlier. Then the hypothesized misses are identified by comparing the recognizer output against the original text, using the same alignment routine. Finally we check the hypothesized misses against the actual ones.
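The three-way comparison can be sketched as follows. For brevity this sketch uses a greedy in-order match where the paper uses its dynamic programming alignment routine, and the example words are invented:

```python
def missed_words(text, heard):
    """Positions of text words never produced correctly, using a
    greedy in-order match of the spoken stream against the text.
    (A simplification: the paper aligns with dynamic programming,
    and a greedy match is not guaranteed to be optimal.)"""
    missed, j = set(), 0
    for i, word in enumerate(text):
        k = j
        while k < len(heard) and heard[k] != word:
            k += 1
        if k == len(heard):
            missed.add(i)   # this word was never spoken correctly
        else:
            j = k + 1       # consume the spoken stream up to the match

    return missed

text = "bob has a brown and white dog".split()
transcript = "bob had has a brown white dog".split()  # what was actually said
recognized = "bob has a brown white dog".split()      # recognizer's output
actual = missed_words(text, transcript)        # true misses ("and" skipped)
hypothesized = missed_words(text, recognized)  # misses Evelyn would report
```

In this invented example the self-correction "had"/"has" does not count as a miss, while the skipped word "and" is both an actual and a hypothesized miss -- a detection Evelyn would get right.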
Table 5-1: Accuracy of Missed-Word Detection

  Reader Disfluency   Evelyn Coverage   Evelyn Precision
  2.5%                63.6%             60.9%

  Disfluency = (missed words) / (words in text)
  Coverage  = (misses detected) / (words missed)
  Precision = (misses detected) / (missed-word reports)

Table 5-1 summarizes Evelyn's ability to detect word misses in our current corpus: how frequently word misses occurred, what fraction of them Evelyn reported, and what fraction of such reports were true. We computed these three numbers separately for each reading, and then averaged them across the readings, so as to avoid counting the longer, more difficult texts more heavily than the shorter, easier ones. (This methodology also served to discount some "outlier" runs in which our language model caused the recognizer to get lost without recovering.) The first number measures reading disfluency as the percentage of words actually missed by the reader, which varied from zero to 20%, but averaged only 2.5%. That is, the average reader missed only about one word in 40. The second number shows that Evelyn detected these misses almost two thirds of the time -- probably enough to be pedagogically useful. The third number reflects a moderate rate of false alarms. For each properly reported miss, Evelyn often classified a correctly read word as a miss, but a majority of such reports were true. It is instructive to compare these numbers against a "strawman" algorithm that classifies 2.5% of the words in the text as missed, but chooses them randomly. That is, how well can we do if all we know is the average reader's disfluency? Since the strawman chooses these words independently of whether they are actual misses, its expected coverage and precision will also each be 2.5%. How well does Evelyn do by comparison? Its coverage and precision are each about twenty-five times better. Thus the additional information contributed by speech recognition, although imperfect, is nevertheless significant.
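The three per-reading scores and their averaging across readings can be sketched as below; the conventions for readings with no actual or reported misses are our assumption, since the paper does not state them:

```python
def reading_stats(n_text_words, actual, reported):
    """Disfluency, coverage, precision for one reading, per the
    definitions under Table 5-1. `actual` and `reported` are sets
    of missed-word positions. Treating empty sets as perfect scores
    is our assumption, not stated in the paper."""
    hits = actual & reported
    disfluency = len(actual) / n_text_words
    coverage = len(hits) / len(actual) if actual else 1.0
    precision = len(hits) / len(reported) if reported else 1.0
    return disfluency, coverage, precision

# Average the per-reading scores (not pooled counts), as in the
# paper, so long hard texts do not outweigh short easy ones.
readings = [(40, {3}, {3, 17}),    # one real miss caught, one false alarm
            (50, set(), set())]    # fluent reading, nothing reported
stats = [reading_stats(n, a, r) for n, a, r in readings]
averages = [sum(col) / len(stats) for col in zip(*stats)]
```

The hypothetical numbers here are invented purely to exercise the formulas; they are not the corpus figures from Table 5-1.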
These results represent the best of the few language models we have tested so far on the corpus. Further improvements in accuracy may require devising better language models of oral reading and training new phonetic models on a large number of young readers. Besides accuracy, we are concerned with speed, since timely intervention will require keeping up with the reader. For our language model and corpus, the recognizer already runs consistently in between real time and two times real time on a 100+ MIPS DEC Alpha workstation. A modest increase in processing power due to faster hardware should therefore produce the speed required for real-time response.

6. Conclusion

The principal contribution of this work is an implemented system for a new task -- automatically following the reading of a known text so as to detect an important class of oral reading errors (namely, missed words). Its model of oral reading constitutes an initial solution to the problem of constraining a speech recognizer's search when the text is known but the reading is disfluent. We identified hindsight dependency as causing an intrinsic tradeoff between accuracy and immediacy for recognition-driven interrupts, and developed a heuristic text-following algorithm based on this tradeoff. To establish an initial baseline for performance at this new task, we evaluated the performance of Evelyn and its underlying model. We defined performance evaluation criteria that are more appropriate for the task of detecting word misses than is the traditional definition of accuracy in speech recognition. To evaluate our algorithms, we recorded and transcribed a corpus of oral reading by second graders of varying fluency. This corpus is a contribution in itself, since the speech recognition community has not previously had access to a corpus of disfluent oral reading. It is essential to our continued efforts to improve on the baseline defined by Evelyn.
The social significance of our work, if it succeeds, will be its impact on illiteracy: even a one percent reduction in illiteracy would save the nation over two billion dollars each year. But in the long run, the broader scientific significance of this work may be its role in helping to open a powerful new channel between student and computer based on two-way speech communication.

Acknowledgements

We thank Marcel Just, Leslie Thyberg, and Margaret McKeown for their expertise on reading; Matthew Kane, Cindy Neelan, Bob Weide, Adam Swift, Nanci Miller, and Lee Ann Galasso for their various essential contributions; the entire CMU Speech Group for their advice on speech in general and Sphinx in particular; Murray Spiegel and Bellcore for use of their ORATOR speech synthesis system; CTB Macmillan/McGraw-Hill for permission to use copyrighted reading materials from George Spache's Diagnostic Reading Scales; the pupils we recorded at Irving School in Highland Park, NJ, Winchester Thurston School in Pittsburgh, PA, and Turner School in Wilkinsburg, PA, and the educators who facilitated it; and the many friends who provided advice, encouragement, and assistance to get Project LISTEN started.

References

J. Bernstein, M. Cohen, H. Murveit, D. Rtischev, and M. Weintraub. (1990). Automatic evaluation and training in English pronunciation. International Conference on Speech and Language Processing (ICSLP-90). Kobe, Japan.

Discis Knowledge Research Inc. DISCIS Books. 45 Sheppard Ave. E, Suite 802, Toronto, Canada M2N 5W9. Commercial implementation of Computer Aided Reading for the Macintosh computer.

E. Herrick. (1990). Literacy Questions and Answers. Pamphlet. P.O. Box 81826, Lincoln, NE 68501: Contact Center, Inc.

X. D. Huang, F. Alleva, H. W. Hon, M. Y. Hwang, K. F. Lee, and R. Rosenfeld. (1993). The SPHINX-II speech recognition system: An overview. Computer Speech and Language, (in press).

M. McCandless. (May 1992). Word Rejection for a Literacy Tutor. S.B.
Thesis. Cambridge, MA: MIT Department of Electrical and Computer Engineering.

G. W. McConkie. (November 1990). Electronic Vocabulary Assistance Facilitates Reading Comprehension: Computer Aided Reading. Unpublished manuscript.

G. W. McConkie and D. Zola. (1987). Two examples of computer-based research on reading: Eye movement tracking and computer aided reading. In D. Reinking (Ed.), Computers and Reading: Issues for Theory and Practice. New York: Teachers College Press.

F. Newton. (1990). Foreign language training. Speakeasy, Vol. 2(2). Internal publication distributed to customers by Scott Instruments, 1111 Willow Springs Drive, Denton, TX 76205.

M. Phillips, M. McCandless, and V. Zue. (September 1992). Literacy Tutor: An Interactive Reading Aid (Tech. Rep.). Spoken Language Systems Group, 545 Technology Square, NE43-601, Cambridge, MA 02139: MIT Laboratory for Computer Science.

P. Reitsma. (1988). Reading practice for beginners: Effects of guided reading, reading-while-listening, and independent reading with computer-based speech feedback. Reading Research Quarterly, 23(2), 219-235.

S. F. Roth and I. L. Beck. (Spring 1987). Theoretical and Instructional Implications of the Assessment of Two Microcomputer Programs. Reading Research Quarterly, 22(2), 197-218.

G. D. Spache. (1981). Diagnostic Reading Scales. Del Monte Research Park, Monterey, CA 93940: CTB Macmillan/McGraw-Hill.

M. F. Spiegel. (January 1992). The Orator System User's Manual - Release 10. Morristown, NJ: Bell Communications Research Labs.

C. S. Watson, D. Reed, D. Kewley-Port, and D. Maki. (1989). The Indiana Speech Training Aid (ISTRA) I: Comparisons between human and computer-based evaluation of speech quality. Journal of Speech and Hearing Research, 32, 245-251.

B. Wise, R. Olson, M. Anstett, L. Andrews, M. Terjak, V. Schneider, J. Kostuch, and L. Kriho. (1989).
Implementing a long-term computerized remedial reading program with synthetic speech feedback: Hardware, software, and real-world issues. Behavior Research Methods, Instruments, & Computers, 21, 173-180.
On Computing Minimal Models

Rachel Ben-Eliyahu
Cognitive Systems Laboratory
Computer Science Department
University of California
Los Angeles, California 90024
rachel@cs.ucla.edu

Abstract

This paper addresses the problem of computing the minimal models of a given CNF propositional theory. We present two groups of algorithms. Algorithms in the first group are efficient when the theory is almost Horn, that is, when there are few non-Horn clauses and/or when the set of all literals that appear positive in any non-Horn clause is small. Algorithms in the other group are efficient when the theory can be represented as an acyclic network of low-arity relations. Our algorithms suggest several characterizations of tractable subsets for the problem of finding minimal models.

1 Introduction

One approach to attacking NP-hard problems is to identify islands of tractability in the problem domain and to use their associated algorithms as building blocks for solving hard instances, often approximately. A celebrated example of this approach is the treatment of the propositional satisfiability problem. In this paper, we would like to initiate a similar effort for the problem of finding one, all, or some of the minimal models of a propositional theory. Computing minimal models is an essential task in many reasoning systems in Artificial Intelligence, including propositional circumscription [Lif] and minimal diagnosis [dKMR92], and in answering queries posed on logic programs (under stable model semantics [GL91, BNNS91]) and deductive databases (under the generalized closed-world assumption [Min82]). While the ultimate goal in these systems is not to compute minimal models but rather to produce plausible inferences, efficient algorithms for computing minimal models can substantially speed up inference in these systems.
*This work was partially supported by an IBM graduate fellowship to the first author, by NSF grants IRI-9157636 and IRI-9200918, by Air Force Office of Scientific Research grant AFOSR 900136, and by a grant from Xerox Palo Alto research center.

2 Ben-Eliyahu

Rina Dechter
Information & Computer Science
University of California
Irvine, California 92717
dechter@ics.uci.edu

Special cases of this problem have been studied in the diagnosis literature and, more recently, the logic programming literature. Algorithms used in many diagnosis systems [dKW87, dKMR92] are highly complex in the worst case: To find a minimal diagnosis, they first compute all prime implicates of a theory and then find a minimal cover of the prime implicates. The first task is output exponential, while the second is NP-hard. Therefore, in the diagnosis literature, researchers have often compromised completeness by using a heuristic approach. The work in the logic programming literature (e.g. [BNNS91]) focused on using efficient optimization techniques, such as linear programming, for computing minimal models. A limitation of this approach is that it does not address the issue of worst-case and average-case complexities. We want to complement these approaches by studying the task of finding all or some of the minimal models in general, independent of any specific domain. We will use the "tractable islands" methodology to provide more refined worst-case guarantees. The two primary "islands" that we use are Horn theories and acyclic theories. It is known that Horn theories have a unique minimal model that can be found in linear time [DG84]. Our near-Horn algorithms try to associate an input theory with a "close" Horn theory, yielding algorithms whose complexity is a function of this "distance".
For acyclic theories, we will show that while finding one or a subset of the minimal models can be done in output-polynomial time, the task of finding all minimal models is more complex. We will set up conditions under which the set of all minimal models can be computed in output-polynomial time and we will present a tree-algorithm that solves this problem in general. Once we have an efficient algorithm for generating minimal models of tree-like theories, we can apply it to any arbitrary theory by first compiling the theory into a tree. The resulting complexity will often be dominated by the complexity of this compilation process and will be less demanding for "near-tree" theories.

2 Preliminary definitions

A clause is positive if it contains only positive literals and is negative if it contains only negative literals. In this paper, a theory is a set of clauses. A set of literals covers a theory iff it contains at least one literal from each clause in the theory. A set of covers of a theory is complete iff it is a superset of all minimal covers of the theory. A theory is called positive if it is composed of positive clauses only. Given a theory Φ and a set of literals S, the operation Φ⊕S performs unit propagation on the theory Φ∪S. For each theory Φ, nf(Φ) denotes Φ⊕∅. For each model M, pos(M) denotes the set of symbols to which M assigns true. We will sometimes refer to a model as a set of literals, where a negative literal ¬P in the model means that the model assigns false to P and a positive literal P in the model means that the model assigns true to P.

Definition 2.1 (X-minimal model) Let Φ be a theory over a set of symbols L, X ⊆ L, and M a model for Φ. M is an X-minimal model for Φ iff there is no other model M′ for Φ such that pos(M′)∩X ⊂ pos(M)∩X. If M is an X-minimal model for X = L, it will be called simply a minimal model.
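On very small theories, Definition 2.1 can be checked directly by enumerating models. The brute-force sketch below (ours, not one of the paper's algorithms) illustrates the case X = L, i.e., plain minimal models:

```python
from itertools import product

def models(clauses, symbols):
    """Enumerate all models of a CNF theory. A clause is a set of
    literals; 'P' is a positive literal and '-P' a negative one.
    Each model is returned as the set of symbols it makes true."""
    for bits in product([False, True], repeat=len(symbols)):
        valuation = dict(zip(symbols, bits))
        if all(any(not valuation[lit[1:]] if lit.startswith('-')
                   else valuation[lit]
                   for lit in clause)
               for clause in clauses):
            yield frozenset(s for s in symbols if valuation[s])

def minimal_models(clauses, symbols):
    """Models whose positive part does not properly contain the
    positive part of any other model (Definition 2.1 with X = L)."""
    ms = list(models(clauses, symbols))
    return [m for m in ms if not any(other < m for other in ms)]

# The theory {P v Q} has models {P}, {Q}, {P,Q}; only {P} and {Q} are minimal.
print(sorted(sorted(m) for m in minimal_models([{'P', 'Q'}], ['P', 'Q'])))
# -> [['P'], ['Q']]
```

This enumeration is exponential in the number of symbols, which is exactly why the paper seeks tractable classes.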
3 General algorithms

Cadoli [Cad92] has shown that the problem of finding an X-minimal model for a theory is P^NP[O(log n)]-hard. Roughly, this means that it is at least as hard as problems that can be solved by a deterministic polynomial algorithm that uses O(log n) calls to an NP oracle. In Figure 1 we show an algorithm for computing X-minimal models that takes O(n^2) steps and uses O(n) calls to an NP oracle (where n is the number of variables in the theory). In Figure 2 we show a variation of this algorithm that uses a procedure for satisfiability that also returns a model in case the theory is satisfiable. The algorithm suggests the following:

Theorem 3.1 Let C be a class of theories over a language L having the following properties:
1. There is an algorithm α such that for any theory Φ ∈ C, α both decides whether Φ is satisfiable and produces a model for Φ (if there is one) in time O(tC).
2. C is closed under instantiation, that is, for every Φ ∈ C and for every literal L in L, Φ⊕{L} ∈ C.
Then for any theory Φ ∈ C, an X-minimal model for Φ can be found in time O(|X|·tC).

Corollary 3.2 An X-minimal model for a 2-CNF theory Φ can be found in time O(|X|·n), where n is the length of the theory.

Find-X-minimal(Φ, X, M)
Input: A theory Φ and a subset X of the variables in Φ.
Output: true if Φ is satisfiable, false otherwise. In case Φ is satisfiable, the output variable M is an X-minimal model for Φ.
1. If ¬sat(Φ) return false;
2. For i = 1 to n M[i] := false;
3. Let P1, ..., Pn be an ordering on the variables in Φ such that the first |X| variables are all the variables from X.
4. For i := 1 to n do
   If sat(Φ∪{¬Pi}) then Φ := Φ⊕{¬Pi} else Φ := Φ⊕{Pi}, M[i] := true;
5. return true;

Figure 1: Algorithm Find-X-minimal

Find-X-minimal2(Φ, X, M)
1. If ¬model-sat(Φ, M) return false;
2. negX := {P | P ∈ X, ¬P ∈ M}; X := X − negX; Φ := Φ∪{¬P | P ∈ negX};
3. While X ≠ ∅ do
   a. Let P ∈ X;
   b. If ¬model-sat(Φ∪{¬P}, M′) then Φ := Φ⊕{P} else Φ := Φ⊕{¬P}, M := M′;
   c. X := X − {P};
If X = ∅ return true;

Figure 2: Algorithm Find-X-minimal2

However, using a straightforward reduction from VERTEX COVER [Kar72], we can show that if we are interested in finding a minimum cardinality model for a 2-CNF theory (namely, a model that assigns true to a minimum number of symbols), the situation is not so bright:

Theorem 3.3 The following decision problem is NP-complete: Given a positive 2-CNF theory Φ and an integer K, does Φ have a model of cardinality ≤ K?

4 Algorithms for almost-Horn theories

In this section, we present algorithms for computing minimal models of a propositional theory which are efficient for almost-Horn theories. The basic idea is to instantiate as few variables as possible so that the remaining theory will be a Horn theory and then find a minimal model for the remaining theory in linear time.

4.1 Algorithm for theories with only a few non-Horn clauses

Algorithm MinSAT is efficient when most of the theory is Horn and there are only few non-Horn clauses. Given a theory, MinSAT works as follows: It first tries to solve satisfiability by unit propagation. If the empty clause was not generated and no positive clause is left, the theory is satisfiable, and the unique minimal model assigns false to the variables in the remaining theory. If a nonempty set of positive clauses is left, we compute a cover for the remaining set of positive clauses, replace them with the cover, and then call MinSAT recursively on the new theory. If the theory is not satisfiable, or if we are interested in all minimal models, we have to call MinSAT again with a different cover.

Algorithm MinSAT is shown in Figure 3. The procedure UnitInst(Φ, I, Sat) gets a theory Φ and returns nf(Φ). I contains the set of unit clauses used for the instantiations. Sat is false iff the empty clause belongs to the normal form; otherwise Sat is true. The procedure combine(I, M) gets a set of literals I and a set of sets of literals M and returns the set {S | S = W∪I, W ∈ M}.

Automated Reasoning 3

MinSAT(Φ, M)
Input: A theory Φ. Output: true if Φ is satisfiable, false otherwise. In case Φ is satisfiable, the output variable M will contain a set of models for Φ that is a superset of all the minimal models of Φ.
1. Φ := UnitInst(Φ, I, Sat); If not Sat return false;
2. If Φ contains no positive clauses then begin M := {I∪{¬P | P ∈ Φ}}; return true; end.
3. M := ∅; Let Δ be a complete set of covers for the set of all the positive clauses in Φ.
   For each S ∈ Δ do: If MinSAT(Φ∪S, M′) then M := M ∪ combine(I, M′);
4. If M == ∅ then return false else return true;

Figure 3: Algorithm MinSAT

We can show that MinSAT returns a superset of all the minimal models of the theory. We group all the propositional theories in classes Ψ0, Ψ1, ... as follows:
- Φ ∈ Ψ0 iff nf(Φ) has no positive clauses or contains the empty clause.
- Φ ∈ Ψk+1 iff for some Δ that is a complete set of covers for C and for each S in Δ, Φ⊕S belongs to Ψk, where C is the set of positive clauses in nf(Φ).

Note that if a theory has k non-Horn clauses it belongs to the class Ψj for some j ≤ k and that all Horn theories belong to Ψ0. We can show that if Φ ∈ Ψk then the above algorithm runs in time O(nm^k), where n is the length of the input and m the maximum number of positive literals that appear in any clause. This is also the worst case complexity if we are interested only in deciding satisfiability. Since for every k the class Ψk is closed under instantiation, we can use Theorem 3.1 to prove that:

Proposition 4.1 If a theory Φ belongs to the class Ψk for some k, then an X-minimal model for Φ can be found in time O(|X|·nm^k).

Algorithm MinSAT returns a superset of all the minimal models. To identify the set of all minimal models, we need to compare all the models generated. Therefore, the complexity of finding all minimal models for a theory in the class Ψk is O(nm^(2k)).

4.2 Algorithms that exploit the positive graph of a theory

In this section we will identify tractable subsets for satisfiability and for finding all minimal models by using topological analysis of what we call the positive graph of a theory. The positive graph reflects the interactions of the positive literals in the theory.

Definition 4.2 (positive graph of a theory) Let Φ be a theory. The positive graph of Φ is an undirected graph (V, E) defined as follows:
V = {P | P is a positive literal in some clause in Φ},
E = {(P, Q) | P and Q appear positive in the same clause}.
Note that Φ is a Horn theory iff its positive graph has no edges.

Definition 4.3 (vertex cover) Let G = (V, E) be a graph. A vertex cover of G is a set V′ ⊆ V such that for each e ∈ E there is some v ∈ V′ such that v ∈ e.

We take "vertex cover of the theory" to mean "vertex cover of the positive graph of the theory". An algorithm that computes a superset of all minimal models based on a vertex cover of a theory can consider all possible instantiations of the variables in the cover. Each such instantiation yields a Horn theory for which we can find a minimal model (if there is one) in linear time. When we combine the model for the Horn theory with the cover instantiation, a model of the original theory results. We can show that a superset of all minimal models of a theory can be generated in this way. If we are interested only in deciding satisfiability, we can stop once the first model is found. Hence:

Theorem 4.4 If the positive graph of a theory Φ has a vertex cover of cardinality c, then the satisfiability of Φ can be decided in time O(n·2^c), where n is the size of the theory, and an X-minimal model for Φ can be found in time O(|X|·n·2^c). The set of all minimal models of Φ can be found in time O(n·2^(2c)).

In general, the problem of finding a minimum-cardinality vertex cover of a graph is NP-hard.
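The greedy scheme of Figure 1 in Section 3 is easy to prototype. Below is a hedged sketch (our code and names, not the paper's implementation), where a brute-force satisfiability test stands in for the NP oracle; clauses are lists of signed integers over variables 1..n.

```python
from itertools import product

def sat(clauses, n):
    """Brute-force satisfiability test over variables 1..n (oracle stand-in)."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def find_x_minimal(clauses, n, x):
    """Sketch of Figure 1: try to force each variable false while the theory
    stays satisfiable, processing the variables in X first."""
    if not sat(clauses, n):
        return None
    model = {}
    order = list(x) + [v for v in range(1, n + 1) if v not in x]
    for p in order:
        if sat(clauses + [[-p]], n):      # p can still be false
            clauses = clauses + [[-p]]
            model[p] = False
        else:                             # p is forced true
            clauses = clauses + [[p]]
            model[p] = True
    return model
```

Because the variables of X are processed first, the true assignments over X are minimized preferentially, mirroring step 3 of Figure 1; a polynomial SAT procedure (e.g. for 2-CNF) in place of the brute-force test yields Theorem 3.1's bound.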
A greedy heuristic procedure for finding a vertex cover could simply remove the node with maximum degree from the graph and continue with the reduced graph until all nodes are disconnected. The set of all nodes removed is a vertex cover.

Algorithm VC-minSAT (Figure 4) integrates the above heuristic into a backtrack algorithm for finding the minimal models. MaxDegree takes the positive graph as an input and returns a symbol (node) that has a maximum degree. If there is more than one such symbol, it chooses the one that appears in a maximum number of non-Horn clauses in the theory. Update(Φ, G) returns the positive graph of Φ by updating G. We can show that algorithm VC-minSAT produces a superset of all the minimal models.

4 Ben-Eliyahu

VC-minSAT(Φ, M, G)
Input: A theory Φ and the positive graph of Φ, G. Output: true if Φ is satisfiable, otherwise false. If Φ is satisfiable, M contains a superset of all minimal models for Φ.
1. I := ∅; Φ := UnitInst(Φ, I, Sat);
2. If ¬Sat return false; G := Update(Φ, G);
3. If G is disconnected then begin M := {I∪{¬P | P ∈ Φ}}; return true; end.
4. P := MaxDegree(G); Sat := false; M := ∅;
5. If VC-minSAT(Φ∪{P}, M+, G) then M := combine(I, M+);
6. If VC-minSAT(Φ∪{¬P}, M−, G) then M := M ∪ combine(I, M−);
7. If M == ∅ return false else return true

Figure 4: Algorithm VC-minSAT

We should mention here that the idea of initializing variables in a theory until the remaining theory is Horn has been suggested, in the context of solving the satisfiability problem, by Gallo and Scutella [GS88] and was recently extended by Dalal and Etherington [DE92]. The advantages of our approach are that we provide an intuitive criterion for how the variables to be instantiated are selected and we classify the performance of the algorithm using a well-understood and largely explored graphical property, vertex cover. Also note that we could define the negative graph of a theory just as we defined the positive graph.
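The degree-greedy heuristic described above can be sketched as follows (our illustration, not the paper's code; ties here are broken arbitrarily rather than by non-Horn-clause counts):

```python
def greedy_vertex_cover(edges):
    """Repeatedly remove a maximum-degree node until no edges remain;
    the removed nodes form a vertex cover (not necessarily minimum)."""
    edges = {frozenset(e) for e in edges}
    cover = set()
    while edges:
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        v = max(degree, key=degree.get)          # node of maximum degree
        cover.add(v)
        edges = {e for e in edges if v not in e}  # drop its incident edges
    return cover
```

Applied to the positive graph of a theory, the size of the returned cover bounds the c of Theorem 4.4.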
We could then write an algorithm that is analogous to VC-minSAT and is efficient for deciding satisfiability of theories for which the negative graph has a small vertex cover. Clearly, algorithm MinSAT also has an analogous algorithm that considers negative instead of positive clauses.

5 Computing minimal models on acyclic networks of relations

In this section we provide efficient algorithms for theories that can be represented as acyclic networks of low-arity relations. We next define the notions of constraint networks and relations and show how they can represent propositional theories and their satisfying models.

Definition 5.1 (relations, networks, schemes) Given a set of variables X = {X1, ..., Xn}, each associated with a domain of discrete values D1, ..., Dn, respectively, a relation (or, alternatively, a constraint) ρ = ρ(X1, ..., Xn) is any subset ρ ⊆ D1 × D2 × ... × Dn. The projection of ρ onto a subset of variables R, denoted Π_R(ρ) or ρ_R, is the set of tuples defined on the variables in R that can be extended to a tuple in ρ. A constraint network N over X is a set ρ1, ..., ρt of such relations. Each relation ρi is defined on a subset of variables Si ⊆ X. We also denote by ρ(Si) the relation specified over Si. The set of subsets S = {S1, ..., St} is called the scheme of N. The network N represents a unique relation rel(N) defined over X, which stands for all consistent assignments (or all solutions), namely,
rel(N) = {x = (x1, ..., xn) | ∀ Si ∈ S, Π_Si(x) ∈ ρi}.
A partial assignment T = t is a value assignment to a subset of variables T ⊆ X. The operator ⋈ is the join operator in relational databases. If rel(N) = ρ, we say that N describes or represents ρ.

Any propositional theory can be viewed as a special kind of constraint network, where the domain of each variable is {0, 1} (corresponding to {false, true}) and where each clause specifies a constraint (in other words, a relation) on its propositional symbols.
The scheme of a theory is accordingly defined as the scheme of its corresponding constraint network, and the set of all models of the theory corresponds exactly to the set of all solutions of its corresponding constraint network.

Example 5.2 Consider the theory Φ = {¬A ∨ ¬B, ¬B ∨ ¬C, C ∨ D}. This theory can be viewed as a constraint network over the variables {A, B, C, D}, where the corresponding relations are the truth tables of each clause, that is, ρ(AB) = {00, 01, 10}, ρ(BC) = {00, 01, 10}, and ρ(CD) = {01, 10, 11}. The scheme of the theory Φ is {AB, BC, CD}. The set of all solutions to this network (and hence the set of models of Φ) is ρ(ABCD) = {0001, 0010, 0011, 0101, 1001, 1010, 1011}. Note that Φ has two minimal models: {0001, 0010}.

The scheme of a theory can be associated with a constraint graph where each relation in the scheme is a node in the graph and two nodes are connected iff the corresponding relations have variables in common. The arcs are labeled by the common variables. For example, the constraint graph of the theory Φ of Example 5.2 is as follows:

      BC
    B/  \C
   AB    CD

Theories that correspond to a constraint graph that is a tree are called acyclic theories, and their corresponding tree-like constraint graph is called a join tree. We next present two algorithms for computing minimal models for acyclic theories. These algorithms will be extended to arbitrary theories via a procedure known as tree-clustering [DP89], which compiles any theory into a tree of relations. Consequently, given a general theory, the algorithms presented next work in two steps: A join-tree is computed by tree-clustering, and then a specialized tree-algorithm for computing the minimal models is applied. The complexity of tree-clustering is exponential in the size of the maximal arity of the generated relations, and hence our algorithms are efficient for theories that can be compiled into networks of low-arity relations.
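The numbers in Example 5.2 are easy to check by brute force. The following is an illustrative sketch (our code, with A, B, C, D encoded as variables 1..4 and negative integers for negated symbols):

```python
from itertools import product

# Clauses of Example 5.2: {~A v ~B, ~B v ~C, C v D}.
clauses = [(-1, -2), (-2, -3), (3, 4)]

# All 0/1 assignments to (A,B,C,D) satisfying every clause.
models = [bits for bits in product((0, 1), repeat=4)
          if all(any((bits[abs(l) - 1] == 1) == (l > 0) for l in c)
                 for c in clauses)]

def minimal(ms):
    """Models not strictly dominated pointwise by another model (1 > 0)."""
    return [m for m in ms
            if not any(n != m and all(a <= b for a, b in zip(n, m))
                       for n in ms)]
```

Enumerating confirms the example: seven models in all, of which exactly 0001 and 0010 are minimal.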
We should note, however, that even in the cases where tree-clustering is expensive, it might still be useful since it offers a systematic way of representing the models of the theory in a hierarchical structure capable of supporting information retrieval without backtracking.

5.1 Finding a subset of all minimal models

For the rest of Section 5, we will assume that we are dealing with constraint networks that correspond to propositional theories, and hence the domain of each variable is {0, 1} and we have the ordering 1 > 0. We will also assume that we are looking for models that are minimal over all the symbols in the language of the theory, namely, X-minimal models where X is the set of all symbols in the theory.

Definition 5.3 Given a relation ρ defined on a set of variables X, and given two tuples r and t in ρ, we say that t > r iff for some X0 in X, t_X0 > r_X0 and, for all Xi ∈ X, t_Xi > r_Xi or t_Xi = r_Xi. We say that t and r agree on a subset of variables S ⊆ X iff r_S = t_S.

Definition 5.4 (conditional minimal models) Given a relation ρ over X and a subset of variables S ⊆ X, a tuple t ∈ ρ is conditionally minimal w.r.t. S iff there is no r ∈ ρ such that r agrees with t on S and t_{X−S} > r_{X−S}. The set of all conditional minimal models (tuples) of ρ w.r.t. S = s is denoted min(ρ \ S = s). The set of all conditional minimal models (tuples) of ρ w.r.t. S is denoted min(ρ \ S) and is defined as the union over all possible assignments s to S of min(ρ \ S = s). min(ρ \ ∅) is abbreviated to min(ρ).

Example 5.5 Consider the relation ρ(ABCD) = {0111, 1011, 1010, 0101, 0001}. In this case, we have min(ρ) = {1010, 0001}, min(ρ \ {C, D}) = {0111, 1011, 1010, 0001}, and min(ρ \ {A}) = {0001, 1010}.

One can verify that: (1) any minimal tuple of a projection Π_S(ρ) can be extended to a minimal tuple of ρ, but not vice versa; (2) a conditionally minimal tuple is not necessarily a minimal tuple; and (3) a minimal tuple is a conditional minimal tuple w.r.t. all subsets.
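Definition 5.4 can be checked directly on Example 5.5 with a small sketch (our code; cond_min is our name, tuples are indexed by a string of variable names):

```python
def cond_min(rho, variables, s):
    """min(rho \\ S): tuples of rho with no other tuple that agrees on the
    variables in s and is strictly smaller (pointwise) outside s."""
    si = [variables.index(v) for v in s]
    rest = [i for i in range(len(variables)) if i not in si]
    def dominated(t):
        return any(all(r[i] == t[i] for i in si)
                   and all(r[i] <= t[i] for i in rest)
                   and any(r[i] < t[i] for i in rest)
                   for r in rho)
    return {t for t in rho if not dominated(t)}

# The relation of Example 5.5, as tuples over A,B,C,D.
rho = {(0, 1, 1, 1), (1, 0, 1, 1), (1, 0, 1, 0), (0, 1, 0, 1), (0, 0, 0, 1)}
m_all = cond_min(rho, "ABCD", "")    # min(rho)
m_cd = cond_min(rho, "ABCD", "CD")   # min(rho \ {C,D})
m_a = cond_min(rho, "ABCD", "A")     # min(rho \ {A})
```

The three computed sets reproduce the example: {1010, 0001}, {0111, 1011, 1010, 0001}, and {0001, 1010} respectively.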
Next we show that, given a join-tree, a subset of all minimal models can be computed in output-polynomial time. The idea is as follows: Once we have a rooted join-tree (which is pair-wise consistent¹), we can take all minimal tuples in the root node and extend them (via the join operation) with the matching conditional minimal tuples in their child nodes. This can be continued until we reach the leaves. It can be shown that all the models computed in this way are minimal and that they are generated in a backtrack-free manner; however, not all the minimal models will be generated. In order to enlarge the set of minimal models captured, we can reapply the procedure where each node serves as a root. We can show that if every minimal model has a projection that is minimal in at least one relation of the tree, the algorithm will generate all the minimal models. Formally,

Definition 5.6 (parents of S) Given a scheme S = {S1, ..., St} of a rooted join-tree, we associate each subset Si with its parent subset S_p(i) in the rooted tree. We call an ordering d = S1, ..., St a tree-ordering iff a parent node always precedes its child nodes.

Definition 5.7 Let T be a rooted join-tree with S0 at the root. Let ρi be the relation associated with Si and let d = S0, S1, ..., St be a tree-ordering. We define
ρ0(T) = min(ρ0) ⋈_{i=1..t} min(ρi \ S_p(i)).

Theorem 5.8 Let T be a rooted join-tree with a tree-ordering {S1, ..., St}. Then ρ0(T) is a subset of all the minimal models of T, and ρ0(T) can be computed in O(L Σ_{i=1..t} |ρi|) steps, where L is the number of minimal models in the output and ρi is the input relation associated with Si.

Example 5.9 Consider the join-tree that corresponds to the theory Φ in Example 5.2. Assuming BC is the root, we can use the tree-ordering d = BC, AB, CD. Since the tuple (BC = 00) is the only minimal model of ρ(BC), it is selected. This tuple can be extended by A = 0 and by D = 1, resulting in one minimal model of ρ, namely the tuple (ABCD = 0001).
If AB plays the role of a root, we will still be computing the same minimal model. However, when CD plays the role of a root, we will compute the tuple (ABCD = 0010), which is also a minimal model of ρ.

¹Pair-wise consistency, or arc consistency, is a process that, when applied to join-trees, will delete from the join-tree all the tuples that do not belong to any solution. Pair-wise consistency can be achieved in polynomial time.

min1(Φ)
Input: A theory Φ.
Output: A subset of all the minimal models of Φ.
1. Apply tree-clustering to Φ. If the theory is not satisfiable, stop and exit. Else, generate join-tree T. Apply pair-wise consistency to T.
2. For each node R in T do: For each join-tree T′ rooted at R compute ρ0(T′).
3. Output the union of all models computed.

Figure 5: Algorithm min1

min2(T)
Input: A pair-wise consistent join-tree T which corresponds to a theory Φ.
Output: All minimal models of Φ.
1. Traverse the tree bottom up and compute Pmin(R) for each node R visited, using equations (1) and (2).
2. Output Pmin(R0), where R0 is the root node.

Figure 6: Algorithm min2

From Theorem 5.8, it follows that, given an acyclic network or any general backtrack-free network relative to an ordering d, one minimal model can be computed in time that is linear in the size of the network, and the total subset of minimal models ρ0(T) can be computed in time proportional to the size of the set. We summarize this in algorithm min1, given in Figure 5.

Theorem 5.10 (complexity of min1) The complexity of min1 is O(n·2^k + nL|ρ|), where k is the maximum arity of any relation in the join-tree, n is the number of relations, ρ is the largest relation in the generated tree T, and L is the number of minimal models generated.

So min1 is especially efficient when the theory is compiled into a join-tree having relations with low arity. We next present two sufficient conditions for the completeness of algorithm min1.
Theorem 5.11 (sufficient condition) Suppose T is a join-tree having the scheme S = {S1, ..., St}, and suppose that for every minimal model t of T there is a scheme Si ∈ S such that Π_Si(t) is in min(ρ(Si)). Then min1, when applied to T, will generate all the minimal models of T.

Theorem 5.12 (local sufficient condition) Suppose that for every node S in a join-tree T the set min(ρ(S) \ S′) is totally ordered, where S′ is the set of all variables that are common to S and at least one of its neighbors in the tree. Then min1, when applied to T, will generate all the minimal models.

5.2 Listing all minimal models

Algorithm min1 does not necessarily produce all minimal models because, as the following example shows, it is not always the case that all minimal models are minimal within at least one subrelation.

Example 5.13 Consider the join-tree where the variables are {A, B, C, D, E, F, G}, the scheme is a tree (ABC, BCDEF, EFG), and the corresponding relations are ρ(ABC) = {011, 110, 000}, ρ(BCDEF) = {11011, 10100, 00010}, and ρ(EFG) = {110, 000, 101}. The reader can verify that the tuple (0110110) is a minimal model for this network, but its projection relative to any of the relations is not minimal.

We now present a second algorithm, min2, that computes all the minimal models but is not as efficient as min1 in the sense that during processing it may generate models of the theory that are not minimal. Some of those models will be pruned only at the final stage. Nevertheless, we conjecture that the algorithm is optimal for trees.

Basically, algorithm min2 computes partial minimal models recursively while traversing the join-tree bottom up. When we visit a node R, we prune all the partial models that we already know cannot be extended to a minimal model. The resulting subset of partial models is denoted by Pmin(R). More formally, let T_R denote the network rooted at node R and S_R the set of all variables that R shares with its parent.
We define Pmin(R) = min(rel(T_R) \ S_R). Since S_R ⊆ R, Pmin(R) = min(J_R \ S_R), where J_R is defined to be

J_R = min(rel(T_R) \ R).    (1)

Note that for the root node R0, Pmin(R0) is the set of all minimal models of the whole tree (conditioning is on the empty set). We can show that J_R can be expressed recursively as a function of Pmin(U1), ..., Pmin(Un), where U1, ..., Un are R's children:

J_R = ρ(R) ⋈ (⋈_{i=1..n} Pmin(Ui)).    (2)

This allows a bottom-up computation of Pmin(R) starting at the leaf nodes. The algorithm is summarized in Figure 6.

Example 5.14 Consider again the tree-network of Example 5.9. Algorithm min2 will perform the following computations:
Pmin(AB) = min(ρ(AB) \ {B}) = {00, 01},
Pmin(CD) = min(ρ(CD) \ {C}) = {01, 10},
Pmin(BC) = min(ρ(BC) ⋈ (Pmin(AB) ⋈ Pmin(CD)))
         = min(ρ(BC) ⋈ (ABCD = {0001, 0010, 0101, 0110}))
         = min({0001, 0010, 0101}) = {0010, 0001}.

We see that although the theory has 7 models, only 4 intermediate models were generated during the computation. The reader can also verify that algorithm min2 produces all the minimal models of Example 5.13.

We can show that min2 computes all and only the minimal models. The complexity of min2 (without the tree-clustering preprocessing step) can be bounded as follows:

Theorem 5.15 Let r be the maximum number of tuples in any relation ρi in the join-tree, and suppose that for every node R in the join-tree |J_R| ≤ m. Then the complexity of min2 is O(nmr), where n is the number of relations. Consequently, if the ratio between |J_R| and the number of minimal models, l, is less than some c for every node R in the tree, then the number of models generated will be linear in c·l.

6 Conclusion

The task of finding all or some of the minimal models of a theory is at the heart of many knowledge representation systems. This paper focuses on this task and introduces several characterizations of tractable subsets for this problem.
We have presented new algorithms for finding minimal models of a propositional theory. The first group of algorithms is effective for almost-Horn theories. The other group is effective for theories that can be represented as an acyclic network of small-arity relations.

Loveland and colleagues (e.g., [Lov91]) have shown how SLD resolution for first-order Horn theories can be modified to be efficient for near-Horn theories. We use different methods and provide worst-case complexities. Cadoli [Cad92] has described a partition of the set of propositional theories into classes for which the problem of finding one minimal model is tractable or NP-hard. His classification is different from ours but, as in Section 5, is also done by considering the set of logical relations that correspond to the theory. An algorithm that exploits acyclic theories for computing minimum cardinality models is given in [FD92].

The ultimate usefulness of our algorithms must be tested by implementing them in systems that solve real-world problems in diagnosis or logic programming. We believe, however, that in any event the algorithms and the theoretical bounds provided in this paper are of value since the problem of computing minimal models is so fundamental.

Acknowledgments

We thank Yousri El Fattah, Itay Meiri, and Judea Pearl for useful discussions and helpful comments on earlier drafts of this paper. We have benefited from discussions with Adam Grove and Daphne Koller on the topic of computing minimal models. Thanks also to Michelle Bonnice for editing.

References

[BNNS91] C. Bell, A. Nerode, R.T. Ng, and V.S. Subrahmanian. Computation and implementation of non-monotonic deductive databases. Technical Report CS-TR-2801, University of Maryland, 1991.

[Cad92] Marco Cadoli. On the complexity of model finding for nonmonotonic propositional logics.
In Proceedings of the Fourth Italian Conference on Theoretical Computer Science, October 1992.

[DE92] M. Dalal and D. Etherington. A hierarchy of tractable satisfiability problems. IPL, 44:173-180, 1992.

[DG84] W. Dowling and J. Gallier. Linear time algorithms for testing the satisfiability of propositional Horn formulae. Journal of Logic Programming, 3:267-284, 1984.

[dKMR92] J. de Kleer, A.K. Mackworth, and R. Reiter. Characterizing diagnoses and systems. Artificial Intelligence, 56:197-222, 1992.

[dKW87] J. de Kleer and B.C. Williams. Diagnosing multiple faults. Artificial Intelligence, 32:97-130, 1987.

[DP89] R. Dechter and J. Pearl. Tree clustering for constraint networks. Artificial Intelligence, 38:353-366, 1989.

[FD92] Y. El Fattah and R. Dechter. Empirical evaluation of diagnosis as optimization in constraint networks. In DX-92: Proceedings of the Workshop on Principles of Diagnosis, October 1992.

[GL91] Michael Gelfond and Vladimir Lifschitz. Classical negation in logic programs and disjunctive databases. New Generation Computing, 9:365-385, 1991.

[GS88] G. Gallo and M. Scutella. Polynomially solvable satisfiability problems. IPL, 29:221-227, 1988.

[Kar72] R. M. Karp. Reducibility among combinatorial problems. In Complexity of Computer Computations. Plenum Press, 1972.

[Lif85] V. Lifschitz. Computing circumscription. In IJCAI 1985.

[Lov91] D. Loveland. Near-Horn Prolog and beyond. Journal of Automated Reasoning, 7:1-26, 1991.

[Min82] J. Minker. On indefinite databases and the closed world assumption. In Proceedings of the 6th Conference on Automated Deduction. Springer-Verlag, 1982.
Reasoning Precisely with Vague Concepts*

Nita Goyal and Yoav Shoham
Robotics Laboratory, Computer Science Department
Stanford University, Stanford, CA 94305
{nita,shoham}@cs.stanford.edu

Abstract

Many knowledge-based systems need to represent vague concepts. Although the practical approach of representing vague concepts as precise intervals over numbers is well-accepted in AI, there is no systematic method to delimit the boundaries of intervals, only ad hoc methods. We present a framework to reason precisely with vague concepts based on the observation that the vague concepts and their interval-boundaries are constrained by the underlying domain knowledge. The framework is comprised of a constraint language to represent logical constraints on vague concepts, as well as numerical constraints on the interval-boundaries; a query language to request information about the interval boundaries; and a computational mechanism to answer the queries. A key step in answering queries is preprocessing the constraints by extracting the numerical constraints from the logical constraints and combining them with the given numerical constraints.

1 Introduction

The input to an AI system embedded in a real-world environment is often numerical whereas the reasoning is done with abstract symbols. Many abstract symbols embody vague concepts over continuous numerical ranges. To quote Davis, "In some respects, the concepts of commonsense knowledge are vague. ... Many categories of common sense have no well-marked boundary lines; there are clear examples and clear nonexamples, but in between lies an uncertain region that we cannot categorize, even in principle." [Davis 1990]. For example, there is no minimum precise body temperature that a doctor considers high and there is no maximum number of hairs that a person might have and still be considered bald.

The representation of such vagueness poses a problem.
"From a theoretical point of view, this vagueness is extremely difficult to deal with, and no really satisfactory solutions have been proposed." [Davis 1990]. Some of the approaches that try to address this theoretical difficulty are fuzzy logic [Zadeh 1983] and vague predicates [Parikh 1983]. However, it is commonly accepted in AI that, though inadequate theoretically, in practice it is often adequate to assume that a vague concept is precise and that there is indeed a well-defined boundary. In fact, most system builders who encounter the vagueness problem [Hayes-Roth et al. 1989; Shahar, Tu & Musen 1992] adopt a similar approach of representing a vague concept as an interval over the range of numbers. This practical approach is illustrated by an example from [Davis 1990]: "Suppose that "bald" did refer to some specific number of hairs on the head, only we do not know which number. We know that a man with twenty thousand hairs on his head is not bald, and that a man with three hairs on his head is bald, but somewhere in between we are doubtful." This precise representation of the vague concept bald is still useful for reasoning.

Despite the pervasiveness of the vagueness problem, and the pervasiveness of the practical approach of representing vague concepts as intervals, there has been no effort in AI to provide a systematic account of this practical approach. We propose a framework for representing and reasoning with vague concepts as intervals that has the advantages of (1) improving our understanding of the issues involved in the practical approach, and (2) replacing the ad hoc approach used by system designers to delimit the interval-boundaries.

*This research has been supported by grant AFOSR-89-0326.
The framework is based on the observation that vague concepts and their interval-boundaries (also referred to as thresholds) are constrained by the underlying domain knowledge that must be used to reason about the thresholds. We motivate the components of the framework by extending Davis' baldness example.

Example 1:
"Anyone with 3 or fewer hairs is bald and anyone with 20000 or more hairs is not bald"
"All old people are bald" (note that "old" itself is a vague concept that we will assume has a well-defined boundary)
"Anyone who is 50 years or younger is not old whereas anyone over 80 is old"
"All presidents of companies are old"
"Tom's age is 70 years, he has 500 hairs and is the president of a company"
"Jim's age is 75 years and he has 800 hairs"
"Sam's age is 45 and he has 650 hairs"

Is Tom bald? Logical reasoning tells us that since Tom is president of a company, he is old and therefore bald. Note that here we used only the logical relations between the concepts president, old and bald, where old and bald are vague concepts but president is not.

Is Jim bald? We can reason that since Tom is old, the oldness threshold¹ can be at most 70. Since Jim's age is 75, which is over the oldness threshold, he must be old and therefore bald. Note that here we needed numerical reasoning with Tom and Jim's ages and the oldness threshold, as well as logical reasoning that since Jim is old he must be bald.

We can ask if the baldness threshold is necessarily more than 800. Since Jim is bald and has 800 hairs, the baldness threshold must be at least 800. Therefore, the answer to the query is yes and hence anyone with less than 800 hairs is bald. Here we needed numerical reasoning about Jim's hairs and the baldness threshold.

Is Sam bald? Since anyone with less than 800 hairs is bald, and Sam has only 650 hairs, he must be bald.
Here we needed numerical reasoning with the number of hairs on Sam's head and the baldness threshold.

As illustrated by this example, we need to represent both logical relations between symbolic concepts and numerical relations on thresholds. Also, logical as well as numerical reasoning is required to answer the interesting queries. Hence, the proposed framework facilitates this representation and supports queries about the thresholds.

Framework: The framework is comprised of three main parts - a constraint language to express domain knowledge, a query language to query the domain knowledge, and a computational mechanism to answer the queries.

The first part of the framework is a constraint language that captures the domain knowledge. The language enables the expression of logical constraints on the vague concepts as well as numerical constraints on the thresholds of these concepts. Explicit representation of the thresholds is important to represent the numerical constraints and, as we shall see, to ask queries.

The second part of the framework is a query language that extracts relevant information about the thresholds implied by the domain knowledge. In particular, the queries enable us to delimit the thresholds based on the information provided in the domain knowledge². This is exactly what a system designer needs to define intervals for a vague concept that are consistent with the domain knowledge. For example, the answer to the query "what is the minimum permissible value for the baldness threshold?" provides the designer with useful information to define the interval for bald.

¹By oldness threshold we mean that age such that everyone of higher age is old whereas everyone of lower age is not old. The baldness threshold is defined analogously.
²Note that it is not necessary to assign specific values to the thresholds to answer any queries, although this assignment is made much easier in our framework.
The third part of the framework is a computational mechanism to answer the queries in the query language using the domain knowledge expressed in the constraint language.

In Sections 2 and 3, we introduce particular constraint and query languages. In Section 4, we describe a computational mechanism to answer queries for these languages. It includes a sound and complete algorithm, a discussion of the complexity, and experimental results illustrating the applicability of the framework. Our experiments were carried out in the domain of medical diagnosis, where numerical measurements of parameters such as blood pressure and heart rate are abstracted to vague concepts such as high and low blood pressure and used for the diagnosis of the patient's condition. In this paper, for the sake of clarity and understanding, we stick to the more everyday example of bald people.

2 Constraint Language

To express the domain knowledge, the constraint language must have an explicit representation of thresholds, a language to express numerical constraints, and a language to express the logical constraints. We present such a language here, chosen for its familiarity as well as to strike a tradeoff between expressivity and efficiency of answering queries.

The vague predicates in the logical language are distinguished from the other predicates. We refer to the vague predicates, which must all be unary, as interval-predicates and to all other predicates as noninterval-predicates. The set of interval-predicates is denoted by IP, and the set of noninterval-predicates by NIP. With every P ∈ IP we associate two threshold terms P- and P+, called the lower and upper thresholds of P, respectively. The set of all threshold terms is denoted by T (T = {P-, P+ | P ∈ IP}). The interval-predicates will be interpreted in a special way to reflect our intuition about the vague predicates: P will be interpreted as the interval [P-, P+] over ℝ, the set of real numbers.
We will refer to this interpretation as the predicate-as-interval assumption.

1. Numerical Constraints: The language of numerical constraints is that of linear arithmetic inequalities where the threshold terms in T are the variables of the inequalities. A numerical constraint must be reducible to the form (a₁x₁ + ... + aₙxₙ) rel b, where a₁, ..., aₙ, b ∈ ℝ, x₁, ..., xₙ ∈ T, and rel ∈ {≤, ≥, <, >, =}. We denote the set of numerical constraints by NC.

Nonmonotonic Logic 427

2. Logical Constraints: These are definite Horn clauses without function symbols (also called Datalog sentences in the deductive database literature [Ullman 1988]). The predicates of these logical constraints are the interval-predicates in IP as well as the noninterval-predicates in NIP. We denote the set of logical constraints by LC.

The constraints in Example 1 are represented in the language as follows. We extend the example to include another constraint that all rich VPs become presidents of companies.

Example 2:
IP = {bald, old, rich}
NIP = {age, hairs, pres, money, was-VP}
NC = {bald- = 0, 3 ≤ bald+ ≤ 20000,
      old+ = ∞, 50 ≤ old- ≤ 80,
      0.1 ≤ rich- ≤ 1, rich+ = ∞}
     ∪ {P- ≤ P+ | P ∈ IP}
The unit for bald is number of hairs, for old is age in years, and for rich is money in millions of dollars.
LC = {pres(x) ← was-VP(x) ∧ money(x, y) ∧ rich(y),
      bald(z) ← old(y) ∧ age(x, y) ∧ hairs(x, z),
      old(y) ← pres(x) ∧ age(x, y),
      age(Tom, 70), hairs(Tom, 500), was-VP(Tom), money(Tom, 6),
      age(Jim, 75), hairs(Jim, 800),
      age(Sam, 45), hairs(Sam, 650)}

There are other languages that combine quantitative and qualitative constraints. For instance, Williams' qualitative algebra [Williams 1988] expresses operations on reals and signs of reals, but is not concerned with logical constraints. Similarly, [Meiri 1991] and [Kautz & Ladkin 1991] present frameworks for expressing and processing both quantitative and qualitative temporal constraints.
Their languages limit the constraints, whether numerical or logical, to be binary, whereas our language does not. On the other hand, their languages can express disjunctive relations between intervals, which ours cannot. Most closely related to our language are languages for constraint logic programming (CLP) in the style of Lassez et al. [Jaffar & Lassez 1987]. CLP considers general Horn theories, as opposed to our limited Datalog theories. However, CLP does not allow numerical constraints in the head of a clause. In our language the interval-predicates can occur in the head, which, if represented in CLP, would correspond to numerical constraints occurring in the head.

3 Query Language

The purpose of the query language is to enable a user to extract information about the thresholds that is implied by the domain constraints. It is a useful tool for a system designer to find the threshold values allowed by the constraints. The kinds of queries supported are informally described below. Here P₁, ..., Pₙ ∈ T are threshold terms, a₁, ..., aₙ ∈ ℝ, rel₁, ..., relₙ ∈ {<, >, ≤, ≥, =}, and i ∈ {1, ..., n}.

1. Is it necessarily the case that (P₁ rel₁ a₁) ∧ ... ∧ (Pₙ relₙ aₙ)?
2. Is it possibly the case that (P₁ rel₁ a₁) ∧ ... ∧ (Pₙ relₙ aₙ)?
3. What is the minimum value that Pᵢ can take?
4. What is the maximum value that Pᵢ can take?

Many queries may be derived using the above primitives. For example, the query "P(a)?" can be cast as "Is it necessarily the case that (P- ≤ a) ∧ (P+ ≥ a)?". If the answer is yes then P(a) is true; otherwise it is unknown. If the answer to "Is it possibly the case that (P- ≤ a) ∧ (P+ ≥ a)?" is no, then P(a) is false; otherwise it is unknown.

In addition, it is possible to request a specific assignment of values to the thresholds that satisfies the constraints. For Example 2, assigning the value 2000 to the baldness threshold is consistent with the given constraints. We will indicate briefly in Section 4.3 how this assignment is made.
This procedure is particularly useful for a system designer who assigns specific numbers to the thresholds in the design stage of the system.

4 Computational Mechanism for Answering Queries

The final component of our framework is the computational mechanism responsible for answering queries on constraints. A key step in answering queries is preprocessing the constraints by extracting the numerical constraints from the logical constraints and combining them with the given numerical constraints. The preprocessing is a two-step procedure: first, using the predicate-as-interval assumption, the procedure extracts the numerical information on the interval-predicates from the logical constraints LC. This derived information is in the form of disjunctions of numerical constraints. Next, the procedure combines these disjunctive constraints with the given numerical constraints NC. We describe the procedure to extract numerical information from LC in Section 4.1. In Section 4.2 we prove that the procedure is sound and complete and also discuss complexity issues. In Section 4.3 we describe how to combine the numerical information from LC with NC.

4.1 Extracting Numerical Information from Logical Constraints

The algorithm Symb-to-Numeric described in this section takes as input the logical constraints LC and the sets of interval- and noninterval-predicates IP and NIP, and returns a set of numerical constraints quant_LC. This process of conversion from logical to numerical constraints preserves the information about the thresholds of interval-predicates but discards the information on the noninterval-predicates. A formal discussion is deferred to Section 4.2.
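The core of this conversion, applied to ground clauses, can be sketched as follows. The clause encoding and function names are illustrative assumptions of ours, not the paper's code: a ground clause P(a) ← Q(b) becomes, under the predicate-as-interval assumption, (P- ≤ a ≤ P+) ∨ (b < Q-) ∨ (b > Q+).

```python
# Expanded ground clauses (head interval-predicate, head argument, body);
# e.g. bald(500) <- old(70) arises from Tom's facts in Example 2.
S = [
    ("bald", 500, [("old", 70)]),
    ("old",  70,  [("rich", 6)]),
    ("bald", 800, [("old", 75)]),
    ("bald", 650, [("old", 45)]),
]

def convert(head_pred, a, body):
    """Convert a ground clause P(a) <- Q1(b1), ... into the disjunctive
    numerical constraint (P- <= a <= P+) v (b1 < Q1-) v (b1 > Q1+) v ..."""
    disjuncts = [f"({head_pred}- <= {a} <= {head_pred}+)"]
    for q, b in body:
        disjuncts += [f"({b} < {q}-)", f"({b} > {q}+)"]
    return " v ".join(disjuncts)

quant_LC = [convert(*clause) for clause in S]
print(quant_LC[0])  # (bald- <= 500 <= bald+) v (70 < old-) v (70 > old+)
```

Each resulting disjunction says either the head's argument falls inside the head predicate's interval, or some body argument falls outside its predicate's interval.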
428 Goyal

Function Symb-to-Numeric(LC, IP, NIP): quant_LC
  S ← ∅;
  for every clause c ∈ LC such that head(c) ∈ IP do
    S_c ← Expand(c, LC, IP, NIP);
    S ← S ∪ S_c;            /* S has no NIP predicates */
  endfor
  quant_LC ← ∅;
  for every clause c ∈ S do
    quant_LC ← quant_LC ∪ Convert-LC-to-NC(c);
  return(quant_LC)
endfunction

Figure 1: Numerical Information from Logical Constraints

The algorithm Symb-to-Numeric is described in Figure 1. Starting with all those clauses in LC that have interval-predicates at the head, we expand their bodies using other clauses in LC until all noninterval-predicates are eliminated from the body. Expand is very similar to SLD resolution [Lloyd 1987] but with two differences: (1) only noninterval-predicates are expanded; (2) all possible expansions are computed. Thus, each clause in the set S of Figure 1 has only interval-predicates. Using the predicate-as-interval assumption, we convert the resulting clauses to numerical constraints as described by function Convert-LC-to-NC in Figure 2. This function works by fragmenting each clause into subclauses such that each subclause has at most one variable and no two subclauses have the same variable³. For example, the clause P(a) ← Q(x) ∧ R(x) ∧ S(b) is a disjunction of three subclauses: "P(a)", "← Q(x) ∧ R(x)" and "← S(b)". In general, each subclause thus obtained will be one of the six basic types described in Figure 2. Each type of subclause is converted to a numerical constraint by using the predicate-as-interval assumption, and by interpreting the connectives ¬, ∨, ∧ as complement, union and intersection of intervals, respectively.

An application of the algorithm to Example 2 is illuminating:

Example 3: The first step in the procedure is to locate clauses with interval-predicates at the head in LC and expand them until all noninterval-predicates are eliminated. Here there are two such clauses, with old and bald at the head.
On expansion, we obtain the set S:
bald(500) ← old(70)
old(70) ← rich(6)
bald(800) ← old(75)
bald(650) ← old(45)
On applying the function Convert-LC-to-NC, each of these clauses fragments into subclauses of the first two types, "P(a)" and "← P(a)". On conversion we obtain the set quant_LC⁴:
quant_LC = {(bald- ≤ 500 ≤ bald+) ∨ (70 < old-) ∨ (old+ < 70),
            (old- ≤ 70 ≤ old+) ∨ (6 < rich-) ∨ (rich+ < 6),
            (bald- ≤ 800 ≤ bald+) ∨ (75 < old-) ∨ (old+ < 75),
            (bald- ≤ 650 ≤ bald+) ∨ (45 < old-) ∨ (old+ < 45)}

³Note that this is always possible because all interval-predicates are unary.
⁴Note that each of the 4 clauses obtained here will actually split into 2 clauses.

Function Convert-LC-to-NC(lc): nc
  lc_subclauses ← MakeSubclauses(lc);
    /* every subclause of lc has a constant or a single variable */
  nc ← false;
  for every subclause subcl ∈ lc_subclauses do
    Case subcl of:  /* a, b are constants */
      "P(a)":        subcl' ← (P- ≤ a ≤ P+)
      "← P(a)":      subcl' ← (a < P-) ∨ (a > P+)
      "P(x)":        subcl' ← (P- = -∞) ∧ (P+ = +∞)
      "← P(x)":      subcl' ← (P- > P+)
      "← P₁(x), ..., Pₙ(x)":       subcl' ← ⋁_{i,j} (Pᵢ- > Pⱼ+)
      "P(x) ← Q₁(x), ..., Qₙ(x)":  subcl' ← (⋁ᵢ P- ≤ Qᵢ-) ∧ (⋁ᵢ P+ ≥ Qᵢ+)
    nc ← nc ∨ subcl'
  endfor
  return(nc)
endfunction

Figure 2: Conversion from Logical to Linear Arithmetic Constraints

4.2 Formal results on conservation of numerical information

We establish formally that no numerical information is lost in the conversion performed by algorithm Symb-to-Numeric. We begin by defining the models of LC that are faithful to the predicate-as-interval assumption; we call these the standard models. Specifically, in all standard models M = (D, μ) over a domain D, the interpretation function μ will have to map interval-predicates to intervals over the reals. In the following, ℝ denotes the set of real numbers.

Definition 1: Given a set of logical constraints LC, the set of interval-predicates IP, and the set of threshold terms T, a standard model of LC w.r.t.
IP is a model M = (D, μ) such that M ⊨ LC and, for every P ∈ IP with threshold terms P-, P+ ∈ T, it is the case that μ(P-), μ(P+) ∈ ℝ and μ(P) = {x ∈ ℝ | μ(P-) ≤ x ≤ μ(P+)}.

Definition 2: Given LC, IP and T as above, a numerical submodel of LC w.r.t. IP is a model M = (ℝ, μ) such that there is some standard model M' = (D, μ') of LC w.r.t. IP, and μ is the restriction of μ' to the terms in T.

The following theorem establishes that the algorithm Symb-to-Numeric is sound and complete w.r.t. the numerical information (complete proof in [Goyal 1993]).

Theorem 4: (Soundness and Completeness) The class of numerical submodels of LC w.r.t. IP is identical to the class of models of quant_LC.

Proof: (Sketch for completeness) An arbitrary model M of quant_LC is extended to a standard model M' of LC such that its numerical submodel is exactly M. M' is constructed by first building a dependency graph of the predicates in LC and then defining the interpretations of the predicates in the topological order of the graph. The intuition is that when a clause in LC is used to build the interpretation of the predicate in its head from the predicates in its body, the body predicates will already have been interpreted because of the order of interpretation. The equivalence of the numerical submodel of M' and the model M is proved by mathematical induction on the topological order of the predicates. ∎

In the worst case, the space and time complexity of computing quant_LC is exponential in the size of LC. This is not surprising, since in the worst case quant_LC is of exponential size. However, we have identified syntactic restrictions on the constraint language under which such exponential blowup can be avoided. In practice, the performance of the algorithm has been found to be quite acceptable, for the following reasons. First, we observe that the algorithm is exponential only in the size of the non-ground constraints.
Typically, the number of non-ground constraints is small compared to the number of ground literals. Second, this algorithm is invoked only once for all the queries on a given set of constraints; hence, its cost is amortized over all the queries. Thus, the overall performance of the system is not severely affected despite the apparent intractability. A more detailed discussion of the complexity issues may be found in [Goyal 1993].

4.3 Combining with Numerical Constraints

The constraints in the set quant_LC, obtained by converting the logical constraints to numerical constraints, are disjunctive. These constraints must be combined with the set of given numerical constraints NC to answer the queries. In principle, we can convert the set quant_LC to disjunctive normal form (DNF) and add the constraints NC to each disjunct. The disjunction thus obtained is referred to as output_C. However, in practice, we leave quant_LC in its conjunctive normal form to save space, and generate the disjuncts of output_C one by one through backtracking. Furthermore, to make the process more efficient, we first reduce the size of the set quant_LC using the constraints NC. We elaborate on these steps below and also discuss how existing methods can be applied to answer queries on a single disjunct of output_C.

Pruning quant_LC: We have developed a procedure that uses the constraints in NC to reduce the size of the set quant_LC significantly. This procedure, called reduce, uses the upper and lower bounds of all thresholds implied by the constraints in NC to prune quant_LC in two ways. First, if a disjunct of a constraint in quant_LC is already satisfied by the bounds, then that constraint can be deleted from quant_LC. In Example 3, the lower and upper bounds for old- are 50 and 80 respectively, hence (old- > 45) is already satisfied. Second, if a disjunct is inconsistent with the bounds, then that disjunct can be deleted.
For instance, (old- > 82) is inconsistent with the bounds for old-.

The experimental results confirm that the procedure reduce shrinks quant_LC significantly. In Example 3, quant_LC has 8 constraints with 3 disjuncts each, which could give rise to 3⁸ disjuncts (in DNF) in the worst case. Applying procedure reduce eliminates all but 1 disjunct, namely:
output_C = NC ∪ {(old- ≤ 70), (bald+ ≥ 800)}
When the procedure was applied in the medical diagnosis domain, in the first application quant_LC had 12 constraints with 3 disjuncts each, giving rise to 3¹² disjuncts in the worst case. Procedure reduce eliminated all but 2 disjuncts. In a second medical application, quant_LC had 416 constraints with 2 or 3 disjuncts per constraint, which would have given rise to at least 2⁴¹⁶ disjuncts. Procedure reduce eliminated all but 2592 disjuncts.

Generating disjuncts of output_C: Once the set quant_LC has been pruned, queries are answered by generating the remaining disjuncts of output_C one at a time through backtracking. We avoid generating redundant disjuncts in output_C by recognizing the presence of common disjuncts in the constraints of quant_LC. For instance, in the second medical application, only 184 disjuncts had to be generated out of the 2592 that were possible.

In practice, most queries do not require backtracking even over all possible distinct disjuncts that are generated. For instance, a query asking whether a constraint is possibly true only has to find one disjunct on which the constraint is satisfied. Furthermore, even for queries where all disjuncts have to be checked, an approximate answer can be obtained by computing over only a few disjuncts. For instance, a query for the minimum value of a threshold can return the minimum over only a few disjuncts. This approximate answer is still useful, since it supplies a lower bound on the threshold, even though not the tightest lower bound.
Thus, this procedure gives a useful approximate answer any time an answer is required, and the approximation gets closer to optimal as the allowed time increases. Experimental results on answering queries are available in [Goyal 1993].

We can even assign specific values to the thresholds using heuristic criteria. For Example 3, the baldness threshold (bald+) could be set to 10400, which is halfway between its bounds of 800 and 20000 and satisfies all the given constraints. When a large amount of ground data is available, clustering techniques can be utilized to assign a specific value.

Answering for each disjunct: We have discussed above how the set quant_LC is pruned a priori to eliminate redundant disjuncts and how the disjuncts of output_C are generated. Each disjunct thus generated is a set of linear arithmetic constraints. We now discuss how a query is answered on a single disjunct. The queries for maximum and minimum values of thresholds (queries 3 and 4 in Section 3) require the computation of lower and upper bounds of thresholds. Queries checking a constraint for necessity or possibility (queries 1 and 2 in Section 3) require a consistency check on a set of constraints. Thus, any existing method for computing bounds and checking consistency of linear arithmetic constraints can be used. If NC has only simple order relations or bounded differences, we can use an efficient O(n³) procedure (where n is the number of variables) from [Davis 1987] or [Meiri 1991]. Sacks' bounder [Sacks 1990] is applicable but more useful for nonlinear constraints. For more general linear constraints, we have to use a linear programming method, which is still tractable at O(n³·⁵L), where L is the size of the input [Karmarkar 1984]. Lassez's work on the canonical form of generalized linear constraints [Huynh et al.
1990] has potential applications, though the advantage of a canonical form would be offset by the cost of maintaining it, because we backtrack over disjunctive constraints.

5 Conclusions

We have provided a systematic account of the practical approach of representing vague concepts as precise intervals over numbers. Based on the observation that vague concepts and their interval boundaries are constrained by the underlying domain knowledge, we motivated and proposed a framework to reason precisely with vague concepts. The framework is comprised of a constraint language to represent the domain knowledge, a query language to request information about the interval boundaries, and a computational mechanism to answer the queries.

We described particular constraint and query languages and a computational mechanism to answer queries. A key step in answering queries is preprocessing the constraints by extracting the numerical constraints from the logical constraints and combining them with the given numerical constraints. We proved this algorithm to be sound and complete and also discussed complexity issues. Some experimental results of applying the framework to a medical domain were discussed.

The main contribution of our work is in providing a systematic framework for understanding the common though ad hoc approach of representing vague predicates as intervals. This work is particularly applicable to a knowledge base during its development stage, when the vague concepts over numbers need to be defined precisely.

Acknowledgements: We would like to thank Surajit Chaudhuri, Ashish Gupta, Alon Levy, Pandu Nayak, Moshe Tennenholtz, Becky Thomas and the anonymous reviewers.

References

Davis, E. 1987. Constraint Propagation with Interval Labels. Artificial Intelligence 32(3):281-331.
Davis, E. 1990. Representations of Commonsense Knowledge. Morgan Kaufmann Publishers, 19-20.
Goyal, N. 1993. A Framework for Reasoning Precisely with Vague Concepts.
Ph.D. diss. (in preparation), Dept. of Computer Science, Stanford University.
Hayes-Roth, B.; Washington, R.; Hewett, R.; Hewett, M.; and Seiver, A. 1989. Intelligent Monitoring and Control. In Proc. of the Eleventh International Joint Conference on Artificial Intelligence, 43-249.
Huynh, T.; Joskowicz, L.; Lassez, C.; and Lassez, J-L. 1990. Reasoning about Linear Constraints using Parametric Queries. In Proc. of the Tenth FST&TCS, Bangalore, India.
Jaffar, J.; and Lassez, J-L. 1987. Constraint Logic Programming. In Proc. of the 14th ACM Symposium on Principles of Programming Languages, 111-119.
Karmarkar, N. 1984. A New Polynomial-Time Algorithm for Linear Programming. Combinatorica 4:373-395.
Kautz, H.A.; and Ladkin, P.B. 1991. Integrating Metric and Qualitative Temporal Reasoning. In Proc. of the Ninth National Conference on Artificial Intelligence, 241-246.
Lloyd, J.W. 1987. Foundations of Logic Programming, 2nd ed. Springer-Verlag.
Meiri, I. 1991. Combining Qualitative and Quantitative Constraints in Temporal Reasoning. In Proc. of the Ninth National Conference on Artificial Intelligence, 260-267.
Parikh, R. 1983. The Problem of Vague Predicates. In Cohen and Wartofsky (eds.), Language, Logic, and Method. Reidel Publishers, 241-261.
Sacks, E. 1990. Hierarchical Reasoning about Inequalities. In Readings in Qualitative Reasoning about Physical Systems, eds. D.S. Weld and J. de Kleer. Morgan Kaufmann Publishers, 344-350.
Shahar, Y.; Tu, S.W.; and Musen, M.A. 1992. Knowledge Acquisition for Temporal-Abstraction Mechanisms. Knowledge Acquisition 4:217-236.
Ullman, J.D. 1988. Principles of Database and Knowledge-Base Systems, Vol. 1. Computer Science Press.
Williams, B.C. 1988. MINIMA: A Symbolic Approach to Qualitative Algebraic Reasoning. In Proc. of the Seventh National Conference on Artificial Intelligence, 264-269.
Zadeh, L.A. 1983. Commonsense and Fuzzy Logic. In The Knowledge Frontier: Essays in the Representation of Knowledge, eds. N. Cercone and G. McCalla.
New York: Springer-Verlag, 103-136.
Restricted Monotonicity

Vladimir Lifschitz*
Department of Computer Sciences and Department of Philosophy
University of Texas at Austin
Austin, TX 78712

Abstract

A knowledge representation problem can sometimes be viewed as an element of a family of problems, with parameters corresponding to possible assumptions about the domain under consideration. When additional assumptions are made, the class of domains that are being described becomes smaller, so that the class of conclusions that are true in all the domains becomes larger. As a result, a satisfactory solution to a parametric knowledge representation problem on the basis of some nonmonotonic formalism can be expected to have a certain formal property, which we call restricted monotonicity. We argue that it is important to recognize parametric knowledge representation problems and to verify restricted monotonicity for their proposed solutions.

Introduction

This paper is about the methodology of representing knowledge in nonmonotonic formalisms. A knowledge representation problem can sometimes be viewed as an element of a family of problems, with parameters corresponding to possible assumptions about the domain under consideration. When additional assumptions are made, the class of domains that are being described becomes smaller, so that the class of conclusions that are true in all the domains becomes larger. As a result, a satisfactory solution to a parametric knowledge representation problem on the basis of some nonmonotonic formalism can be expected to have a certain formal property, which we call restricted monotonicity. The idea of restricted monotonicity is first illustrated here by examples. Then the precise definition of this property is given, and methods for proving it are discussed. Finally, we apply the concept of restricted monotonicity to the analysis of some of the recent work on representing action and change in nonmonotonic formalisms.

Examples
*This work was partially supported by National Science Foundation under grant IRI-9101078.

Here is a simple knowledge representation problem involving a default: birds normally fly, penguins are birds, and penguins do not fly. We will compare two solutions, one based on default logic [Reiter, 1980], the other on circumscription [McCarthy, 1986]. The first formalization is the default theory whose postulates are the axioms

∀x(Penguin(x) ⊃ Bird(x))    (1)

and

∀x(Penguin(x) ⊃ ¬Flies(x)),    (2)

and the default

Bird(x) : Flies(x) / Flies(x).    (3)

The second is the circumscriptive theory with the axioms (1), (2) and

∀x(Bird(x) ∧ ¬Ab(x) ⊃ Flies(x)),    (4)

in which Ab is circumscribed and Flies is varied.

There is an important difference between the two formalizations that becomes obvious when we apply the default birds normally fly to specific objects. Although the postulates of the two theories do not use any object constants, let us assume that their languages include an object constant, say, Joe. Since Joe is not postulated to be a bird, the formula Flies(Joe) is undecidable both in the default logic formalization T₁ and in the circumscriptive formalization T₂. We are interested in the theories obtained from T₁ and T₂ by adding some of the possible assumptions

Bird(Joe), ¬Bird(Joe), Penguin(Joe), ¬Penguin(Joe)    (5)

to their axiom sets. For some subsets p of (5), adding p to the axiom set of T₁ will have the same effect on the status of the formula Flies(Joe) as adding p to T₂. If, for instance, p is {Bird(Joe), ¬Penguin(Joe)}, then Flies(Joe) is provable both in T₁ ∪ p and in T₂ ∪ p. If, on the other hand, p is {Penguin(Joe)}, then this formula is refutable in both theories. The situation will be different, however, if we take p to be {Bird(Joe)}. Now Joe is known to be a bird, but it is not known whether he is a penguin.
The default logic formalization sanctions the conclusion that Joe flies: T₁ ∪ {Bird(Joe)} entails Flies(Joe). In the corresponding circumscriptive formalization, T₂ ∪ {Bird(Joe)}, the formula Flies(Joe) remains undecidable.

The theories T₁ and T₂ can be viewed as somewhat different interpretations of the given set of assumptions about the ability of birds to fly. Whether T₁ is considered too strong, or T₂ too weak, depends on which of the two identically worded, but slightly different, knowledge representation problems we had in mind in the first place.

The circumscriptive solution T₂ can be viewed as reasonable, and the default logic solution T₁ as excessively strong, if the absence of both Penguin(Joe) and ¬Penguin(Joe) among the axioms is supposed to indicate our willingness to take into consideration both the domains in which Joe is a penguin and the domains in which he isn't, and to sanction only the conclusions that are true in domains of both kinds. We will express this knowledge representation convention by saying that these two literals function in this example as "parameters." A parameter is an additional postulate whose presence in the axiom set is supposed to make the set of domains under consideration smaller, and thus to increase the set of conclusions sanctioned by the formalization.

The knowledge representation problem stated at the beginning of this section would be described more precisely if we specified that the ground literals containing Bird or Penguin should be treated as parameters. This statement implies that a formalization T will be considered adequate only if it has the following property: for any subsets p, q of (5) such that p ⊆ q, each consequence of T ∪ p is a consequence of T ∪ q. In particular, if T is adequate, then all theorems of T ∪ {Bird(Joe)} will be among the theorems of T ∪ {Bird(Joe), Penguin(Joe)}, so that Flies(Joe) will not be one of them.
This property is an example of what we call "restricted monotonicity." It is satisfied by the circumscriptive formalization T₂, but not by the default theory T₁. We will see that, in general, there is no correlation between restricted monotonicity and the choice of a nonmonotonic formalism; what matters is which postulates are included in the formalization and, in the case of circumscription, what circumscription policy is applied. For instance, we will give a default logic formalization of the same example that satisfies the restricted monotonicity condition.

There is nothing wrong, of course, with a different interpretation of the flying birds problem: if the axioms do not tell us whether Joe is a penguin, we may treat the domains in which this is the case as "secondary," and be prepared to jump to the conclusion that we are not in such a domain, at least when we decide whether Joe can fly. But it is important to be clear about how the problem is interpreted before discussing the adequacy of a particular solution.

We argue in this paper that many knowledge representation problems can be described as parametric, and that, when we deal with such a problem, it is important to recognize this fact and to verify restricted monotonicity for its proposed solutions.

A class of examples that is discussed here in some detail is given by initial conditions in temporal projection problems. In various versions of the "Yale shooting" story [Hanks and McDermott, 1987], the initial situation is described by including some of the formulas

Holds(Loaded, S₀), ¬Holds(Loaded, S₀), Holds(Alive, S₀), ¬Holds(Alive, S₀)    (6)

in the axiom set. We can think of these formulas as parameters. Every consistent subset of (6) represents an instance of a "parametric problem"; the larger the subset is, the more conclusions about the values of fluents in future situations can be justified. This is again an example of restricted monotonicity.
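For the one-object flying-birds example, the behavior of the circumscriptive theory T₂ can be checked by brute force: enumerate all interpretations over the atoms Bird(Joe), Penguin(Joe), Flies(Joe), Ab(Joe), keep the Ab-minimal ones, and see what follows. The encoding below (truth-table enumeration, helper names, and the minimality comparison) is our own illustrative sketch, not a formalization from the paper.

```python
from itertools import product

# Axioms of T2 restricted to Joe, as predicates over (Bird, Penguin, Flies, Ab):
AXIOMS = [
    lambda b, p, f, a: (not p) or b,              # Penguin -> Bird
    lambda b, p, f, a: (not p) or (not f),        # Penguin -> ~Flies
    lambda b, p, f, a: (not (b and not a)) or f,  # Bird & ~Ab -> Flies
]

def minimal_models(extra):
    """Ab-minimal models of AXIOMS + extra.  Ab is circumscribed and Flies
    varies, so models are compared only when they agree on Bird and Penguin."""
    models = [m for m in product([False, True], repeat=4)
              if all(ax(*m) for ax in AXIOMS) and all(e(*m) for e in extra)]
    def smaller(m1, m2):  # m1 strictly below m2 in the circumscription order
        return m1[0] == m2[0] and m1[1] == m2[1] and (not m1[3]) and m2[3]
    return [m for m in models if not any(smaller(n, m) for n in models)]

def flies_status(extra):
    values = {m[2] for m in minimal_models(extra)}  # Flies in each minimal model
    return "yes" if values == {True} else "no" if values == {False} else "unknown"

bird = lambda b, p, f, a: b
penguin = lambda b, p, f, a: p
print(flies_status([bird]))                            # unknown
print(flies_status([bird, penguin]))                   # no
print(flies_status([bird, lambda b, p, f, a: not p]))  # yes
```

Adding the parameter Penguin(Joe) or ¬Penguin(Joe) only turns "unknown" into a definite answer; nothing already concluded is retracted, which is the restricted monotonicity behavior described above.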
Definition

We would like to give a definition of restricted monotonicity applicable to many nonmonotonic formalisms. In order to make it general, we will first introduce the notion of a "declarative formalism."

A declarative formalism is defined by a set S of symbolic expressions called sentences, a set P of symbolic expressions called postulates, and a map Cn from sets of postulates to sets of sentences. A set of postulates is a theory. A sentence A is a consequence of a theory T if A ∈ Cn(T). The formalism is monotonic if Cn is a monotone operator, that is, if Cn(T) ⊆ Cn(T') whenever T ⊆ T'.

Here are some examples. Any first-order or higher-order language of classical logic can be viewed as a declarative formalism. Its postulates are identical to its sentences: they are arbitrary closed formulas of the language; Cn(T) is the set of sentences that are true in all models of T. The use of circumscription amounts to defining Cn(T) to be the set of sentences that are true in the models of T which are minimal relative to some circumscription policy. In the case of default theories, a sentence is a closed formula, and a postulate is either a closed formula or a default. (Note that here P differs from S.) For a default theory T, Cn(T) is the intersection of the extensions of T.

When a declarative formalism (S, P, Cn) is used to solve a parametric knowledge representation problem, a subset S₀ of its sentences is designated as the set of assertions, and a subset P₀ of its postulates is designated as the set of parameters. The idea of a parameter was discussed above: a parameter is an additional postulate whose presence in a theory is supposed to make the class of domains under consideration smaller. An assertion is a sentence that can be interpreted as true or false in a domain described by the theory.
The need to distinguish between assertions and arbitrary sentences arises when the language contains auxiliary symbols, such as Ab, that have no "observable" meaning in the domains under consideration. We may wish to specify, for instance, that a sentence is an assertion if it does not contain Ab. Some formalizations do not use auxiliary symbols; in such cases, we view every sentence as an assertion. Let (S, P, Cn) be a declarative formalism. Let a subset S0 of S be designated as the set of assertions, and a subset P0 of P as the set of parameters. We say that a theory T satisfies the restricted monotonicity condition if, for any sets p, q ⊆ P0,

p ⊆ q  implies  Cn(T ∪ p) ∩ S0 ⊆ Cn(T ∪ q) ∩ S0.   (7)

In words: If more parameters are added to T as additional postulates, no assertions will be retracted. Note that (7) is trivially true if Cn is a monotone operator. Consequently, restricted monotonicity becomes an issue only when a nonmonotonic formalism is used. Condition (7) is weaker than the monotonicity of Cn in two ways. First, it applies only to theories of the form T ∪ p for the subsets p of P0, rather than to arbitrary theories. Second, it refers not to the set of all consequences of a theory, but only to the assertions that belong to it.

Methods for Proving Restricted Monotonicity

The mathematical apparatus required for proving restricted monotonicity will vary depending on the declarative formalism on which the solution is based. In this section we discuss some of the methods that can be used for verifying the restricted monotonicity condition in circumscriptive theories and in extended logic programs.

Circumscriptive Theories

For simplicity, we restrict attention to finite circumscriptive theories without prioritization. Let S be the set of all sentences of some first-order language, R a list of distinct predicate constants of that language, and Z a list of distinct function and/or predicate constants disjoint from R.
By Cn_{R;Z} we denote the consequence operator corresponding to the circumscription which circumscribes R and varies Z. This means that, for any finite subset T of S, Cn_{R;Z}(T) is the set of sentences entailed by CIRC[⋀_{A∈T} A; R; Z].

Proposition 1. Let T be a finite theory in the formalism (S, S, Cn_{R;Z}). If the set of parameters is finite, and the parameters do not contain symbols from R or Z, then T satisfies the restricted monotonicity condition.

Proof. For any set p of parameters, Cn_{R;Z}(T ∪ p) is the set of sentences entailed by

CIRC[⋀_{A∈T} A ∧ ⋀_{A∈p} A; R; Z].

Since ⋀_{A∈p} A does not contain symbols from R or Z, this formula is equivalent to

CIRC[⋀_{A∈T} A; R; Z] ∧ ⋀_{A∈p} A.

If p ⊆ q, then this conjunction is entailed by the corresponding formula for q:

CIRC[⋀_{A∈T} A; R; Z] ∧ ⋀_{A∈q} A.

It follows that p ⊆ q implies Cn_{R;Z}(T ∪ p) ⊆ Cn_{R;Z}(T ∪ q), and consequently Cn_{R;Z}(T ∪ p) ∩ S0 ⊆ Cn_{R;Z}(T ∪ q) ∩ S0.

In the case of the circumscriptive theory T2 defined above, R is Ab, Z is Flies, P0 is (5), and S0 is the set of sentences not containing Ab. By Proposition 1, the restricted monotonicity of T2 follows from the fact that the parameters (5) contain neither Ab nor Flies. Note that the statement of Proposition 1 imposes a restriction on the set of parameters, but not on the set of assertions. It follows, in particular, that T2 would have satisfied the restricted monotonicity condition even if all sentences were considered assertions. We will see below that, for some other solutions to the same knowledge representation problem, the fact that Ab is not allowed in assertions is crucial for the verification of restricted monotonicity. Consider, on the other hand, the theory T1, which differs from T2 in that the predicates Bird and Penguin are allowed to vary, along with Flies. This theory is stronger than T2, and it allows us to justify, among others, the conclusion that there are no penguins in the world: ∀x ¬Penguin(x). This result may be viewed as undesirable.
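The contrast between T2 and T1 can be checked by brute force in the propositional case. The sketch below is ours: it treats Bird(Joe), Penguin(Joe), Flies(Joe) and Ab(Joe) as four propositional atoms, reconstructs the flying-birds axioms from the discussion (the exact theory and the parameter set (5) are outside this excerpt, so the encoding is an assumption), circumscribes ab while fixing the non-varied symbols, and tests condition (7) over all pairs of parameter subsets.

```python
from itertools import product, combinations

ATOMS = ("bird", "penguin", "flies", "ab")  # propositional stand-ins for Joe

def axioms(m):
    # Reconstructed axioms (an assumption): Bird & ~Ab -> Flies,
    # Penguin -> Bird, Penguin -> ~Flies.
    return ((m["flies"] or not (m["bird"] and not m["ab"]))
            and (m["bird"] or not m["penguin"])
            and (not m["flies"] or not m["penguin"]))

def minimal_models(facts, varied):
    """Models of axioms+facts minimal in 'ab'; symbols outside 'varied'
    and other than 'ab' are held fixed during the minimization."""
    ms = [dict(zip(ATOMS, v)) for v in product([False, True], repeat=4)]
    ms = [m for m in ms if axioms(m) and all(m[a] for a in facts)]
    fixed = [a for a in ATOMS if a not in varied and a != "ab"]
    return [m for m in ms
            if not any(all(n[a] == m[a] for a in fixed) and n["ab"] < m["ab"]
                       for n in ms)]

def assertions(facts, varied):
    """Ab-free literals true in all minimal models."""
    ms = minimal_models(facts, varied)
    out = set()
    for a in ("bird", "penguin", "flies"):
        if all(m[a] for m in ms):
            out.add(a)
        if all(not m[a] for m in ms):
            out.add("not " + a)
    return out

def restricted_monotone(varied, params=("bird", "penguin")):
    """Condition (7), checked over all pairs p <= q of parameter subsets."""
    subsets = [set(c) for r in range(len(params) + 1)
               for c in combinations(params, r)]
    return all(assertions(p, varied) <= assertions(q, varied)
               for p in subsets for q in subsets if p <= q)

# T2: only Flies varies -- restricted monotonicity holds.
print(restricted_monotone(varied={"flies"}))                     # True
# T1: Bird and Penguin vary too -- "not penguin" is concluded, then lost.
print(restricted_monotone(varied={"flies", "bird", "penguin"}))  # False
```

The T1 failure surfaces exactly as in the text: with no parameters the minimization yields "not penguin", which is retracted once the parameter Penguin(Joe) is added.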
Peculiarities of this kind in circumscriptive theories are well known ([McCarthy, 1986], Section 5). In fact, T1 has a more fundamental defect: It does not satisfy the restricted monotonicity condition. Indeed, ¬Penguin(Joe) is a consequence of T1 which is lost when Penguin(Joe) is added to its axiom set. Proposition 1 does not apply here, because the predicates Bird and Penguin, varied in T1, occur in the parameters.

434 Lifschitz

Extended Logic Programs

According to [Gelfond and Lifschitz, 1991], an extended logic program is a set of rules of the form

L0 ← L1, ..., Lm, not Lm+1, ..., not Ln,   (8)

where each Li is a literal, that is, an atom possibly preceded by ¬. ("General" logic programs are, syntactically, the special case when classical negation ¬ is not used.) The rule (8) has the same meaning as the default

L1 ∧ ... ∧ Lm : L̄m+1, ..., L̄n / L0,   (9)

where L̄ stands for the literal complementary to L, so that the language of extended programs can be viewed as a subsystem of default logic. A ground literal L is a consequence of an extended program Π if it belongs to all extensions of Π. Extended logic programs can be viewed as theories in a declarative formalism, if ground literals are taken to be sentences, and rules of the form (8) are considered postulates. The knowledge representation problem described at the beginning of the paper can be solved in the language of extended programs as follows:

Flies(x) ← Bird(x), not Ab(x),
Bird(x) ← Penguin(x),
¬Flies(x) ← Penguin(x),   (10)
Ab(x) ← not ¬Penguin(x).

Note the last rule, which plays an important part in this program. Without it, adding the fact Penguin(Joe) to the program would have made it inconsistent. Note also the use of the combination "not ¬" in that rule.
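The behavior of program (10) can be checked mechanically. The sketch below (ours; literal names abbreviated, a single constant Joe) computes the answer sets of the ground program by guess-and-check over the reduct, and confirms the claims made about (10): with only Bird(Joe) given, Flies(Joe) is neither derived nor refuted, while adding Penguin(Joe) yields ¬Flies(Joe).

```python
# Ground instances of program (10) at the single constant Joe.
# A literal is a string; classical negation is a leading "-".
# A rule is (head, positive_body, naf_body):  head <- pos, not naf.
RULES = [
    ("Flies",  ("Bird",),    ("Ab",)),
    ("Bird",   ("Penguin",), ()),
    ("-Flies", ("Penguin",), ()),
    ("Ab",     (),           ("-Penguin",)),
]
LITS = ["Flies", "-Flies", "Bird", "-Bird", "Penguin", "-Penguin", "Ab", "-Ab"]

def closure(definite_rules):
    """Least set of literals closed under a set of definite rules."""
    s, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if head not in s and all(b in s for b in pos):
                s.add(head)
                changed = True
    return s

def answer_sets(rules):
    """All consistent answer sets: a candidate set is an answer set iff
    it is the least set closed under its own reduct."""
    found = []
    for bits in range(2 ** len(LITS)):
        cand = {l for i, l in enumerate(LITS) if bits >> i & 1}
        if any(a in cand and "-" + a in cand for a in LITS if a[0] != "-"):
            continue  # classically inconsistent candidate
        reduct = [(h, p) for h, p, naf in rules
                  if not any(l in cand for l in naf)]
        if closure(reduct) == cand:
            found.append(cand)
    return found

def consequences(rules):
    """Literals belonging to every answer set (extension)."""
    sets_ = answer_sets(rules)
    return set.intersection(*sets_) if sets_ else set(LITS)

print(sorted(consequences(RULES + [("Bird", (), ())])))
# ['Ab', 'Bird']  -- Flies(Joe) remains undecided
print(sorted(consequences(RULES + [("Penguin", (), ())])))
# ['-Flies', 'Ab', 'Bird', 'Penguin']
```

Exponential enumeration over candidate literal sets is of course only viable for a toy signature like this one; it is meant to make the fixpoint condition visible, not to be efficient.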
The simpler rule

Ab(x) ← Penguin(x)   (11)

would have canceled the applicability of the first rule of the program to x only when x is known to be a penguin; with "not ¬" inserted in front of Penguin(x), this is accomplished for every x that is not known to satisfy ¬Penguin(x). (Compare this with the discussion of the cancellation rule for Noninertial in Section 4 of [Gelfond and Lifschitz, 1992].) In particular, even with the fact Bird(Joe) added to (10), the formula Flies(Joe) remains undecidable. We see that (10) is similar in this respect to the circumscriptive formalization T2, rather than to the default theory T1. Written as defaults, the rules (10) are:

Bird(x) : ¬Ab(x) / Flies(x),
Penguin(x) / Bird(x),
Penguin(x) / ¬Flies(x),   (12)
: Penguin(x) / Ab(x).

The first of these defaults is reminiscent of the approach to the use of default logic advocated by Morris [1988]. The theorem about restricted monotonicity in extended logic programs stated below is based on the notion of a "signing." This notion was originally defined for general logic programs [Kunen, 1989], and then extended by Turner [1993] to programs that may contain classical negation. The absolute value of a literal L (symbolically, |L|) is L if L is positive, and L̄ otherwise. A signing for an extended logic program Π without variables is a set X of ground atoms such that (i) for any rule (8) from Π, either

|L0|, ..., |Lm| ∈ X,  |Lm+1|, ..., |Ln| ∉ X

or

|L0|, ..., |Lm| ∉ X,  |Lm+1|, ..., |Ln| ∈ X;

(ii) for any atom A ∈ X, ¬A does not appear in Π. It is easy to see, for example, that the set of ground instances of Ab(x) is a signing for the set of ground instances of the rules (10). The following lemma is a special case of Theorem 1 from [Turner, 1993].

Lemma. Let Π1 be an extended program without variables, and let X be a signing for Π1. Let Π2 be a program obtained from Π1 by dropping some of its rules (8) such that |L0| ∉ X.
If a ground literal L is a consequence of Π2 and |L| ∉ X, then L is a consequence of Π1 also.

Proposition 2. Let Π be an extended logic program, and let X be a signing for the set of ground instances of the rules of Π. If all parameters and assertions are ground literals whose absolute values do not belong to X, then Π satisfies the restricted monotonicity condition.

Proof. Since a program has the same consequences as the set of all ground instances of its rules, we can assume, without loss of generality, that the rules of Π do not contain variables. Let p and q be sets of parameters such that p ⊆ q. By applying the lemma to Π ∪ q as Π1, and Π ∪ p as Π2, we conclude that, for any ground literal L such that |L| ∉ X, if L ∈ Cn(Π ∪ p) then L ∈ Cn(Π ∪ q). It follows that Cn(Π ∪ p) ∩ S0 ⊆ Cn(Π ∪ q) ∩ S0.

Proposition 2 implies, for instance, that (10) satisfies the restricted monotonicity condition.

Restricted Monotonicity in Theories of Action

As observed above, temporal projection problems can be thought of as parametric, with initial conditions as parameters. It is interesting to look from this perspective at the existing approaches to describing actions in nonmonotonic formalisms and to see how successful they are in achieving restricted monotonicity. (For the methods based on stating frame axioms explicitly and then applying classical logic, the problem does not arise, because any theory based on a monotonic logic satisfies the restricted monotonicity condition.)

Minimizing Change

The first attempt to solve the frame problem using circumscription ([McCarthy, 1986], Section 9) was shown by Hanks and McDermott [1987] to lead in some cases to "overweak disjunctions." The analysis of McCarthy's method from the point of view of restricted monotonicity shows that it has also another flaw.
The following key observation was made by Fangzhen Lin in connection with the temporal minimization method (personal communication, October 31, 1992). Let A be an action whose effect is to make a propositional fluent F false if it is currently true. The initial value of F is not given. Minimizing change will lead to the conclusion that F was initially false, because in this case nothing has to change as A is executed. This undesirable conclusion presents a difficulty that is perhaps even more fundamental than the one uncovered by Hanks and McDermott. Minimizing change may lead not only to conclusions that are too weak; sometimes, its results are much too strong. Since the only action considered in this example is performed in the initial situation, it does not matter whether the minimization criterion is simple or temporal, as in [Kautz, 1986], [Lifschitz, 1986] and [Shoham, 1986]. We will describe Lin's example formally and show that it can be viewed as a violation of restricted monotonicity. Instead of the situation calculus language, we will use the simpler syntax of a theory of a single action ([Lifschitz, 1990], Section 2). Theories of this kind include situation variables and fluent variables, but they do not have variables for actions. The function Result is replaced by two situation constants: Si for the initial situation, in which a certain fixed action is executed, and S' for the result of the action. If the only fluent constant in the language is F, then the possible initial conditions are

Holds(F, Si), ¬Holds(F, Si).   (13)

The assumption that the action in question makes F false if executed when F is true is expressed by the formula

Holds(F, Si) ⊃ ¬Holds(F, S').   (14)

Minimizing change, in this simplified language, is expressed by postulating the "commonsense law of inertia" in the form

¬Ab(f) ⊃ (Holds(f, S') ≡ Holds(f, Si)),   (15)

and circumscribing Ab.
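Lin's observation can be reproduced by brute force over this two-situation signature. The sketch below is ours: it uses propositional stand-ins for Holds(F, Si), Holds(F, S') and Ab(F), and minimizes ab globally, which is adequate here because Holds is varied during the circumscription.

```python
from itertools import product

KEYS = ("f_si", "f_s1", "ab")   # Holds(F,Si), Holds(F,S'), Ab(F)

def is_model(m):
    effect = (not m["f_si"]) or (not m["f_s1"])     # axiom (14)
    inertia = m["ab"] or (m["f_s1"] == m["f_si"])   # inertia law (15)
    return effect and inertia

def minimal_models(extra_facts=()):
    """Circumscribe ab with Holds varied: keep the models with least ab."""
    ms = [dict(zip(KEYS, v)) for v in product([False, True], repeat=3)]
    ms = [m for m in ms if is_model(m) and all(m[k] for k in extra_facts)]
    least = min(m["ab"] for m in ms)
    return [m for m in ms if m["ab"] == least]

# Without initial conditions, minimization concludes ~Holds(F,Si) ...
print(all(not m["f_si"] for m in minimal_models()))        # True
# ... and that conclusion is lost when Holds(F,Si) is added:
print(all(m["f_si"] for m in minimal_models(("f_si",))))   # True
```

The only change-free model has F false in both situations, so the minimization decides the supposedly open initial value of F, exactly the violation of restricted monotonicity described next.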
Let T be the circumscriptive theory with axioms (14) and (15), in which Ab is circumscribed and Holds varied. Take the formulas (13) to be parameters, and all closed formulas not containing Ab to be assertions. Lin's observation shows that T does not satisfy the restricted monotonicity condition. Indeed, ¬Holds(F, Si) is a consequence of T, but not a consequence of T ∪ {Holds(F, Si)}. Mathematically, the lack of restricted monotonicity in this example is not surprising: The predicate Holds, varied in T, occurs in parameters, so that Proposition 1 does not apply.

Other Approaches to the Frame Problem

Two other ways to apply circumscription to the frame problem are proposed in [Lifschitz, 1987] and [Baker, 1991]. Unlike McCarthy's original proposal and the temporal minimization approach, these methods have reasonable restricted monotonicity properties, although this fact does not follow from Proposition 1. A restricted monotonicity theorem for Baker-style formalizations can be proved using Theorem 3 from [Kartha, 1993]. The logic programming method of [Gelfond and Lifschitz, 1992] and [Baral and Gelfond, 1993] builds on the ideas of [Morris, 1988], [Eshghi and Kowalski, 1989], [Evans, 1989] and [Apt and Bezem, 1990]. A restricted monotonicity theorem for this method can be derived from Proposition 2. Sandewall [1989] proposed to apply a nonmonotonic formalism to a set of axioms that does not include initial conditions ("observations") and to get first "the set of all possible developments in the world regardless of any observations," and then "to take that whole set and 'filter' it with the given observations." A mechanism of this kind may achieve restricted monotonicity by removing the initial conditions from the scope of the nonmonotonic consequence operator. Some of the ideas of [Lin and Shoham, 1991] seem to be in the same group.
High-Level Languages for Describing Actions

The "high-level" language A [Gelfond and Lifschitz, 1992], designed specifically for describing action and change, has propositions of two kinds. "Value propositions" specify the values of fluents in particular situations. "Effect propositions" are general statements about the effects of actions. A "domain description" is a set of propositions. The semantics of domain descriptions is defined in terms of "models." A model of a domain description D consists of two components: One specifies the "initial state" of the system, and the other is a "transition function," describing how states are affected by performing actions. The effect propositions from D determine what can be used as the transition function in a model of D. The value propositions limit possible choices of the initial state. Details of the syntax and semantics of A can be found in [Gelfond and Lifschitz, 1992]. A value proposition is a consequence of a domain description D if it is true in all models of D. This definition allows us to treat the language A as a declarative formalism, with value propositions as sentences, and both value propositions and effect propositions as postulates. This formalism is nonmonotonic. Indeed, adding an effect proposition to a domain description D, generally, changes the set of models of D in a nonmonotonic way, so that the set of consequences of D changes nonmonotonically also. However, adding value propositions to D merely imposes additional constraints on the choice of the initial state in a model, so that it can only make the set of models smaller, and the set of consequences larger. If we agree to identify both parameters and assertions with value propositions, then this fact can be expressed as follows:

Proposition 3. Every domain description in A satisfies the restricted monotonicity condition.

Since initial conditions in a temporal projection problem are represented in A by value propositions, we conclude that the problem of restricted monotonicity for temporal projection is resolved in A in a satisfactory way. The extensions of A introduced in [Baral and Gelfond, 1993] and [Lifschitz, 1993] have similar restricted monotonicity properties.

Acknowledgements

I have benefitted from discussing the ideas presented here with Robert Causey, Michael Gelfond, G. N. Kartha, Fangzhen Lin, Norman McCain, Luis Pereira, Hudson Turner and Thomas Woo. My special thanks go to Fangzhen Lin for permission to include his unpublished counterexample.

References

Apt, Krzysztof and Bezem, Marc 1990. Acyclic programs. In Warren, David and Szeredi, Peter, editors 1990, Logic Programming: Proc. of the Seventh Int'l Conf. 617-633.
Baker, Andrew 1991. Nonmonotonic reasoning in the framework of situation calculus. Artificial Intelligence 49:5-23.
Baral, Chitta and Gelfond, Michael 1993. Representing concurrent actions in extended logic programming. In Proc. of IJCAI-93. To appear.
Eshghi, Kave and Kowalski, Robert 1989. Abduction compared with negation as failure. In Levi, Giorgio and Martelli, Maurizio, editors 1989, Logic Programming: Proc. of the Sixth Int'l Conf. 234-255.
Evans, Chris 1989. Negation-as-failure as an approach to the Hanks and McDermott problem. In Proc. of the Second Int'l Symp. on Artificial Intelligence.
Gelfond, Michael and Lifschitz, Vladimir 1991. Classical negation in logic programs and disjunctive databases. New Generation Computing 9:365-385.
Gelfond, Michael and Lifschitz, Vladimir 1992. Representing actions in extended logic programming. In Apt, Krzysztof, editor 1992, Proc. Joint Int'l Conf. and Symp. on Logic Programming. 559-573.
Hanks, Steve and McDermott, Drew 1987. Nonmonotonic logic and temporal projection. Artificial Intelligence 33(3):379-412.
Kartha, G. N. 1993. Soundness and completeness theorems for three formalizations of action. In Proc. of IJCAI-93. To appear.
Kautz, Henry 1986. The logic of persistence. In Proc. of AAAI-86. 401-405.
Kunen, Kenneth 1989. Signed data dependencies in logic programs. Journal of Logic Programming 7(3):231-245.
Lifschitz, Vladimir 1986. Pointwise circumscription: Preliminary report. In Proc. of AAAI-86. 406-410.
Lifschitz, Vladimir 1987. Formal theories of action (preliminary report). In Proc. of IJCAI-87. 966-972.
Lifschitz, Vladimir 1990. Frames in the space of situations. Artificial Intelligence 46:365-376.
Lifschitz, Vladimir 1993. A language for describing actions. In Working Papers of the Second Symposium on Logical Formalizations of Commonsense Reasoning.
Lin, Fangzhen and Shoham, Yoav 1991. Provably correct theories of action (preliminary report). In Proc. of AAAI-91. 349-354.
McCarthy, John 1986. Applications of circumscription to formalizing common sense knowledge. Artificial Intelligence 26(3):89-116. Reproduced in [McCarthy, 1990].
McCarthy, John 1990. Formalizing common sense: papers by John McCarthy. Ablex, Norwood, NJ.
Morris, Paul 1988. The anomalous extension problem in default reasoning. Artificial Intelligence 35(3):383-399.
Reiter, Raymond 1980. A logic for default reasoning. Artificial Intelligence 13(1,2):81-132.
Sandewall, Erik 1989. Combining logic and differential equations for describing real-world systems. In Brachman, Ronald; Levesque, Hector; and Reiter, Raymond, editors 1989, Proc. of the First Int'l Conf. on Principles of Knowledge Representation and Reasoning. 412-420.
Shoham, Yoav 1986. Chronological ignorance: Time, nonmonotonicity, necessity and causal theories. In Proc. of AAAI-86. 389-393.
Turner, Hudson 1993. A monotonicity theorem for extended logic programs. In Proc. of the Tenth Int'l Conference on Logic Programming. To appear.

| 1993 | 61 |
1,389 | Subnormal modal logics

Grigori Schwarz
Robotics Laboratory, Computer Science Department
Stanford University
Stanford, CA 94305-4110

Miroslaw Truszczynski
Department of Computer Science
University of Kentucky
Lexington, KY 40506

Abstract

Several widely accepted modal nonmonotonic logics for reasoning about knowledge and beliefs of rational agents with introspection powers are based on strong modal logics such as KD45, S4.4, S4F and S5. In this paper we argue that weak modal logics, without even the axiom K and, therefore, below the range of normal modal logics, also give rise to useful nonmonotonic systems. We study two such logics: the logic N, containing propositional calculus and necessitation but no axiom schemata for manipulating the modality, and the logic NT, the extension of N by the schema T. For the nonmonotonic logics N and NT we develop minimal model semantics. We use it to show that the nonmonotonic logics N and NT are at least as expressive as autoepistemic logic, reflexive autoepistemic logic and default logic. In fact, each can be regarded as a common generalization of these classic nonmonotonic systems. We also show that the nonmonotonic logics N and NT have the property of being conservative with respect to adding new definitions, and prove that computationally they are equivalent to autoepistemic and default logics.

Introduction

Several nonmonotonic logics have been proposed as formalisms for common-sense reasoning [Reiter, 1980; McCarthy, 1980; McDermott and Doyle, 1980]. Some of the most interesting and powerful logics (default logic and autoepistemic logic among them) can be obtained from the approach originated by McDermott and Doyle [1980] and further refined by McDermott [1982]. The idea is to introduce a modality L which is read as "is known", "is believed", or "is provable", and to define a set T of consequences of a given theory in such a way that if a sentence ψ does not belong to T, then the sentence ¬Lψ is in T.

Formally, let I be a set of sentences representing the initial knowledge of an agent, and let S be a monotonic logic. A set T is called an S-expansion of I, if

T = {φ : I ∪ {¬Lψ : ψ ∉ T} ⊢_S φ},   (1)

where ⊢_S denotes derivability in S. In [McDermott and Doyle, 1980] expansions were defined and investigated with the classical propositional logic in place of S. In this case the notion of expansion has counterintuitive properties. Expansions of I are meant to serve as candidates for a knowledge or belief set based on I. However, a consistent expansion (in the sense of [McDermott and Doyle, 1980]) may contain both ψ and ¬Lψ, contrary to the intended interpretation of L. Already in [McDermott and Doyle, 1980], where expansions were first introduced, this undesirable phenomenon was observed. As a way out of the problem, McDermott [1982] suggested to replace in (1) the propositional derivability by the derivability ⊢_S in a modal logic S. If S contains the inference rule of necessitation, then the counterintuitive properties, like the one indicated above, disappear. A crucial question is what modal logic S should be used. McDermott [1982] considered three well-known logics: T, S4 and S5. He noted that, intuitively, all the axioms of S5 seem to be acceptable. On the other hand, he proved that nonmonotonic S5 collapses to the monotonic logic S5 and, hence, cannot be used as a formal description of defeasible reasoning. Logics S4 and T yield "true" nonmonotonic systems adequate for handling some examples of common-sense reasoning [Shvarts, 1990]. But it is hard to justify why we should give up some of the axioms of S5 while retaining the others if all of them seem to be intuitively justified. It was proved in [McDermott, 1982] that if a logic S contains the necessitation rule, then each S-expansion contains all theorems of S5.
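As an aside, in the propositional case equation (1) with S taken to be classical logic can be explored by brute force. The toy sketch below (ours) restricts introspection to the two objective atoms p and q, treats Lp and Lq as fresh propositional letters, and encodes the single premise ¬Lp ⊃ q ("q holds if p is not believed"); it finds the unique expansion kernel {q}.

```python
from itertools import product

ATOMS = ("p", "q", "Lp", "Lq")   # Lp, Lq treated as fresh letters

def entails(premises, goal):
    """Classical entailment, checked by truth tables over ATOMS."""
    for vals in product([False, True], repeat=len(ATOMS)):
        m = dict(zip(ATOMS, vals))
        if all(f(m) for f in premises) and not goal(m):
            return False
    return True

# I = { ~Lp -> q }, i.e. "q holds if p is not believed".
I = [lambda m: m["Lp"] or m["q"]]

def is_expansion(kernel):
    """Fixpoint check for (1): the objective formulas derived from
    I + {~La : a not in T} must reproduce the guessed kernel."""
    negs = [(lambda m, a=a: not m["L" + a])
            for a in ("p", "q") if a not in kernel]
    derived = {a for a in ("p", "q")
               if entails(I + negs, lambda m, a=a: m[a])}
    return derived == kernel

print([k for k in (set(), {"p"}, {"q"}, {"p", "q"}) if is_expansion(k)])
# [{'q'}]
```

Restricting the negative-introspection premises to the two relevant objective formulas is a simplification: equation (1) quantifies over all sentences ψ outside T, which a finite sketch cannot enumerate.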
Hence, the presence or absence of some of the axioms of S5 has no effect on the fundamental property of expansions that they are closed under provability in S5. This result may be regarded as evidence that the search for modal logics which may yield useful nonmonotonic systems should focus on two classes of modal logics: those close to S5, satisfying many of the axioms of S5, and those which satisfy none or few of them.

438 Schwarz
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

So far, logics close to S5 have received substantially more attention, most notably the modal logics KD45, S4.4 and S4F. It has been shown [Shvarts, 1990; Schwarz, 1991a; Truszczynski, 1991b] that if we disregard inconsistent expansions, the nonmonotonic modal logic KD45 is equivalent to the celebrated autoepistemic logic of Moore, logic S4.4 is equivalent to the autoepistemic logic of knowledge introduced in [Schwarz, 1991a], and Reiter's default logic can be naturally embedded in the nonmonotonic logic S4F, which, in turn, has a natural interpretation as a logic of minimal (or grounded) knowledge [Schwarz and Truszczynski, 1992]. The logics KD45, S4.4 and S4F share several common features. They are close to S5 and admit a natural epistemic interpretation as logics of belief (KD45), true belief (S4.4), and knowledge (S4F). Finally, each is maximal with respect to some property of nonmonotonic logics. The logics KD45 and S4.4 are maximal with respect to the property of producing exactly one expansion for modal-free theories [Schwarz, 1991a]. The logic S4F is a maximal logic for which theories without positive modalities (like the theory {Lp ⊃ p} considered by Konolige [1988]) have a unique expansion [Schwarz and Truszczynski, 1992]. Logics close to S5 are normal, that is, they contain the necessitation rule and the modal axiom schema

L(φ ⊃ ψ) ⊃ (Lφ ⊃ Lψ).
Normal modal logics possess an elegant and natural semantics, namely, the possible world semantics introduced by Kripke [1963]. Kripke semantics makes it easy to investigate normal modal logics, and was fruitfully exploited in investigations of nonmonotonic logics based on normal modal logics [McDermott, 1982; Moore, 1984; Levesque, 1990; Shvarts, 1990; Marek et al., 1991]. In this paper we will focus our attention on the other end of the spectrum of modal logics. We will consider nonmonotonic logics that correspond to weak modal logics that satisfy none or only few axioms of S5. Namely, we will consider logics not satisfying the axiom K. Such logics are not normal. We will refer to them as subnormal. Some non-normal logics have been investigated by philosophers [Kripke, 1965; Segerberg, 1971]. (In fact, the first three modal logics introduced, S1, S2 and S3, were not normal; see [Feys, 1965] for a historical survey.) Because these logics were aimed at eliminating the so-called "paradoxes of material implication", they do not contain the necessitation rule, but contain the axiom K or some of its weaker versions. Hence, they are different from the subnormal modal logics considered in this paper. There are at least two reasons why it is important to consider nonmonotonic logics based on weak modal logics. First, according to one of the results of [McDermott, 1982], if T and S are two modal logics such that T ⊆ S ⊆ S5, then each T-expansion is an S-expansion, but the converse does not hold in general. In other words, when we replace a logic S by a weaker one, say T, then often some of the expansions disappear. Hence, using weak modal logics in the schema (1) offers a possible solution to the problem of ungrounded expansions (see Konolige [1988]). Secondly, the assumption that a reasoner has the power of reasoning in a strong modal logic such as KD45, S4.4 or S4F may not be a realistic one.
Therefore, it is important to study what types of reasoning can be modeled if weak modal logics are used instead. It turns out that in the nonmonotonic case we do not lose anything by restricting an agent's reasoning capabilities. Namely, and it is perhaps the most surprising result of our work, we show that the nonmonotonic modal logics KD45 and S4.4 (that is, essentially, autoepistemic logic and the autoepistemic logic of knowledge) can be embedded into nonmonotonic modal logics corresponding to some very weak modal logics containing necessitation. A similar result is also known to hold for default logic [Truszczynski, 1991b]. In this paper we study two modal logics. Both logics are assumed to be closed under the uniform substitution rule and contain propositional calculus and the rule of necessitation. These requirements specify the first of them, the logic N of pure necessitation. It was first introduced and investigated in [Fitting et al., 1992] but its nonmonotonic counterpart had been studied earlier in [Marek and Truszczynski, 1990]. The second logic is obtained from N by adding the axiom schema T: Lφ ⊃ φ. It will be referred to as the logic NT. The logic N is clearly distinguished among all modal logics contained in S5 and containing the necessitation rule. It is the weakest one. It is also the weakest logic without counterintuitive expansions containing both ψ and ¬Lψ. It is interesting to note that N-expansions were introduced already in the pioneering work of Moore [1985] under the name "modal fixed points". He also noticed that all N-expansions are stable expansions, but not vice versa, and suggested the interpretation of "nonmonotonic N" as a logic of "justified belief". In [Marek and Truszczynski, 1990] nonmonotonic N was studied in detail under the name strong autoepistemic logic. The reasons we are interested in the logic NT are the following.
First, while KD45 is quite commonly accepted as a logic of belief, there is no consensus as to the "right" modal logic of knowledge. For example, in the monograph [Lenzen, 1978] among all the modal axioms of S5 only the axiom T is essentially unquestionable (you cannot know a false proposition, you can only believe that it is true). Hence, NT may be regarded as a part of any reasonable logic of knowledge. Another reason is more formal in nature. Often modal formulas describing examples of common-sense reasoning have no nested modalities, and no negative occurrences of L. For example, "p is true by default" is usually expressed as ¬L¬p ⊃ p. For such theories many nonmonotonic logics coincide. For example, it is proved in [Marek and Truszczynski, 1990] that for theories without negative occurrences of L, S-expansions coincide for all modal logics S between N and KD45. Similarly, S-expansions coincide for all modal logics S between NT and S4.4. Recall that KD45 and S4.4 are maximal logics which do not produce "ungrounded" expansions for objective theories. Thus, it seems that T is, probably, the only axiom which makes a difference. Hence, it is important to study the nonmonotonic modal logic corresponding to the weakest modal logic containing the schema T. The problem with logics like N or NT is that they are not closed under the equivalence substitution rule. For example, ⊢_N p ≡ ¬¬p, but ⊬_N Lp ≡ L¬¬p. This means that algebraic semantics for such logics are impossible (if we want ≡ to be interpreted as equality in the algebra of truth values). On the other hand, a Kripke-style semantics for N has been found in [Fitting et al., 1992]. The key idea used in that paper was to treat N as a logic with infinitely many modalities, so that L in Lφ and L in Lψ represent two different modalities, if φ and ψ are syntactically different.
For a long time nonmonotonic modal logics, even if the underlying modal logic was normal, lacked an intuitively clear semantics. Recently, such a semantics has been found [Schwarz, 1992]. In this paper we combine ideas from [Fitting et al., 1992] and [Schwarz, 1992] and obtain a possible world semantics for the nonmonotonic logics N and NT. We apply these semantics to prove that the nonmonotonic modal logics KD45 and S4.4 can be embedded into the nonmonotonic logics N and NT. What is more, the embeddings are very simple. For example, to embed the nonmonotonic logic KD45 (that is, the autoepistemic logic) into the nonmonotonic logic N, it is enough to replace each occurrence of L with ¬L¬L. Consequently, the nonmonotonic logic N is at least as expressive as nonmonotonic KD45: the modality ¬L¬L can be viewed as the "modality of nonmonotonic KD45". At the same time, no faithful embedding of nonmonotonic N (or NT) into nonmonotonic KD45 is known so far. We regard our "embedding" results as the most important results of our paper. They show that subnormal modal logics such as N and NT can easily be used (in the nonmonotonic setting) to simulate other nonmonotonic formalisms and, thus, are viable and powerful knowledge representation tools. Perhaps even more importantly, the expressive power of the nonmonotonic logics N and NT comes at no additional cost in terms of computational complexity. In this paper we study computational properties of the logics N and NT and their nonmonotonic counterparts. It turns out that the logics N and NT behave similarly to logics close to S5 (such as S5 itself, KD45, S4.4 and S4.3.2). Namely, in the monotonic case, it is NP-complete to decide if a theory is consistent in N (or NT), and it is Σ₂ᵖ-complete to decide the existence of an N- (NT-) expansion.
Since the complexity of reasoning with autoepistemic and default logics is also located on the second level of the polynomial hierarchy [Gottlob, 1992], nonmonotonic logics N and NT are computationally equivalent to these two classic nonmonotonic systems. It is worth noting that logics "in the middle" of the spectrum, that is, those containing K but still "far away" from S5, such as K or T, are much more complex (satisfiability is PSPACE-complete). Hence, our complexity results provide yet another justification for focusing on logics that are either very weak (subnormal) or very strong (close to S5). Some properties which are quite easy to prove for monotonic modal logics are not at all obvious for nonmonotonic modal logics. For example, adding explicit definitions of the form q ≡ φ does not affect the q-free fragment of the set of logical consequences of a theory. Some nonmonotonic logics, for example the logic of moderately grounded expansions, do not have this property [Schwarz, 1991b]. This is rather unfortunate, since it means that simply introducing a new notation can change the fragment of the agent's knowledge that does not involve this new notation at all. In this paper we prove that for every (even subnormal) modal logic S, introducing an explicit definition of the form q ≡ φ does not affect the q-free fragments of S-expansions.

Kripke semantics for the logics N and NT

In this section we recall the semantics introduced in [Fitting et al., 1992] and [Truszczyński, 1992] for the logics N and NT. By a multi-relational Kripke model (or, simply, m-r Kripke model) we mean a triple M = (M, {R_φ}_{φ∈L_L}, V), where M is a nonempty set, commonly referred to as the set of (possible) worlds of the model, each R_φ, for φ ∈ L_L, is a binary relation on M, and V is a function on M, called a valuation function, assigning to each world α ∈ M a subset V(α) of the propositional variables of the language.
Hence, the only difference between multi-relational Kripke models and standard Kripke models is that the former have infinitely many accessibility relations while the latter have just one. The notion of truth of a formula φ in a world α of a multi-relational Kripke model M = (M, {R_φ}_{φ∈L_L}, V), denoted (M, α) ⊨ φ, is defined recursively on the length of the formula. The only case where the definition differs from the usual one is the case of the modal operator L. If φ is of the form Lψ, we define (M, α) ⊨ φ if and only if for every β ∈ M such that αR_ψβ we have (M, β) ⊨ ψ. A formula φ is valid in a multi-relational Kripke model M if (M, α) ⊨ φ for every α ∈ M.

Theorem 1 ([Fitting et al., 1992]) Let I ⊆ L_L and let φ ∈ L_L. Then I ⊢_N φ if and only if φ is valid in every multi-relational Kripke model in which I is valid.

A multi-relational Kripke model is reflexive if each of its accessibility relations is reflexive.

Theorem 2 ([Truszczyński, 1992]) Let I ⊆ L_L and let φ ∈ L_L. Then I ⊢_NT φ if and only if φ is valid in every reflexive multi-relational Kripke model in which I is valid.

Minimal model semantics for the nonmonotonic logics N and NT

We will follow the approach of [Schwarz, 1992], where a minimal model semantics was proposed for the nonmonotonic logic S for a wide class of normal modal logics S. Speaking more precisely, we will consider universal S5-models as special m-r Kripke models (with the same, universal, accessibility relation for every formula) and we will adapt the notion of minimality introduced in [Schwarz, 1992] to the class of all m-r models and the class of all reflexive m-r models. Let M′ = (M′, {R′_φ}_{φ∈L_L}, V′) and M″ = (M″, M″ × M″, V″) be an m-r Kripke model and a Kripke S5-model, respectively. Assume also that M′ and M″ are disjoint. By the concatenation of M′ and M″, denoted M′ ∘ M″, we mean the m-r Kripke model (M′ ∪ M″, {R_φ}_{φ∈L_L}, V′ ∪ V″), where R_φ = R′_φ ∪ ((M′ ∪ M″) × M″).
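The formula-indexed truth clause for L can be made concrete in a small sketch. The tuple encoding of formulas and all names below are our own illustration, not the paper's notation; the point is that each relation R_ψ is looked up by the syntax of ψ, so Lp and L¬¬p are evaluated against independent relations.

```python
# Sketch of truth in a multi-relational Kripke model (encoding is ours).
# Formulas are nested tuples: ('var','p'), ('not',f), ('and',f,g), ('L',f).

def holds(world, formula, R, V):
    """(M, world) |= formula; R maps each argument psi of L to psi's own
    accessibility relation, given as a set of (world, world) pairs."""
    op = formula[0]
    if op == 'var':
        return formula[1] in V[world]
    if op == 'not':
        return not holds(world, formula[1], R, V)
    if op == 'and':
        return holds(world, formula[1], R, V) and holds(world, formula[2], R, V)
    if op == 'L':
        psi = formula[1]
        # L-psi quantifies over the relation indexed by psi's syntax.
        return all(holds(b, psi, R, V)
                   for (a, b) in R.get(psi, set()) if a == world)
    raise ValueError(op)

# Syntactically different arguments of L get independent relations, which is
# why N need not validate Lp == L~~p even though p == ~~p is a tautology.
V = {0: {'p'}, 1: set()}
p = ('var', 'p')
nnp = ('not', ('not', p))
R = {p: {(0, 1)}, nnp: set()}
print(holds(0, ('L', p), R, V))    # False: p fails at the R_p-successor 1
print(holds(0, ('L', nnp), R, V))  # True: no R_{~~p}-successors at all
```
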
A Kripke S5-model M is an N-minimal (NT-minimal) model of I if M ⊨ I and there does not exist an m-r Kripke model (reflexive m-r Kripke model) M′ such that:
1. M′ ∘ M ⊨ I;
2. for some β ∈ M′, V′(β) differs from V(α) for all α ∈ M.

Speaking informally, an S5-model M is N-minimal if it is not a common final cluster for all the accessibility relations of any N-model with at least one world different from all the worlds of M. This notion of minimality may seem a bit exotic at first. In particular, it is different from the notion of a minimal universal S5-model as discussed by Halpern and Moses [1985], who consider minimality in the class of universal S5-models with respect to the inclusion relation on the sets of worlds of models. On the other hand, a careful examination of our notion of minimality shows that it is very closely related to the notion of minimality used by Moore [1984] and Levesque [1990] in their characterizations of stable expansions in the autoepistemic logic. We refer the reader to [Schwarz, 1992; Schwarz and Truszczyński, 1992] for a more detailed discussion of the minimal knowledge paradigm and comparisons between existing approaches. Similar intuitions behind the notions of minimality studied here will be provided in the full paper.

Theorem 3 For every theory I ⊆ L_L and for every consistent theory T ⊆ L_L, T is an N-expansion for I if and only if T is the set of all formulas valid in some N-minimal model for I.

Theorem 4 For every theory I ⊆ L_L and for every consistent theory T ⊆ L_L, T is an NT-expansion for I if and only if T is the set of all formulas valid in some NT-minimal model for I.

These two results show that the semantic notion of N- (NT-) minimal models is in exact correspondence with the syntactic notion of an N- (NT-) expansion as specified by equation (1).

Expressive power of nonmonotonic logics N and NT

In the monotonic setting the logics N and NT are very weak.
In fact, the logic N is the weakest modal nonmonotonic logic containing necessitation. Despite this, nonmonotonic logics N and NT are powerful nonmonotonic formalisms. It is known that default logic can be embedded into the nonmonotonic logics N and NT so that there is a one-to-one correspondence between extensions of default theories and N- and NT-expansions of the corresponding modal theories [Truszczyński, 1991b; Truszczyński, 1991a]. The main goal of this section is to show that the autoepistemic logic, too, can be embedded into the nonmonotonic modal logics N and NT. The autoepistemic logic interprets the modality L as the belief modality. The interpretation that logics N and NT give to the operator L is more that of knowledge than belief. Having a formula Lφ in the set of consequences of a theory I (in either of these logics) means that we are able to provide a very rigorous proof of Lφ from our initial assumptions I. In the case of logic N, in such a proof we only use propositional calculus and necessitation. Hence, in particular, if I is modal-free, the only way to have a formula Lφ in the set of consequences of I is to have φ among the propositional consequences of I. In the case of the logic NT the situation is similar. The only difference is that we are also allowed to make use of the axiom schema T (Lφ ⊃ φ), which seems to be an uncontroversial property of the knowledge modality. For a formula φ, by φ^N we denote the result of simultaneously replacing each occurrence of L in φ by ¬L¬L. For a theory I, we define I^N = {φ^N : φ ∈ I}. The following theorem shows that under the mapping I ↦ I^N, stable expansions for I are precisely N- (NT-) expansions for I^N.

Theorem 5 Let T ⊆ L_L be consistent and let I ⊆ L_L. Then T is an autoepistemic expansion for a theory I if and only if T is an N-expansion for I^N, and if and only if T is an NT-expansion for I^N.
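The mapping φ ↦ φ^N of Theorem 5 is purely syntactic and easy to sketch. The tuple encoding of formulas below is our own illustration, not the paper's notation:

```python
# phi^N: simultaneously replace each occurrence of L by ~L~L (the
# translation underlying the embedding of nonmonotonic KD45 into N).

def translate(formula):
    op = formula[0]
    if op == 'var':
        return formula
    if op == 'not':
        return ('not', translate(formula[1]))
    if op == 'L':
        # L psi  becomes  ~ L ~ L psi^N
        inner = translate(formula[1])
        return ('not', ('L', ('not', ('L', inner))))
    # remaining binary connectives: translate both arguments
    return (op, translate(formula[1]), translate(formula[2]))

p = ('var', 'p')
print(translate(('L', p)))
# ('not', ('L', ('not', ('L', ('var', 'p')))))
```

For a theory I, the image I^N = {φ^N : φ ∈ I} is then obtained by mapping `translate` over I.
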
A similar correspondence between autoepistemic logic and reflexive autoepistemic logic (the nonmonotonic logic S4.4) under the same translation was discovered earlier in [Schwarz, 1991a]. Reflexive autoepistemic logic of [Schwarz, 1991a] is faithfully embedded into Moore's logic by means of the translation which rewrites each Lφ as φ ∧ Lφ. Combining these two translations, we easily obtain a translation of reflexive autoepistemic logic into nonmonotonic N, too.

Algorithmic aspects of reasoning with nonmonotonic logics N and NT

Reasoning in modal logics is often very complex. For example, it is PSPACE-complete to decide whether, for a given theory I and a formula φ, I ⊢_S4 φ (where ⊢_S4 denotes the provability operator of the logic S4). For stronger logics such as KD45, S4.4, S4F and S5, the problem of deciding whether a formula φ is a consequence of a theory I becomes easier, namely co-NP-complete (assuming PSPACE does not collapse to the first level of the polynomial hierarchy). In this section, we will study the complexity of reasoning in the logics N and NT.

Theorem 6 The problems of deciding, for a given finite set I of formulas, whether I is consistent with the logic N, or with the logic NT, are NP-complete (in the length of I). The problem of deciding, for a given finite set I and formula φ, whether I ⊢_S φ, is co-NP-complete, for S being N or NT.

Our complexity results have direct implications for the complexity of reasoning in the nonmonotonic logics N and NT. The results for the case of the logic N have been obtained by Gottlob [1992]. Therefore, we derive here only the results for the nonmonotonic logic NT. As the main tool in our argument we will use a syntactic characterization of NT-expansions which follows from general results given in [Shvarts, 1990; Marek et al., 1991].
Adapting this characterization to the case of logic NT, we obtain complexity results for the following algorithmic problems associated with the nonmonotonic logic NT:

EXISTENCE Given a finite theory A ⊆ L_K, decide if A has an NT-expansion;

IN-SOME Given a finite theory A ⊆ L_K and a formula φ ∈ L_K, decide if φ is in some NT-expansion of A;

NOT-IN-ALL Given a finite theory A ⊆ L_K and a formula φ ∈ L_K, decide if there is an NT-expansion for A not containing φ;

IN-ALL Given a finite theory A ⊆ L_K and a formula φ ∈ L_K, decide if φ is in all NT-expansions of A.

Theorem 7 The problems EXISTENCE, IN-SOME and NOT-IN-ALL are Σ₂ᵖ-complete. The problem IN-ALL is Π₂ᵖ-complete.

For the case of the logic N the same complexity results have been obtained by Gottlob [1992]. Since reasoning in default and autoepistemic logics is also Σ₂ᵖ- or Π₂ᵖ-complete (depending on the type of question) [Gottlob, 1992], it follows that computationally our nonmonotonic logics are equivalent to these two "classic" nonmonotonic formalisms.

Explicit definitions

All standard logics are conservative with respect to adding explicit definitions. Speaking informally, naming a formula by a new propositional symbol does not change the set of theorems in the original language. Since logics are supposed to model general principles of reasoning and to be applicable in a wide spectrum of domains, the property that the set of conclusions is (essentially) invariant under new names seems natural and desirable. It is known (see [Schwarz and Truszczyński, 1992]) that those nonmonotonic logics in the McDermott and Doyle family which are based on normal modal logics are conservative with respect to adding new names. That is, after a new name is introduced, each expansion of the resulting theory is a conservative extension of an expansion of the original theory.
In this paper we show, using different means than in [Schwarz and Truszczyński, 1992] (the proof given there does not carry over to the case of subnormal modal logics), that every modal nonmonotonic logic in the family of McDermott and Doyle, in particular the nonmonotonic logics N and NT, shares this desirable property. For a theory I ⊆ L_L, we define I_{q|η} = I ∪ {q ≡ η}. Let p be a propositional variable and let ψ and φ be formulas. We write ψ(p/φ) to denote the result of substituting φ for p uniformly in ψ. Clearly, if p does not occur in ψ then ψ(p/φ) = ψ. Let q be an atom not in L_L and let η be a formula from L_L. For a theory T ⊆ L_L, define T_{q|η} = {ψ ∈ L_L^q : ψ(q/η) ∈ T}, where L_L^q denotes the extension of the language L_L by q. The following theorem states that nonmonotonic N and NT admit explicit definitions.

Theorem 8 Let S be a modal logic and let I ⊆ L_L. A theory S′ is an S-expansion for I_{q|η} if and only if S′ = T_{q|η} for some S-expansion T of I. Moreover, S′ is a conservative extension of T, that is, T is exactly the q-free part of S′.

Conclusions

In this paper we have studied the nonmonotonic logics that can be obtained by the method of McDermott and Doyle from two very weak monotonic modal logics: the logic N of pure necessitation and its extension, the logic NT. We argued that despite the fact that these logics lack several of the axiom schemata that characterize properties of knowledge and belief, the resulting nonmonotonic systems are at least as suitable for representing these notions as are autoepistemic, reflexive autoepistemic and default logics. We developed a minimal model semantics for the nonmonotonic logics N and NT. We proved that autoepistemic logic can be embedded into either of these two logics (the same result for default logic was already known). We showed that both our nonmonotonic logics have the desirable property of being conservative with respect to adding new names.
We also showed that reasoning in the monotonic logics N and NT is computationally equivalent to reasoning in propositional logic, and we established the complexity of reasoning with the nonmonotonic logics N and NT. Our results show that the complexity of reasoning with nonmonotonic logics N and NT is located on the second level of the polynomial hierarchy and, thus, is the same as the complexity of reasoning with autoepistemic, reflexive autoepistemic and default logics. Finally, nonmonotonic logics N and NT are conservative with respect to adding explicit definitions. All these results indicate that nonmonotonic logics N and NT are viable and powerful nonmonotonic formalisms.

Acknowledgements

The second author was partially supported by the National Science Foundation under grant IRI-9012902.

References

Feys, R. 1965. Modal Logics. E. Nauwelaerts, Louvain.

Fitting, M.C.; Marek, W.; and Truszczyński, M. 1992. Logic of necessitation. Journal of Logic and Computation 2:349-373.

Gottlob, G. 1992. Complexity results for nonmonotonic logics. Journal of Logic and Computation 2:397-425.

Halpern, J.Y. and Moses, Y. 1985. Towards a theory of knowledge and ignorance: preliminary report. In Apt, K., editor, Logics and Models of Concurrent Systems. Springer-Verlag. 459-476.

Konolige, K. 1988. On the relation between default and autoepistemic logic. Artificial Intelligence 35:343-382.

Kripke, S. 1963. Semantical analysis of modal logic I: Normal modal propositional calculi. Zeitschrift für mathematische Logik und Grundlagen der Mathematik 9:67-96.

Kripke, S. 1965. Semantical analysis of modal logic II: Non-normal modal propositional calculi. In Addison, J.W.; Henkin, L.; and Tarski, A., editors, The Theory of Models. North-Holland, Amsterdam. 206-220.

Lenzen, W. 1978. Recent Work in Epistemic Logic, volume 30 of Acta Philosophica Fennica. North-Holland, Amsterdam.

Levesque, H.J. 1990. All I know: a study in autoepistemic logic.
Artificial Intelligence 42:263-309.

Marek, W. and Truszczyński, M. 1990. Modal logic for default reasoning. Annals of Mathematics and Artificial Intelligence 1:275-302.

Marek, W.; Shvarts, G.F.; and Truszczyński, M. 1991. Modal nonmonotonic logics: ranges, characterization, computation. In Second International Conference on Principles of Knowledge Representation and Reasoning, KR '91, San Mateo, CA. Morgan Kaufmann. 395-404. An extended version of this article will appear in the Journal of the ACM.

McCarthy, J. 1980. Circumscription - a form of non-monotonic reasoning. Artificial Intelligence 13:27-39.

McDermott, D. and Doyle, J. 1980. Nonmonotonic logic I. Artificial Intelligence 13:41-72.

McDermott, D. 1982. Nonmonotonic logic II: Nonmonotonic modal theories. Journal of the ACM 29:33-57.

Moore, R.C. 1984. Possible-world semantics for autoepistemic logic. In Reiter, R., editor, Proceedings of the Workshop on Non-Monotonic Reasoning. 344-354. (Reprinted in: M. Ginsberg, editor, Readings on Nonmonotonic Reasoning, pages 137-142, 1990, Morgan Kaufmann.)

Moore, R.C. 1985. Semantical considerations on nonmonotonic logic. Artificial Intelligence 25:75-94.

Reiter, R. 1980. A logic for default reasoning. Artificial Intelligence 13:81-132.

Schwarz, G.F. and Truszczyński, M. 1992. Modal logic S4F and the minimal knowledge paradigm. In Proceedings of TARK 1992, San Mateo, CA. Morgan Kaufmann.

Schwarz, G.F. 1991a. Autoepistemic logic of knowledge. In Nerode, A.; Marek, W.; and Subrahmanian, V.S., editors, Logic Programming and Non-monotonic Reasoning. MIT Press. 260-274.

Schwarz, G.F. 1991b. Bounding introspection in nonmonotonic logics. In Third International Conference on Principles of Knowledge Representation and Reasoning, KR '92, Cambridge, MA.

Schwarz, G.F. 1992. Minimal model semantics for nonmonotonic modal logics. In Proceedings of LICS-92.

Segerberg, K. 1971. An essay in classical modal logic.
Uppsala University, Filosofiska Studier 13.

Shvarts, G.F. 1990. Autoepistemic modal logics. In Parikh, R., editor, Proceedings of TARK 1990, San Mateo, CA. Morgan Kaufmann. 97-109.

Truszczyński, M. 1991a. Embedding default logic into modal nonmonotonic logics. In Nerode, A.; Marek, W.; and Subrahmanian, V.S., editors, Logic Programming and Non-monotonic Reasoning. MIT Press. 151-165.

Truszczyński, M. 1991b. Modal interpretations of default logic. In Proceedings of IJCAI-91, San Mateo, CA. Morgan Kaufmann. 393-398.

Truszczyński, M. 1992. A generalization of Kripke semantics to the case of logics without axiom K. Manuscript.
Algebraic Semantics for Cumulative Inference

Zbigniew Stachniak*
Department of Computer Science, University of Toronto
Toronto, Ontario, M5S 1A4, Canada
zbigniew@co.yorku.ca

Abstract

In this paper we propose preferential matrix semantics for nonmonotonic inference systems and show how this algebraic framework can be used in methodological studies of cumulative inference operations.

Introduction

The notion of a cumulative inference operation arose as a result of formal studies of properties of nonmonotonic inference systems, more specifically, as a result of the search for desired and natural formal properties of such inference systems. In this paper we propose an algebraic semantics for cumulative inference systems and show how this new semantic framework can be used for the methodological studies of nonmonotonic reasoning. The point of departure for our presentation are the studies of nonmonotonic inference systems undertaken in (Brown & Shoham 1988, Gabbay 1985, Kraus, Lehmann, & Magidor 1990, Makinson 1988). All these works share a common preferential model-theoretic view of semantics. In (Makinson 1988) and (Makinson 1989) this unified semantic framework assumes the form of the theory of preferential model structures. If 𝓛 is the language of an inference system, then a preferential model structure for 𝓛 is a triple M = ⟨U, ⊨, ≺⟩, where U is a nonempty set whose elements are called models, ≺ is a binary relation on U, called the preference relation of M, and ⊨ is a binary relation between models in U and formulas of 𝓛, called the satisfaction relation of M. No properties of U, of the satisfaction relation ⊨, nor of the preference relation ≺ are assumed. Makinson shows that cumulative inference systems are exactly those defined by the class of (stoppered) preferential model structures.

*On leave from the Department of Computer Science, York University, Canada.
Research supported by the Natural Sciences and Engineering Research Council of Canada.

The study of general properties of cumulative inference systems can be based on a less general and more structured notion of a model structure. The key feature of our semantic proposal is the truth-functional interpretation of logical connectives, an idea well-developed in the context of logical calculi which can also be exploited in the studies of nonmonotonic reasoning. We define a preferential model, or as it is called in this paper, a preferential matrix, as an algebra of truth-values augmented with a family 𝒟 of sets of designated truth-values. We model a desired degree of nonmonotonicity by selecting an appropriate preference relation on 𝒟. Preferential matrices have the same semantic scope as Makinson's preferential model structures. Evident similarities between 'classical' logical matrices and preferential matrices provide access to rich algebraic techniques available for methodological studies of deductive proof systems. In this context, the present paper examines a list of properties of nonmonotonic inference systems, starting with the characterization of cumulativity and loop-cumulativity in terms of preferential matrices. We introduce a handy notion of the monotone base of an inference system and study the distributivity property in terms of this notion. Finally, we look at the consistency preservation property in the context of finding automated theorem proving methods for cumulative inference systems. We give a criterion for such systems to have a refutationally equivalent automated proof system based on the resolution rule. In this paper we study inference systems on the propositional level only. It is assumed that the reader is familiar with (Gabbay 1985, Kraus, Lehmann, & Magidor 1990, Makinson 1988).
Familiarity with (Brown & Shoham 1988, Makinson 1989) and with the basic facts on logical matrices, as presented in (Wójcicki 1988), is an asset.

Preferential Matrices

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

We begin this section with a brief description of the class of cumulative inference systems. To avoid a lengthy exposition of facts available elsewhere, this description is just a list of definitions with rather scarce commentaries. The reader may refer to (Gabbay 1985, Kraus, Lehmann, & Magidor 1990, Makinson 1988, Makinson 1989, Wójcicki 1988), where all these definitions are fully motivated and discussed. A propositional language 𝓛 is defined in the usual way in terms of a finite set {f₀, ..., f_k} of logical connectives and a countably infinite set of propositional variables. By L we denote the set of all well-formed formulas of 𝓛. From the algebraic point of view, 𝓛 is an algebra ⟨L, f₀, ..., f_k⟩ of formulas (in fact, an absolutely free algebra generated by the propositional variables), while logical substitutions are simply endomorphisms of 𝓛. Following (Makinson 1988), we say that an operation C : 2^L → 2^L is a cumulative inference operation if it satisfies the following two conditions: for all X, Y ⊆ L,

(c1) X ⊆ C(X) (inclusion),
(c2) X ⊆ Y ⊆ C(X) implies C(Y) = C(X) (cumulativity).

These (or equivalent) conditions were discussed in depth in (Gabbay 1985, Kraus, Lehmann, & Magidor 1990, Makinson 1988, Makinson 1989). Let us note that (c1) and (c2) imply:

(c3) C(C(X)) = C(X) (idempotence).

If α ∈ L and X ⊆ L, then we read 'α ∈ C(X)' as 'X entails α'. An inference operation C is a consequence operation if, in addition to (c1) and (c3), it satisfies the following condition: for every X, Y ⊆ L,

(c4) X ⊆ Y implies C(X) ⊆ C(Y) (monotonicity).

Every system ⟨𝓛, C⟩, where C is a cumulative inference operation on 𝓛, is called an inference system. If C is a consequence operation, then ⟨𝓛, C⟩ is called a logic.
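For a finite language, conditions (c1) and (c2) can be checked by brute force. The sketch below (the dictionary representation of C and all names are our own) tests a candidate operation over all subsets of a small set of atomic formulas:

```python
from itertools import combinations

def powerset(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_cumulative(C, atoms):
    """C is a dict mapping each frozenset X of formulas to C(X)."""
    for X in powerset(atoms):
        if not X <= C[X]:                        # (c1) inclusion
            return False
        for Y in powerset(atoms):
            if X <= Y <= C[X] and C[Y] != C[X]:  # (c2) cumulativity
                return False
    return True

atoms = {'a', 'b'}
C_id = {X: X for X in powerset(atoms)}           # C(X) = X
C_bad = dict(C_id)
C_bad[frozenset()] = frozenset(atoms)            # C({}) = {a,b} but C({a}) = {a}
print(is_cumulative(C_id, atoms))   # True
print(is_cumulative(C_bad, atoms))  # False: {} <= {a} <= C({}) yet C({a}) != C({})
```
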
If C₀, C₁ are two inference operations on 𝓛, then we shall write C₀ ≤ C₁ if for every X ⊆ L, C₀(X) ⊆ C₁(X). We begin our voyage towards the notion of a preferential matrix by analyzing the classical notion of a logical matrix (cf. Wójcicki 1988). Let 𝓛 = ⟨L, f₀, ..., f_k⟩ be an arbitrary language, fixed for the rest of this paper. A logical matrix is a pair M = ⟨𝒜, 𝒟⟩, where 𝒜 = ⟨A, F₀, ..., F_k⟩ is an algebra of truth-values, with the set A of truth-values and with the operations F₀, ..., F_k serving as interpretations of the connectives f₀, ..., f_k, respectively. (We assume that 𝓛 and 𝒜, as algebras, are of the same similarity type.) The role of 𝒜 is to provide the interpretation of logical connectives and to define the space of truth-values, the possible meanings of formulas of 𝓛. 𝒟 is a family of sets of truth-values (i.e., subsets of A). We consider every d ∈ 𝒟 a set of designated truth-values. Interpretations of formulas of 𝓛 are defined in terms of valuations of 𝓛 into M, which are simply homomorphisms of 𝓛 into the algebra 𝒜 of truth-values. Every logical matrix M defines the consequence operation Cn_M in the following way: for every X ∪ {α} ⊆ L,

(M) α ∈ Cn_M(X) iff for every valuation h and every d ∈ 𝒟, h(X) ⊆ d implies h(α) ∈ d.

The matrix consequence operation Cn_M is always structural, i.e., it satisfies the following property: for every X ⊆ L and every substitution e, e(Cn_M(X)) ⊆ Cn_M(e(X)). Structurality allows us to regard an entailment α ∈ C(X) as the schema representing all entailments of the form e(α) ∈ C(e(X)), where e is any substitution. Moreover, the set of tautologies of a structural inference operation is closed under arbitrary substitutions. In fact, the majority of propositional logics considered in the literature, in addition to being monotonic, are structural.
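Definition (M) is directly executable for a finite matrix. A minimal sketch, using the two-valued matrix of conjunction and the single designated set {1} (a toy choice of ours, not an example from the paper):

```python
from itertools import product

OPS = {'and': lambda x, y: x & y}   # truth table of conjunction on {0, 1}

def value(h, formula):
    """The valuation h, a dict of variable values, extended homomorphically."""
    if formula[0] == 'var':
        return h[formula[1]]
    return OPS[formula[0]](*(value(h, f) for f in formula[1:]))

def cn(X, alpha, variables, D=({1},)):
    """alpha in Cn_M(X) per (M): whenever h(X) lies inside a designated set d,
    h(alpha) must lie in d as well, for every valuation h and every d in D."""
    for vals in product((0, 1), repeat=len(variables)):
        h = dict(zip(variables, vals))
        for d in map(set, D):
            if all(value(h, x) in d for x in X) and value(h, alpha) not in d:
                return False
    return True

p, q = ('var', 'p'), ('var', 'q')
print(cn([('and', p, q)], p, ['p', 'q']))   # True:  p & q entails p
print(cn([p], ('and', p, q), ['p', 'q']))   # False: p does not entail p & q
```
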
Nonmonotonic formalisms depart not only from monotonicity, but frequently from structurality as well. One way of extending matrix semantics to cover all consequence operations (not just the structural ones) is described in (Piochi 1983) and (Stachniak 1988). The key idea is to base the semantic entailment on a set of 'admissible valuations', i.e., to consider generalized matrices of the form ⟨𝒜, 𝒟, ℋ⟩, where 𝒜 and 𝒟 are as before, and ℋ is a subset of the set of all valuations of 𝓛 into 𝒜. In this semantic framework, every consequence operation can be defined by a generalized matrix. Let us note that in (Kraus, Lehmann, & Magidor 1990) a similar idea (of restricting the set of possible interpretations) is used to go beyond structural nonmonotonic inference systems. Our main problem with semantic modeling of cumulative inference systems, however, is not structurality but monotonicity. To get over this problem, we employ the idea of a preference relation so successfully used in the preferential model-theoretic semantics discussed in (Brown & Shoham 1988, Kraus, Lehmann, & Magidor 1990, Makinson 1988, Makinson 1989). We call a system M = ⟨𝒜, 𝒟, ℋ, ⊏⟩ a preferential matrix of 𝓛 if 𝒜, 𝒟, and ℋ are as described earlier and ⊏ is a binary relation on 𝒟. We call ⊏ the preference relation of M, and for every pair d₀, d₁ ∈ 𝒟 we read 'd₀ ⊏ d₁' as 'd₀ is preferred over d₁'.

EXAMPLE 2.0: Let 𝓛 be a language with one binary connective ∨, one unary connective f, and two logical constants 3 and 4. The preferential matrix M we define in this example is rather artificial; however, it is designed to provide simple illustrations of some of the properties of inference operations discussed in this paper. The truth-values of M are 0, 1, 2, 3.
The constants 3 and 4 are interpreted as the truth-values 3 and 4, respectively, while ∨ and f are interpreted as the operations V and F defined in the following tables. The family 𝒟 of designated truth-values has three sets: {0,2,3}, {0,1,3}, and {1,2,3}. The preference relation of M is defined by: {0,1,3} ⊏ {0,2,3} and {1,2,3} ⊏ {0,2,3}. Finally, the set ℋ consists of all valuations of 𝓛 into 𝒜. □

Let M = ⟨𝒜, 𝒟, ℋ, ⊏⟩ be a preferential matrix. For every set X ⊆ L, every set d of designated truth-values of M, and every valuation h ∈ ℋ, we shall write Sat_M(h, X, d) iff h(X) ⊆ d and for every d′ ⊏ d, h(X) ⊈ d′. Intuitively, 'Sat_M(h, X, d)' means that d is a most preferred set of designated truth-values containing h(X). With M we associate the inference operation C_M on 𝓛 by rewriting the definition (M) in the following way: for every X ∪ {α} ⊆ L,

α ∈ C_M(X) iff for every h ∈ ℋ and every d ∈ 𝒟, Sat_M(h, X, d) implies h(α) ∈ d.

One of the conceptual distinctions between the preferential model structures of Makinson and preferential matrices is the fact that in our approach the preference relation does not 'work' on models but on sets of designated truth-values, the components of models. In the definition of the predicate Sat_M(h, X, d), we search 𝒟 for a minimal d (with respect to the preference relation) while keeping h and the algebra 𝒜 of truth-values fixed. However, every preferential matrix ⟨𝒜, 𝒟, ℋ, ⊏⟩ can be 'decomposed' into a preferential model structure ℳ = ⟨U, ⊨, ≺⟩, where U = {⟨𝒜, h, d⟩ : h ∈ ℋ, d ∈ 𝒟}, and ⟨𝒜, h₀, d₀⟩ ≺ ⟨𝒜, h₁, d₁⟩ if and only if h₀ = h₁ and d₀ ⊏ d₁. The satisfaction relation ⊨ is defined by the equivalence:

⟨𝒜, h, d⟩ ⊨ α iff h(α) ∈ d.

Hence, preferential matrices can be considered special cases of preferential model structures. As we shall see shortly, for methodological studies of cumulative inference systems, preferential matrices are just what we need.
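Sat_M and C_M are likewise executable for finite data. In the sketch below (our own simplification: 'formulas' are bare atoms and a valuation is a dict), the family of designated sets and the preference relation echo Example 2.0, and the preference relation is exactly what lets the entailment go through:

```python
def sat(h, X, d, below):
    """Sat_M(h, X, d): h(X) lies in d but in no set preferred over d.
    `below` maps d (as a frozenset) to the list of sets d' with d' preferred."""
    hX = {h[x] for x in X}
    return hX <= d and not any(hX <= d2 for d2 in below.get(frozenset(d), []))

def infer(X, alpha, H, D, below):
    """alpha in C_M(X): Sat_M(h, X, d) implies h(alpha) in d."""
    return all(h[alpha] in d for h in H for d in D if sat(h, X, d, below))

D = [{0, 2, 3}, {0, 1, 3}, {1, 2, 3}]
below = {frozenset({0, 2, 3}): [{0, 1, 3}, {1, 2, 3}]}  # both preferred over {0,2,3}
H = [{'a': 3, 'b': 1}]
print(infer(['a'], 'b', H, D, below))  # True: {0,2,3} is never most preferred here
print(infer(['a'], 'b', H, D, {}))     # False: without preference, {0,2,3} blocks it
```
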
We call a preferential matrix M = ⟨𝒜, 𝒟, ℋ, ⊏⟩ stoppered iff for every set A of truth-values of M, the set 𝒟_A = {d ∈ 𝒟 : A ⊆ d} is empty or has a smallest element, i.e., there exists d_A ∈ 𝒟_A such that for every d ∈ 𝒟_A, d ≠ d_A implies d_A ⊏ d; moreover, for no d ∈ 𝒟_A is d ⊏ d_A true. The notion of a stoppered matrix is a counterpart of a stoppered preferential model structure (cf. Makinson 1988, Makinson 1989): a model structure ⟨U, ⊨, ≺⟩ is said to be stoppered iff for every set A of propositions and every m ∈ U, if m ⊨ A, then there is a minimal n ∈ U (minimal with respect to ≺) such that n ⊨ A and either n = m or n ≺ m. As was pointed out by Makinson,

this notion is partially metamathematical, as it refers to sets A of propositions and the satisfaction relation ⊨ as well as to the non-linguistic components U and ≺ of the model structure. There does not appear to be any exactly equivalent purely mathematical condition. (Makinson 1989)

In contrast to this situation, the notion of a stoppered preferential matrix is defined in purely set-theoretic terms. Let us also note that if a preferential matrix M is stoppered, then so is the preferential model structure ℳ defined as in the previous paragraph.

THEOREM 2.1: If M = ⟨𝒜, 𝒟, ℋ, ⊏⟩ is a stoppered preferential matrix, then C_M is a cumulative inference operation. Moreover, if ℋ is closed under composition with all substitutions of 𝓛, then C_M is structural.

THEOREM 2.2: For every cumulative inference operation C there is a stoppered preferential matrix M = ⟨𝒜, 𝒟, ℋ, ⊏⟩ such that C = C_M. Moreover, if C is structural, then ℋ can be assumed to consist of all valuations.

Theorems 2.1 and 2.2 give us a representation theorem for cumulative inference systems in terms of preferential matrices. One of the matrices that satisfy Theorem 2.2 is M_L = ⟨𝓛, {C(X) : X ⊆ L}, {id}, ⊏⟩, where id is the identity function on L, and C(X) ⊏ C(Y) iff C(X) ≠ C(Y) and for some X′ ⊆ C(Y), C(X) = C(X′).
Its construction resembles that of the so-called Lindenbaum matrix for a logical system (cf. Wójcicki 1988). Henceforth, we shall call it the Lindenbaum matrix for C. A model structure similar to M_L is used in (Makinson 1988) to characterize the class of cumulative inference operations in terms of preferential model structures. There are obvious connections between preferential matrices and logical matrices (for every preferential matrix ⟨𝒜, 𝒟, ℋ, ⊏⟩, ⟨𝒜, 𝒟⟩ is a logical matrix). Hence, one may expect to transfer some of the algebraic tools and techniques developed for logical matrices to the study of cumulative inference systems. In this and the following sections we will try to do just that. Let us consider the so-called loop principle (cf. Kraus, Lehmann, & Magidor 1990, Makinson 1988, Makinson 1989):

(loop) X₀ ⊆ C(X₁), X₁ ⊆ C(X₂), ..., X_{n-1} ⊆ C(X_n), X_n ⊆ C(X₀) implies C(X₀) = C(X_n).

In (Kraus, Lehmann, & Magidor 1990) and (Makinson 1989) this principle has been found to be the counterpart of transitivity of the preference relation in stoppered preferential models. In preferential matrix semantics, loop can be related to the following property of preference relations. We say that a preferential matrix ⟨𝒜, 𝒟, ℋ, ⊏⟩ is loop-free if and only if there is no sequence d₀ ⊏ d₁ ⊏ ... ⊏ d_n ⊏ d₀ of sets in 𝒟. Following (Kraus, Lehmann, & Magidor 1990), we call every inference operation satisfying (loop) loop-cumulative. Our last theorem of this section characterizes the loop principle in the class of cumulative systems.

THEOREM 2.3. Representation Theorem for Loop-Cumulative Inference Operations. An inference operation C is loop-cumulative if and only if it is defined by a stoppered loop-free matrix.

Monotone Bases

Nonmonotonic inference systems are built 'on top' or 'on the basis of' some monotonic logical systems; they depart from their deductive counterparts by giving up monotonicity for some other principles of inference.
The system C presented in (Kraus, Lehmann, & Magidor 1990) is based on the classical logic ⟨L, C₂⟩, and so are all the inference systems ⟨L, C⟩ such that C₂ ≤ C. Frequently, nonmonotonic inference systems are based on non-classical logics: on Kleene's three-valued logic (cf. Doherty 1991), on modal logics (cf. McDermott 1982, Moore 1985), on constructive logic (cf. Pearce 1992), etc. This suggests the following definition:

the monotone base of a cumulative inference operation C is the largest structural consequence operation C_B such that C_B ≤ C (the largest with respect to ≤).

To the best of our knowledge, no formal discussion of the monotone base of inference systems is present in the literature, although many important properties of nonmonotonic inference systems were implicitly defined in terms of this notion. Two such properties, distributivity and consistency preservation, will be studied shortly. The fact that all structural consequence operations on L form a lattice under the ordering ≤ (cf. Wójcicki 1988) implies the following theorem:

THEOREM 3.0: Every inference operation has a unique monotone base.

In (Makinson 1989), a cumulative inference operation C is called supraclassical if C₂ ≤ C, where C₂ denotes the consequence operation of the classical logic. If C is nontrivial, then the above condition is equivalent to the statement that C₂ is the monotone base of C, i.e., that C_B = C₂. Namely, C₂ is a maximal structural consequence operation, i.e., for every structural consequence operation C* ≥ C₂, either C* = C₂ or C* is trivial, i.e., C*(X) = L for all X ⊆ L. Since C is nontrivial, by Theorem 3.0 we must have C_B = C₂.

Before we state our next theorem, let us introduce the following notation. If M = ⟨A, V, H, ⊑⟩ is a preferential matrix, then C_M denotes the inference operation defined by M and Cn_M denotes the consequence operation defined by the logical matrix ⟨A, V⟩.

THEOREM 3.1: For every preferential matrix M, Cn_M ≤ C_M.
Unfortunately, Cn_M is not always the monotone base of C_M. As the next example shows, we can easily find two preferential matrices M₀ and M₁ such that C_{M₀} = C_{M₁} but Cn_{M₀} ≠ Cn_{M₁}.

EXAMPLE 3.2: Let M₀ be as in Example 1.0 and let M₁ be obtained from M₀ by the addition of {3} to the family of designated truth-values and by making this set maximal with respect to the preference relation. The inference operations C_{M₀} and C_{M₁} are identical, while 4 ∈ Cn_{M₀}({3}) − Cn_{M₁}({3}). The reason why C_{M₀} = C_{M₁} is that the addition of the set {3} is 'useless', i.e., it does not modify the inference engine of M₀. However, this addition modifies the 'monotone base' of M₀ represented by Cn_{M₀}. □

THEOREM 3.3: Let C be a cumulative inference operation and let M be the Lindenbaum matrix for C. Then C_B = Cn_M.

By Theorem 3.3, the monotone base of a cumulative inference operation C is defined by the 'logical part' of the Lindenbaum matrix for C. If C is introduced by an arbitrary matrix M, then frequently C_B = Cn_M, provided that M has no 'useless' sets of designated truth-values. The details are as follows. Let M = ⟨A, V, H, ⊑⟩ be a preferential matrix. We call a set d ∈ V useless if for some d* ∈ V, d* ⊑ d and d ⊆ d*. The next theorem shows that all the useless sets can be safely eliminated.

THEOREM 3.4: Let M be a preferential matrix and let N be the matrix obtained from M by removing all useless sets. Then C_M = C_N.

THEOREM 3.5: Let C be a cumulative inference operation defined by a matrix M = ⟨A, V, H, ⊑⟩ without useless sets. If H contains a valuation of L onto A, then C_B = Cn_M.

Theorems 3.4 and 3.5, when applied to a preferential matrix M with the full set of valuations, say that the monotone base of C_M is defined by M with all the useless sets removed. As we have settled the problem of the semantic definition of monotone bases, let us turn to the problem of the interplay between cumulative inference operations and their monotone bases.
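The elimination step of Theorem 3.4 is mechanical once the uselessness test is fixed. The sketch below is my own illustration under one reading of the (OCR-damaged) definition: d is useless when some d* ∈ V is preferred to d while d ⊆ d*. The function name and encodings are assumptions, not from the paper.

```python
def remove_useless(V, pref):
    """Drop every 'useless' designated set: d is useless when some d* in V
    with d* != d satisfies d* ⊑ d (d* preferred to d) and d ⊆ d*.
    By Theorem 3.4 the inference operation C_M is unchanged."""
    def useless(d):
        return any(ds != d and (ds, d) in pref and d <= ds for ds in V)
    return [d for d in V if not useless(d)]
```

For example, if {1, 2} is preferred to {1}, then {1} is useless (it is contained in a preferred set) and is removed.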
In (Makinson 1988, Makinson 1989) it is shown that every supraclassical inference system C defined by a classical preferential model structure (i.e., the satisfaction relation preserves the intended meanings of the classical logical connectives) satisfies the following principle of distributivity: for every X, Y ⊆ L,

(d) C₂(X) = X and C₂(Y) = Y implies C(X) ∩ C(Y) ⊆ C(X ∩ Y).

Makinson's proof exploits the following property of classical models: a disjunction α is satisfied in a model M iff one of the disjuncts of α is satisfied in M. The preferential matrix counterpart of this property, called well-connectivity, is defined as follows. Let M = ⟨A, V, H, ⊑⟩ be a preferential matrix for L and let us suppose that ∨ is one of the binary connectives of L. We say that M is well-connected (with respect to ∨) iff for every pair a, b of truth-values of M and every d ∈ V:

a ∨ b ∈ d iff a ∈ d or b ∈ d.

Well-connectivity is, in fact, a property of the logical matrix ⟨A, V⟩, since its definition is independent of the preference relation of M. Moreover, if M is well-connected, then Cn_M is disjunctive with respect to ∨, i.e., for every X ∪ {α, β} ⊆ L, Cn_M(X ∪ {α ∨ β}) = Cn_M(X ∪ {α}) ∩ Cn_M(X ∪ {β}). We say that a cumulative inference operation C is distributive if it satisfies (d) with C₂ replaced by C_B. Our next theorem extends Makinson's result to a larger class of cumulative operations.

THEOREM 3.6: Every cumulative inference operation defined by a well-connected preferential matrix is distributive.

Another result, first formulated and proved for supraclassical cumulative inference systems, which, when expressed in terms of monotone bases, holds for a larger class of such systems, is presented in the following theorem:

THEOREM 3.7: Every distributive cumulative inference operation is loop-cumulative.

Consistency Preservation

An inference operation C satisfies the consistency preservation property if for every X ⊆ L,

C(X) = L iff C_B(X) = L.
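Since well-connectivity only mentions the truth table of ∨ and the designated sets, it is decidable for finite matrices by brute force. A minimal Python sketch (my own; the name `is_well_connected` and the truth-table encoding are assumptions):

```python
def is_well_connected(values, V, disj):
    """Well-connectivity of the logical matrix <A, V> w.r.t. a binary
    connective: a∨b ∈ d iff a ∈ d or b ∈ d, for all truth-values a, b
    and every designated set d.  'disj' is the truth table of ∨,
    a dict mapping pairs (a, b) to truth-values."""
    return all(
        (disj[(a, b)] in d) == (a in d or b in d)
        for a in values for b in values for d in V
    )
```

Over the two-valued algebra {0, 1} with 1 designated, interpreting ∨ as max is well-connected, while interpreting it as min is not (1 ∨ 0 would be undesignated although one disjunct is designated).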
In other words, this property, when satisfied, sets a limit on how much a given inference operation C and its monotone base C_B may differ. In the context of supraclassical inference systems this property was investigated in (Makinson 1989), to which the reader is referred for examples of inference systems with this property. Let us note that:

THEOREM 4.0: Every cumulative structural inference operation satisfies the consistency preservation property.

The satisfaction of the consistency preservation principle enables the application of the rich refutational automated theorem proving techniques available for logical systems, such as non-clausal resolution or signed tableaux (cf. Hähnle 1991, Stachniak 1991, Stachniak 1992), to cumulative inference systems. In fact, following (Stachniak 1991), we can characterize the class of cumulative inference systems for which a refutationally equivalent resolution proof system can be found. Informally, a logic P is said to be a resolution logic if there exists a resolution-based proof system Rs refutationally equivalent to P, i.e., for every finite set X of formulas, X is inconsistent in P if and only if X can be refuted in Rs (cf. Stachniak 1991, Stachniak 1992). This definition can be extended to cumulative inference systems in the following way: a cumulative inference system ⟨L, C⟩ with the consistency preservation property is said to be a resolution inference system if and only if the logic ⟨L, C_B⟩ is a resolution logic. In the light of this definition, the characterization of the class of resolution logics given in (Stachniak 1991) can be extended to resolution inference systems as follows:

THEOREM 4.1: Let C be a cumulative inference operation with the consistency preservation property.
Then the following conditions are equivalent:

(i) ⟨L, C⟩ is a resolution inference system;
(ii) there exists a finite logical matrix M such that C and Cn_M have the same inconsistent sets;
(iii) for some integer k ≥ 0, L^(k)/θ is finite and for every finite set X ⊆ L, C(X) = L iff C(e(X)) = L, for every substitution e that maps L into L^(k).

In this theorem, L^(k) denotes the set of all formulas of L which are built up by means of the connectives of L and the propositional variables p₀, ..., p_k. θ denotes the congruence relation of L defined in the following way: for every α, β ∈ L, α θ β iff for every X ⊆ L and every γ(p) ∈ L,

C(X ∪ {γ(p/α)}) = L iff C(X ∪ {γ(p/β)}) = L.

If C is a structural cumulative inference operation whose monotone base is defined by a finite logical matrix N, then, by Theorems 4.0 and 4.1, ⟨L, C⟩ is a resolution inference system. Moreover, a resolution-based automated proof system for ⟨L, C⟩ can be effectively constructed from N following, for example, the algorithm described in (Stachniak 1991).

Let us close this section with the following general remark. The application of refutational theorem proving techniques to a particular cumulative inference system ⟨L, C⟩ hinges upon the availability of an effective procedure that reduces the problem of validation of an inference to the problem of inconsistency checking. What is required is an algorithm which for every finite set X of formulas and every formula α constructs a finite set X_α such that the following reduction principle holds:

α ∈ C(X) iff C(X_α) = L.

For example, in the case of the classical propositional logic we can just put X_α = X ∪ {¬α}. The reduction principle, however, is not universally available to all cumulative inference systems, as it is not available to all logics.

Acknowledgment

I am grateful to David Makinson for comments on the first draft of this paper.

References

BROWN, A. L., AND SHOHAM, Y. 1988.
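The classical instance of the reduction principle, X_α = X ∪ {¬α}, can be made concrete with a brute-force truth-table checker. The following Python sketch is my own illustration of that one special case (formula encoding and all names are assumptions); it decides α ∈ C₂(X) by testing X ∪ {¬α} for inconsistency:

```python
from itertools import product

# Formulas as nested tuples: ('var', 'p'), ('not', f), ('and', f, g), ('or', f, g).

def atoms(f):
    if f[0] == 'var':
        return {f[1]}
    return set().union(*(atoms(g) for g in f[1:]))

def ev(f, val):
    op = f[0]
    if op == 'var':
        return val[f[1]]
    if op == 'not':
        return not ev(f[1], val)
    if op == 'and':
        return ev(f[1], val) and ev(f[2], val)
    return ev(f[1], val) or ev(f[2], val)   # 'or'

def inconsistent(X):
    """No valuation satisfies every formula in X."""
    vs = sorted(set().union(*(atoms(f) for f in X))) if X else []
    return not any(
        all(ev(f, dict(zip(vs, bits))) for f in X)
        for bits in product([False, True], repeat=len(vs))
    )

def entails(X, alpha):
    """Reduction principle for classical propositional logic:
    alpha ∈ C₂(X) iff X together with ¬alpha is inconsistent."""
    return inconsistent(list(X) + [('not', alpha)])
```

The general point of the paper is precisely that such a reduction is not guaranteed outside classical logic, so this sketch covers only the classical example mentioned in the text.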
New Results on Semantical Nonmonotonic Reasoning. In Proceedings of the Second International Workshop on Non-Monotonic Reasoning, 19-26. Lecture Notes in Computer Science 346.

DOHERTY, P. 1991. NML3 - A Nonmonotonic Formalism with Explicit Defaults. Ph.D. diss., Dept. of Computer Science, Univ. of Linköping.

GABBAY, D. M. 1985. Theoretical Foundations for Non-Monotonic Reasoning in Expert Systems. In Logics and Models of Concurrent Systems (K. Apt ed.). Springer-Verlag.

HÄHNLE, R. 1991. Uniform Notation of Tableaux Rules for Multiple-Valued Logics. In Proceedings of the Twenty-First International Symposium on Multiple-Valued Logics, 238-245. IEEE Press.

KRAUS, S.; LEHMANN, D.; AND MAGIDOR, M. 1990. Nonmonotonic Reasoning, Preferential Models and Cumulative Logics. Artificial Intelligence 44: 167-207.

MAKINSON, D. 1988. General Theory of Cumulative Inference. In Proceedings of the Second International Workshop on Non-Monotonic Reasoning, 1-18. Lecture Notes in Computer Science 346.

MAKINSON, D. 1989. General Patterns in Nonmonotonic Reasoning. In Handbook of Logic in Artificial Intelligence and Logic Programming. Vol. 2. Nonmonotonic and Uncertain Reasoning (Gabbay, D. M., Hogger, C. J., and Robinson, J. A. eds.). Forthcoming.

MCDERMOTT, D. 1982. Nonmonotonic Logic II: Nonmonotonic Modal Theories. Journal of the Association for Computing Machinery 29: 33-57.

MOORE, R. C. 1985. Semantic Considerations on Nonmonotonic Logic. Artificial Intelligence 25: 75-94.

PEARCE, D. 1992. Default Logic and Constructive Logic. In Proceedings of the Tenth European Conference on Artificial Intelligence, 309-313. John Wiley.

PIOCHI, B. 1983. Logical Matrices and Non-Structural Consequence Operations. Studia Logica 42: 33-42.

STACHNIAK, Z. 1991. Extending Resolution to Resolution Logics. Journal of Experimental and Theoretical Artificial Intelligence 3: 17-32.

STACHNIAK, Z. 1992. Resolution Approximation of First-Order Logics.
Information and Computation 96: 225-244.

STACHNIAK, Z. 1988. Two Theorems on Many-Valued Logics. Journal of Philosophical Logic 17: 171-179.

WÓJCICKI, R. 1988. Theory of Logical Calculi: Basic Theory of Consequence Operations. Kluwer.
Minimal belief and negation as failure: A feasible approach

Antje Beringer and Torsten Schaub
FG Intellektik, TH Darmstadt, Alexanderstraße 10, D-6100 Darmstadt, Germany
{antje,torsten}@intellektik.informatik.th-darmstadt.de

Abstract

Lifschitz introduced a logic of minimal belief and negation as failure, called MBNF, in order to provide a theory of epistemic queries to nonmonotonic databases. We present a feasible subsystem of MBNF which can be translated into a logic built on first order logic and negation as failure, called FONF. We give a semantics for FONF along with an extended connection calculus. In particular, we demonstrate that the obtained system is still more expressive than other approaches.

Introduction

Lifschitz [1991; 1992]¹ introduced a logic of minimal belief and negation as failure, MBNF, in order to provide a theory of epistemic queries to nonmonotonic databases. This approach deals with self-knowledge and ignorance as well as default information. From one perspective, MBNF relies on concepts developed by Levesque [1984] and Reiter [1990] for database query evaluation. In these approaches, databases are treated as first order theories, whereas queries may also contain an epistemic modal operator. In addition to query-answering from a database, this modal operator allows for dealing with queries about the database. From another perspective, Lifschitz' approach relies on the system GK developed by Lin and Shoham [1990], which uses two epistemic operators accounting for the notions of "minimal belief" and "negation as failure". Thus, MBNF can be seen as an extension of GK, which identifies their epistemic operator for minimal belief with the ones used by Levesque and Reiter. MBNF is very expressive. Apart from asking what a database knows, it permits expressing default knowledge and axiomatizing the closed world assumption [Reiter, 1977] and integrity constraints [Kowalski, 1978].
Furthermore, Lifschitz [1992] established close relationships to logic programming, default logic [Reiter, 1980] and circumscription [McCarthy, 1980]. However, Lifschitz' approach is purely semantical and mainly intended to provide a unifying framework for several nonmonotonic formalisms. Consequently, there is no proof theory yet. We address this gap by identifying a subsystem of MBNF and translating it into a feasible system, relying on the fact that in many cases negation as failure is expressive enough to account for the different modalities in MBNF. The resulting system is called FONF (first order logic with negation as failure). We demonstrate that it provides a versatile approach to epistemic query answering for nonmonotonic databases, which are first order theories enriched by beliefs and default statements. Furthermore, we give a clear semantics of FONF along with an extended connection calculus for FONF. Also, we show that FONF is still more expressive than PROLOG and a competing approach [Reiter, 1990].

¹In what follows, we rely on the more recent approach.

Minimal belief and negation as failure

MBNF deals with an extended first order language including two independent modal operators B and not. B is of epistemic nature and represents the notion of "minimal belief", whereas not captures the notion of "negation as failure". A theory T, or database, is a set of sentences. α, β denote sentences; F, G denote formulas. A positive formula (or theory) does not contain not. An objective one contains neither B nor not. For instance, given an ornithological database, we can formalize the default that "we believe that birds fly, unless there is evidence to the contrary" as

∀x (B bird(x) ∧ not ¬fly(x) → B fly(x)).

Now, the idea is to interpret our beliefs by a set of "possible worlds", i.e., B bird(Tweety) is true iff Tweety is a bird in all possible worlds and not ¬fly(Tweety) is true iff Tweety flies in some possible world.
Formally, the truth of a formula is defined wrt a triple (w, W_B, W_not), where w is a first order interpretation, or simply world, representing "the real world", W_B is a set of "possible worlds" defining the meaning of beliefs formalized with B, and W_not serves the same purpose in the case of not. w, W_B and W_not share the same universe, but W_B and W_not do not necessarily include w. Intuitively, this means that beliefs need not be consistent with reality. Thus, Tweety may be believed to fly without actually flying.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Then, the truth of a formula F in MBNF is defined for the language of MBNF extended by names for all elements of the common universe:²

1. For atomic F, (w, W_B, W_not) ⊨_MBNF F iff w ⊨ F.
4. (w, W_B, W_not) ⊨_MBNF ∃x F(x) iff for some name t, (w, W_B, W_not) ⊨_MBNF F(t).
5. (w, W_B, W_not) ⊨_MBNF B F iff for all w' ∈ W_B: (w', W_B, W_not) ⊨_MBNF F.
6. (w, W_B, W_not) ⊨_MBNF not F iff for some w' ∈ W_not: (w', W_B, W_not) ⊭_MBNF F.

The definition of a model in MBNF is restricted to the case where W_B = W_not. Therefore, Lifschitz introduces structures (w, W), where w is a world and W a set of worlds corresponding to W_B and W_not. In particular, he is only interested in ≤-maximal structures, where (w, W) ≤ (w', W') iff W ⊆ W', since they express "the idea of 'minimal belief': The larger the set of 'possible worlds' is, the fewer propositions are believed" [Lifschitz, 1992]. Formally, a model in MBNF is defined by means of a fixed-point operator Γ(T, W), which, given a theory T and a set of worlds W, denotes the set of all ≤-maximal structures (w, W') such that T is true in (w, W', W). Then, a structure (w, W) is an MBNF-model of T iff (w, W) ∈ Γ(T, W). In MBNF, theoremhood is only defined for positive formulas: A positive formula F is entailed by T, T ⊨_MBNF F, iff F is true in all models of T. Thus, query answering is also restricted to positive queries.³ Notice that models of T need not be models of F.
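Clauses 5 and 6 above are easy to animate in the propositional case. The Python sketch below is my own illustration (formula encoding, the name `holds`, and the restriction to propositional atoms are all assumptions): worlds are sets of true atoms, B quantifies over every world in W_B, and not looks for one counter-world in W_not.

```python
def holds(F, w, WB, Wnot):
    """Propositional MBNF-style evaluation in a triple (w, W_B, W_not).
    'B F' holds iff F holds at every world in W_B (clause 5);
    'not F' holds iff F fails at some world in W_not (clause 6)."""
    op = F[0]
    if op == 'atom':
        return F[1] in w
    if op == 'neg':
        return not holds(F[1], w, WB, Wnot)
    if op == 'and':
        return holds(F[1], w, WB, Wnot) and holds(F[2], w, WB, Wnot)
    if op == 'B':
        return all(holds(F[1], wp, WB, Wnot) for wp in WB)
    if op == 'not':
        return any(not holds(F[1], wp, WB, Wnot) for wp in Wnot)
    raise ValueError(op)
```

With W_B = W_not = {{bird, fly}, {bird}}, the Tweety default's premises come out as in the text: B bird holds (Tweety is a bird in all possible worlds), and not ¬fly holds (Tweety flies in some possible world), while B fly fails.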
For instance, Bp is true in all models of B(p ∧ q) and, hence, B(p ∧ q) ⊨_MBNF Bp. However, no MBNF-model of B(p ∧ q) is an MBNF-model of Bp, since none of them is ≤-maximal in satisfying Bp. So, we have to distinguish carefully between formulas in a given theory and formulas posed as queries to that theory.

²⊨ without any subscript denotes first order entailment.
³As regards MBNF-queries, we rely on this restriction throughout the paper.

FONF: A feasible approach to MBNF

We develop a feasible approach to MBNF by identifying a large subclass of MBNF which allows for equivalent formalizations in first order logic plus negation as failure. This is the case whenever a theory T is complete for believed sentences, i.e., whenever we have either T ⊨_MBNF Bα or T ⊨_MBNF ¬Bα for each α. In this case, first order logic with negation as failure is strong enough to capture also the notion of "minimal belief". We thus identify a feasible subclass of MBNF for which we provide a translation into FONF, a first order logic with an additional negation as failure operator not. This translation preserves the above notion of completeness in the sense that an MBNF-theory is complete for believed sentences iff the corresponding FONF-theory is.

As mentioned above, we have to distinguish between queries and sentences in a database. Accordingly, we define the following feasible subset of MBNF for queries and databases separately:

• A feasible query is an MBNF-formula q satisfying:
1. q is positive.
2. Each scope of an ∃ or ¬ in q is either purely subjective or purely objective.
3. If the scope of a ¬ in q is subjective, then it must not contain free variables.

• A feasible database (FDB) is an MBNF-theory containing only rules of the form

F₁ ∧ ... ∧ F_m ∧ not F_{m+1} ∧ ... ∧ not F_n → F_{n+1}

where for n, m ≥ 0 each F_i (i = 1, ..., n+1) is
- either a disjunction-free MBNF-formula where the scope of ¬ is minimal and objective,⁴
- or of the form B(G₁ ∨ . . .
∨ G_k) where the G_i (i = 1, ..., k) are objective formulas.

F_{n+1} may also be an unrestricted objective formula.

These restrictions are not as strong as it seems at first sight: Even default rules, integrity constraints, and closed world axioms can be formalized within FDBs.

In [Lifschitz, 1992], an MBNF-formula F is translated into a first order formula F° to relate MBNF- and first order entailment: A second sort of so-called "world variables" is added to the first order language, appending one of them to each function and predicate symbol (as an additional argument), and introducing a unary predicate B whose argument is such a world variable. A world variable denotes the world in which a certain predicate or function symbol is interpreted, and B accounts for the "accessibility" of a world from the actual world. However, the translation ° is insufficient for creating a deduction method for MBNF. First, it deals only with positive formulas and, therefore, discards a substantial half of MBNF: The modal operator not. Second, only first order entailment carries over to MBNF but not vice versa. That is, roughly speaking, even for positive T and α, T° ⊨ α° implies T ⊨_MBNF α but not vice versa. In this sense, the translation ° is sound but incomplete.

Our approach addresses this shortcoming by translating feasible queries and databases into FONF. This has the following advantages: First, we deal with a much larger subset of MBNF. In particular, we can draw nonmonotonic conclusions by expressing B and not by a first order predicate bel and a negation as failure operator not. Second, our translation is truth-preserving. That is, for feasible queries and databases, FONF-entailment carries over to MBNF and vice versa.

⁴Observe that one cannot distribute ¬ over B.

In this sense, the translation is sound and complete for feasible queries and databases. In the sequel, we give this translation and prove that it is truth-preserving.
Now, FONF-formulas are all formulas that can be built using the connectives and construction rules of first order logic and the unary operator not. The only constraint on FONF-formulas is that variables must not occur free in the scope of not.

The translation * of feasible MBNF-queries and -databases into FONF-formulas is developed in analogy to [Lifschitz, 1992]. We use the predicate bel to translate the MBNF-operator B. Then, a feasible MBNF-formula F, i.e., either a feasible query or a formula belonging to an FDB, is translated into the FONF-formula F* in the following way.

• If F is objective, then
1. F* is obtained by appending the world variable V to each function and predicate symbol in F.
• else (i.e., if F is non-objective)
2. (¬F)* = not F*.
3. (F ∘ G)* = F* ∘ G* for ∘ = ∧, ∨ or →.
4. (Q F)* = Q F* for Q = ∃ or ∀.
5. (B F)* = ∀V (bel(V) → F*).
6. (not F)* = not (B F)*.

Observe that feasible queries must not contain not, so that then Condition 6 does not apply. The translation * depends on the notion of feasible formulas, which obey syntactical restrictions. Thus, we have to account for all connectives. As an example, translating ¬α ∨ ¬Bβ (α, β objective) into FONF yields ¬α* ∨ not (∀V (bel(V) → β*)), which shows that the combination ¬B is translated using negation as failure, namely not, whereas pure negation ¬ is kept.

In order to show that this translation is truth-preserving, we look at the semantics of FONF and define satisfiability wrt a set of worlds W:

W ⊨_FONF α iff for all w ∈ W: (w, W) ⊨_FONF α,

where the truth value of a FONF-formula wrt a structure (w, W) is defined in the following way:

• If F is objective, then
1. (w, W) ⊨_FONF F iff w ⊨ F.
• else (i.e., if F is non-objective)
2. (w, W) ⊨_FONF ¬F iff (w, W) ⊭_FONF F.
3. (w, W) ⊨_FONF F ∧ G iff (w, W) ⊨_FONF F and (w, W) ⊨_FONF G.
4. (w, W) ⊨_FONF not F iff there is w' ∈ W: (w', W) ⊭_FONF F.

FONF can be seen as an extension of extended logic programs [Gelfond and Lifschitz, 1990].
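The translation * is a straightforward recursion once one first tests whether a subformula is objective (so that clause 1 applies to it as a whole). The Python sketch below is my own illustration on a small tuple AST; the encoding, the helper names, and the omission of quantifier bookkeeping (clause 4) are assumptions, not the paper's definitions.

```python
def objective(F):
    """A formula is objective iff it contains neither B nor not."""
    op = F[0]
    if op == 'lit':
        return True
    if op in ('B', 'not'):
        return False
    return all(objective(g) for g in F[1:])

def add_world(F):
    # clause 1: append the world variable 'V' to every predicate symbol
    op = F[0]
    if op == 'lit':
        return ('lit', F[1], F[2] + ('V',))
    return (op,) + tuple(add_world(g) for g in F[1:])

def star(F):
    if objective(F):
        return add_world(F)
    op = F[0]
    if op == 'neg':
        return ('naf', star(F[1]))            # clause 2: (¬F)* = not F*
    if op in ('and', 'or', 'imp'):
        return (op, star(F[1]), star(F[2]))   # clause 3
    if op == 'B':                             # clause 5: ∀V (bel(V) → F*)
        return ('forall', 'V',
                ('imp', ('lit', 'bel', ('V',)), star(F[1])))
    if op == 'not':                           # clause 6: (not F)* = not (B F)*
        return ('naf', star(('B', F[1])))
    raise ValueError(op)
```

Running this on ¬p(a) ∨ ¬B q(b) reproduces the paper's example: the objective negation ¬p(a) is kept (only a world argument is added), while ¬B q(b) becomes not ∀V (bel(V) → q(b, V)).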
Accordingly, FONF-models extend the semantics of extended logic programs to the first order case: For a FONF-theory T and a set of worlds W, we develop a set of objective formulas T_W from T by

1. deleting all rules in which not α occurs in the body while W ⊨_FONF α holds,
2. deleting all remaining subformulas of the form not α.

Then, W is a FONF-model for T if it consists of all first order models of T_W. A sentence α is entailed by a FONF-theory T, T ⊨_FONF α, iff α is true in all FONF-models of T. Then, we obtain the equivalence between query-answering in MBNF and FONF for FDBs.

Theorem 1 For feasible MBNF-databases T and feasible MBNF-queries α: T ⊨_MBNF α iff T* ⊨_FONF α*.

As a corollary, we get that the translation * preserves completeness for believed sentences in the above sense. The translation proposed in [Lifschitz, 1992] satisfies only one half of the above result, since it only provides completeness for "monotonically answerable queries". Moreover, Lifschitz deals with a more restricted fragment of MBNF, which excludes, for example, the use of negation as failure for database sentences.

A connection calculus for FONF

We develop a calculus for FONF based on the connection method [Bibel, 1987], an affirmative method for proving the validity of a formula in disjunctive normal form (DNF). These formulas are displayed two-dimensionally in the form of matrices. A matrix is a set of sets of literals. Each element of a matrix represents a clause of a formula's DNF. In order to show that a sentence α is entailed by a sentence T, we have to check whether ¬T ∨ α is valid. In the connection method this is accomplished by path checking: A path through a matrix is a set of literals, one from each clause. A connection is an unordered pair of literals with the same predicate symbol but different signs. A connection is complementary if both literals are identical except for their sign.
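The two-step construction of T_W above is a reduct in the style of extended logic programs. A minimal Python sketch of just this construction (my own illustration; the rule encoding and the `satisfied` oracle standing in for W ⊨_FONF are assumptions):

```python
def reduct(T, satisfied):
    """Build the objective rule set T_W from a FONF rule base T.
    A rule is (body, head); a body is a list of ('pos', f) or ('naf', f)
    items; 'satisfied(f)' reports whether W ⊨_FONF f."""
    TW = []
    for body, head in T:
        # step 1: drop the rule if some 'not f' in its body has W ⊨ f
        if any(kind == 'naf' and satisfied(f) for kind, f in body):
            continue
        # step 2: delete the remaining 'not f' subformulas
        TW.append(([f for kind, f in body if kind == 'pos'], head))
    return TW
```

For example, with rules {not p → q, (not q ∧ r) → p} and a W satisfying q but not p, the second rule is deleted and the first loses its negation-as-failure premise, leaving the objective rule → q.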
Now, a formula like ¬T ∨ α is valid iff each path through its matrix contains a complementary connection under a global substitution.

First, we extend the definition of matrices and literals. An NM-literal is an expression not M, where M is an NM-matrix. An NM-matrix is a matrix containing normal and NM-literals. Although these structures seem to be rather complex, we only deal with normal form matrices, since NM-literals are treated in a special way during the deduction process.

The definition of classical normal form matrices relies on the DNF of formulas. Here, we deal with formulas in disjunctive FONF-normal form (FONF-DNF) by treating subformulas like not α as atoms while transforming formulas into DNF. Then these α's are transformed recursively into FONF-DNF and so forth. An NM-matrix M_F represents a quantifier-free FONF-formula F in FONF-DNF as follows.

1. If F is a literal then M_F = {{F}}.
2. If F = not G then M_F = {{not M_G}}.
3. If F = F₁ ∧ ... ∧ F_n then M_F = {C₁ ∪ ... ∪ C_n : C_i ∈ M_{F_i}}.
The deduction algorithm relies on the standard con- nection method, except that if a path contains an MM- literal then the same deduction algorithm is started re- cursively with the corresponding adjunct matrix. Let M be an MM-matrix and let ‘PM be the set of all paths through M. Then the NM-complementarity of M is checked by checking all paths in FM for NM complementarity. This is accomplished by means of the procedure nmc in the following informal way for a set of paths ‘P’ and a matrix M’. nmc(P’, M’) e If ‘P’ = 0, then nmc(P’, M’)=“yes” o else choose p E P’. - If p is classically complementary with connec- tion {Ka, Lu} and unifier u, then nmc(P’, M’) = =((P - {P I {K L) E ~})a, M’) - else * if there exists an MM-literal not N E p such that nmc(P”N, M~)=“no”, then n.mc(P’, M’)=runc(P - {p 1 not N E p}, M’) * else nmc(P’, M’)=“no” Initially, nmc is called with PM and M, namely mnc(P”, M). The n, we obtain the following result. Proposition 1 If the algorithm terminates with “yes”, then the h&i-matrix is MM-complementary. So far, we have only considered quantifier-free FONF- formulas in FONF-DNF. Hut how can an arbitrary FONF-formula F be treated within this method? First of all, F must be transformed into FONF-Skolem nor- mal form, analogously to [Bibel, 19871. We denote the result of FONF-skolemization of a formula F by S(F). Now, FONF-formulas can be represented as matri- ces. If we have a FONF-database, we require that rules containing free variables in the scope of not (which is actually not allowed in FONF) are replaced by the set of their ground instances before skolemixation. However, the above algorithm has its limitations due to its simplicity. First, in MBNF and FONF, it is nec- essary to distinguish between sentences occurring in the database and those serving as queries. Repre- senting a database together with a query as an RM- matrix removes this distinction. Second, the above algorithm cannot deal with FONF-theories possessing multiple FONF-models. 
This requires a separate algorithmic treatment of alternative FONF-models. We address this shortcoming by slightly restricting the definition of feasible queries and databases instead of providing a much more complicated algorithm. Thus, we introduce determinate queries and databases.⁵ Determinate queries are feasible MBNF-queries in DNF which are either objective or consist only of one non-objective disjunct. Determinate databases are FDBs which do not contain circular sets of rules, like {not p → Bq, not q → Bp}, because only such circular sets cause multiple models in FONF. This restriction is not as serious as it might seem at first sight: First, non-objective disjunctive queries can be posed by querying the single disjuncts separately. Second, we doubt that we lose much expressiveness by forbidding circular rules. So, if we consider this kind of databases and queries, we can use the nonmonotonic connection method for query-answering:

Proposition 2 For determinate FONF-theories T and FONF-sentences α: T ⊨_FONF α iff the NM-matrix of S(¬T ∨ α) is NM-complementary.

Together with Theorem 1, we obtain the following.

Theorem 2 For determinate MBNF-databases T and determinate MBNF-queries α: T ⊨_MBNF α iff the NM-matrix of S(¬T* ∨ α*) is NM-complementary.

Hence, we obtain a deduction method for a quite large subset of MBNF: Given a determinate MBNF-database T and -query α, we check whether T ⊨_MBNF α holds by

1. translating T into T* and α into α*,
2. replacing free variables in the database by all constants occurring,
3. skolemizing ¬T* ∨ α* yielding S(¬T* ∨ α*),
4. testing the resulting matrix for NM-complementarity.

⁵This expression will also be used for the corresponding FONF-queries and -databases.
Consider the following MBNF-database T: B( teaches( anne, bio) V teaches( sue, ho)) not teaches(X, bio) + 1 teuches(X,bio) which we write in short notation as B(t(a, b) V t(s, b)) A (not t(X, b) * +(X, b)). Recall that the second conjunct is considered as an ab- breviation for the set of its ground instances. Consider - the following query: Is it true that Anne doesn’t teach biology? This query, say a, is formalized in MBNF as +(a, b). Notice that T and ~2 constitute determinate MBNF- expressions. Now, we have to verify whether TbMBNF CK holds. According to the closed world axiom given by not t(X, b) -+ +(X, b), saying that a person does not teach biology unless proven otherwise, we expect a pos- . . itive answer. Following the four steps above, we first translate T and QI into FONF and obtain the FONF-theory rC VV beZ(V) -4 t(a, b, V) V t(s, b, V) not PV beZ(V) --+ t(X, b, V)] --) +(X, b, V) along with the query QI* = +(a, b, V). Then, the the- ory is negated yielding 15Y. After replacing X by the constants a, s and b6 in -&?, we obtain S(lT*Vcr*) by FONF-skolemization which is (with Skolem constants wi (i = 1,2,3)) beZ(V) A lt(a, b, V) A -4(s, b, V) not [lbeZ(wz) v t(u, b, wq)] A t(u, b, WI) not [lbeZ(ws) V t(s, b, wa)] A t(s, b, WI) +(a, b, WI). This FONF-formula has the following matrix represen- tation (if we ignore the drawn line) - beZ(V) not Nl not N2 with the submatrices Nr = [-beZ(wa) t(u, b, wr)] and Nz = [-beZ(wa) t(s, b, w3)]. It remains to be checked whether this matrix is MM-complementary in order to prove TbMBNF a. The first connection starting from the query +(a, b, ~1) is shown by the drawn line. It remains to ‘be tested, if all paths through the NM-literal not Nl are MM- complementary. So, the adjunct matrix has to be built yielding the following matrix (with Nz as above), which must not be A?U-complementary for a successful proof. 
{bel(V), ¬t(a, b, V), ¬t(s, b, V)}, {¬bel(w2)}, {t(a, b, w2)}, {not N2, t(s, b, w1)}, {¬t(a, b, w1)}

[6] For simplicity, we omit the last case in this example, as it obviously does not contribute to the proof.

During the proof for the above adjunct matrix a copy of the first clause has to be generated. We get the substitution σ = {V1\w1, V2\w2}, where V1 occurs in the first copy and V2 in the second one. The resulting matrix contains the (non-complementary) path

{¬t(a, b, w1), ¬t(s, b, w2), ¬bel(w2), t(a, b, w2), t(s, b, w1), ¬t(a, b, w1)}.

The first two literals stem from the two copies of the first clause of the adjunct matrix. The four others belong to the remaining clauses of the adjunct matrix. Since this path through the adjunct matrix is not complementary, the NM-literal not N1 in the original matrix is NM-complementary. Therefore, all paths through the original matrix are NM-complementary. Accordingly, we have proven that T ⊨_MBNF α holds and thus that Anne doesn't teach biology.

In order to illustrate the difference between queries to and about the database in the presence of the closed world assumption (CWA), consider the query, say β: Is it known that Anne doesn't teach biology? Now, we expect a negative answer, since the used formalization of the CWA only affects objective formulas. It avoids merging propositions about the world (like objective formulas in the database) and propositions about the database, which causes inconsistencies when using the "conventional" CWA [Reiter, 1977] in the presence of incomplete knowledge. β is formalized in MBNF as B¬t(a, b), yielding the skolemized FONF-formula ¬bel(w4) ∨ ¬t(a, b, w4).
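Whether a given ground path is complementary can be checked mechanically. Below is a toy encoding of a non-complementary path through the adjunct matrix, with a literal represented as a (positive?, predicate, arguments) tuple; the encoding is an illustrative assumption, not the paper's representation.

```python
def is_complementary(path):
    """True iff the path contains a literal together with its negation."""
    lits = set(path)
    return any((not sign, pred, args) in lits for (sign, pred, args) in lits)

# a non-complementary path through the adjunct matrix (toy encoding)
path = [(False, "t", ("a", "b", "w1")), (False, "t", ("s", "b", "w2")),
        (False, "bel", ("w2",)),        (True,  "t", ("a", "b", "w2")),
        (True,  "t", ("s", "b", "w1")), (False, "t", ("a", "b", "w1"))]
```

Here is_complementary(path) is False; adding the positive literal t(a, b, w1) would make the path complementary.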
Treating β and the above formula T according to the four aforementioned steps results in the following matrix, with submatrices N1 and N2 as defined above:

{bel(V), ¬t(a, b, V), ¬t(s, b, V)}, {not N1, t(a, b, w1)}, {not N2, t(s, b, w1)}, {¬bel(w4), ¬t(a, b, w4)}

Looking at the three paths without NM-literals, it can easily be seen that at least one of them will never be complementary (regardless of how V is instantiated). Consequently, the matrix is not NM-complementary, which tells us that β is not an MBNF-consequence of T. That is, even though we were able to derive that Anne does not teach biology, we cannot derive that it is known that Anne does not teach biology. This is because the used closed world axioms merely affect what is derivable and not what is known. However, observe that the opposite query, namely ¬B¬t(a, b), is answered positively. Certainly, we could obtain different answers by using different closed world axioms.

The above algorithm has been implemented using a PROLOG implementation of the connection method. The program takes NM-matrices and checks whether they are NM-complementary. It consists of only five PROLOG clauses. Interestingly, the first four clauses constitute a full first order theorem prover, and merely the fifth clause deals with negation as failure. This extremely easy implementation is a benefit of the restriction to determinate queries and databases.

404 Beringer

Conclusion

MBNF [Lifschitz, 1992] is very expressive and thus very intractable. Therefore, we have presented a feasible approach to minimal belief and negation as failure by relying on the fact that in many cases negation as failure is expressive enough to capture additionally the (nonmonotonic) notion of minimal belief. We have identified a substantial subsystem of MBNF: feasible databases along with feasible queries. This subsystem allows for a truth-preserving translation into FONF, a first order logic with negation as failure. However, feasibility has its costs.
For instance, FONF does not allow for "quantifying-in" not. Also, we have given a semantics of FONF by extending the semantics of extended logic programs [Gelfond and Lifschitz, 1990].

We have developed an extended connection calculus for FONF, which has been implemented in PROLOG. To our knowledge, this constitutes the first connection calculus integrating negation as failure. We wanted to keep our calculus along with its algorithm as simple as possible, so that it can be easily adopted by existing implementations of the connection method, like SETHEO [Letz et al., 1992]. The preservation of simplicity has resulted in the restriction to determinate theories, which possess only single FONF-models. This restriction is comparable with the one found in extended logic programming, where one restricts oneself to well-behaved programs with only one model.

As a result, we can compute determinate queries to determinate databases in MBNF. This subset of MBNF is still expressive enough for many purposes: Apart from asking what a database knows, determinate queries and databases allow for expressing default rules, axiomatizing the closed world assumption, and integrity constraints. Also, it seems that the restriction to determinate queries and databases can be dropped in the presence of a more sophisticated algorithm treating multiple FONF-models separately.

Moreover, our approach is still more expressive than others: First, FONF is more expressive than PROLOG: since it is built on top of a first order logic, it allows for integrating disjunctions and existential quantification. Second, Reiter [1990] has proposed another approach, in which databases are treated as first order theories, whereas queries may include an epistemic modal operator. As shown in [Lifschitz, 1992], this is equivalent to testing whether BT ⊨_MBNF BF holds for objective theories T and positive formulas F of MBNF.
Obviously, Reiter's approach is also subsumed by determinate queries and databases, so that we can use our approach to implement his system as well. Although we cannot account for MBNF in its entirety, our approach still deals with a very expressive and, hence, substantial subset of MBNF. Moreover, from the perspective of conventional theorem proving, our translation has shown how the epistemic facet of negation as failure can be integrated into automated theorem provers. To this end, it is obviously possible to implement FONF by means of other deduction methods, like resolution. In particular, it remains future work to compare resolution-based approaches to negation as failure to the approach presented here.

Acknowledgements

We thank S. Brüning, M. Lindner, A. Rothschild, S. Schaub, and M. Thielscher for useful comments on earlier drafts of this paper. This work was supported by DFG, MPS (HO 1X)4/3-1), and by BMFT, TASSO (ITW 8900 C2).

References

Bibel, W. 1987. Automated Theorem Proving. Vieweg, 2nd edition.
Gelfond, M. and Lifschitz, V. 1990. Logic programs with classical negation. In Proc. International Conference on Logic Programming, 579-597.
Kowalski, R. 1978. Logic for data description. In Gallaire, H. and Minker, J., eds., Logic and Databases, Plenum, 77-103.
Letz, R.; Bayerl, S.; Schumann, J.; and Bibel, W. 1992. SETHEO: A high-performance theorem prover. Journal of Automated Reasoning.
Levesque, H. 1984. Foundations of a functional approach to knowledge representation. Artificial Intelligence 23:155-212.
Lifschitz, V. 1991. Nonmonotonic databases and epistemic queries. In Mylopoulos, J. and Reiter, R., eds., Proc. International Joint Conference on Artificial Intelligence, Morgan Kaufmann, 381-386.
Lifschitz, V. 1992. Minimal belief and negation as failure. Submitted.
Lin, F. and Shoham, Y. 1990. Epistemic semantics for fixed-point nonmonotonic logics. In Parikh, R., ed., Proc.
Theoretical Aspects of Reasoning about Knowledge, 111-120.
McCarthy, J. 1980. Circumscription - a form of non-monotonic reasoning. Artificial Intelligence 13(1-2):27-39.
Reiter, R. 1977. On closed world data bases. In Gallaire, H. and Nicolas, J.-M., eds., Logic and Databases, Plenum, 119-140.
Reiter, R. 1980. A logic for default reasoning. Artificial Intelligence 13(1-2):81-132.
Reiter, R. 1990. On asking what a database knows. In Lloyd, J. W., ed., Computational Logic, Springer, 96-113.
A context-based framework for default logics

Philippe Besnard
IRISA, Campus de Beaulieu
F-35042 Rennes Cedex, France
besnard@irisa.fr

Torsten Schaub
TH Darmstadt, Alexanderstraße 10
D-6100 Darmstadt, Germany
torsten@intellektik.informatik.th-darmstadt.de

Abstract

We present a new context-based approach to default logic, called contextual default logic. The approach extends the notion of a default rule and supplies each extension with a context. Contextual default logic allows for embedding all existing variants of default logic along with more traditional approaches like the closed world assumption. A key advantage of contextual default logic is that it provides a syntactical instrument for comparing existing default logics in a unified setting. In particular, it reveals that existing default logics mainly differ in the way they deal with an explicit or implicit underlying context.

Thus, the primary purpose of this work is to integrate the different variants of default logic in a more general but uniform system, which combines the expressiveness of the various default logics. The basic idea is twofold. First, we supply each default extension (i.e. a set of default conclusions) with an underlying context. Second, we extend the notion of a default rule in order to allow for a variety of different application conditions which arise naturally from the distinction between the initial set of facts, the default extension at hand, and its context.

Introduction

Default logic has become the prime candidate for formalizing consistency-based default reasoning since its introduction in [Reiter, 1980]. Since then, several variants of default logic have been proposed, e.g. [Lukaszewicz, 1988; Brewka, 1991; Delgrande et al., 1992]. Each such variant rectified purportedly counterintuitive features of the original approach. However, the evolution of default logic is diverging.
Although it has resulted in diverse variants sharing many interesting properties, it has altered the notion of a default rule. In particular, most of the aforementioned variants deal with a different notion of consistency. For instance, Reiter's default logic employs some sort of local consistency, whereas others employ some sort of global consistency.

Notions of consistency in default logics

Classical default logic was defined by Reiter in [1980] as a formal account of reasoning in the absence of complete information. It is based on first-order logic, whose sentences are hereafter simply referred to as formulas (instead of closed formulas). In default logics, default knowledge is incorporated by means of so-called default rules. A default rule is any expression of the form α : β / γ, where α, β and γ are formulas. α is called the prerequisite, β the justification, and γ the consequent of the default rule. Accordingly, a default theory (D, W) consists of a set of formulas W and a set of default rules D. Informally, an extension of the initial set of facts W is defined as the set of all formulas derivable from W by applying classical inference rules and all applicable default rules. Usually, a default rule α : β / γ is applicable if its prerequisite α is derivable and its justification β is consistent in a certain way.

Up to now, we are then compelled to choose among one of the respective variants whenever we want to represent default knowledge. At first sight, this seems to be a good solution, since we may select one of the variants depending on its properties. However, our choice fixes the notion of a default rule. More freedom would be desirable: We should not be forced to commit ourselves to just a single variant of default logic, because all facets of default logic are worth considering. Instead, an integrated approach is proposed below, which is based on a very general notion of a default rule.
In all "conventional" default logics, the prerequisite α of a default rule α : β / γ is checked wrt an extension E by requiring α ∈ E. However, all of the aforementioned variants differ in the way they account for the consistency of the justification β. For instance, in classical default logic [Reiter, 1980] the consistency of the justification β is checked wrt the extension E by ¬β ∉ E, whereas in constrained default logic [Delgrande et al., 1992] the same is done wrt a set of constraints C, containing the extension E, by checking ¬β ∉ C.

406 Besnard

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

In default logics, there are thus two extreme notions of consistency: individual and joint consistency. The former one is employed in classical default logic, whereas the latter can be found in cumulative and constrained default logic. Individual consistency requires that no justification of an applying default rule is contradictory with a given extension, whereas joint consistency stipulates that all justifications of all applying default rules are jointly consistent with the extension at hand. As an example, consider the default theory

( { :B / C ,  :¬B / D } , ∅ )    (1)

In classical default logic, this default theory has one extension Th({C, D}). Both default rules apply, although they have contradictory justifications. This is because each justification is separately consistent with Th({C, D}). In this case, the extension is somehow embedded in a "context" which gathers two incompatible "subcontexts": one containing the extension and the justification of the first default rule, Th({C, D, B}), and another one containing the justification of the second default rule, Th({C, D, ¬B}).

This is different from the approach taken in constrained default logic. There, we obtain two constrained extensions.
We obtain one extension Th({C}) which is supplied with a set of constraints Th({C, B}) consisting of the justification B and the consequent C of the first default rule. We also obtain another extension Th({D}) whose constraints Th({D, ¬B}) contain the justification ¬B and the consequent D of the second default rule. Each set of constraints contains the extension and additionally all justifications of all applying default rules. Thus, each extension is embedded in a "context" given by the set of constraints.

In order to combine the variants of default logic, we have to compromise between the notions of individual and joint consistency. In particular, we have to deal with joint consistency requirements in the presence of inconsistent individual consistency requirements. Therefore, we allow for "contexts" containing contradictory formulas, like B and ¬B as in the previous example in classical default logic, without containing all possible formulas. Thus, we admit contexts which are not deductively closed. In the previous example, the extension Th({C, D}) will then have the context Th({C, D, B}) ∪ Th({C, D, ¬B}), which is composed of two incompatible subcontexts. A useful notion is then that of pointwise closure Th_S(T).

Definition 1  Let T and S be sets of formulas. The pointwise closure of T under S is defined as

Th_S(T) = ∪_{φ∈T} Th(S ∪ {φ}).

If S is a singleton set {φ}, we simply write Th_φ(T) instead of Th_{φ}(T). Given two sets of formulas T and S, we say that T is pointwisely closed under S iff T = Th_S(T). We simply say that T is pointwisely closed whenever T = Th_τ(T) for any tautology τ. Observe that the aforementioned context can now be represented as the pointwise closure of {B, ¬B} under {C, D}, namely Th_{C,D}({B, ¬B}).

Contextual default logic

We introduce a new approach to default logic by extending the notions of default rules and extensions. The resulting system is called contextual default logic.
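Definition 1 can be illustrated over finite sets of string-encoded formulas, with a toy stand-in for Th that merely flags an inconsistency; the string encoding and the FALSUM marker are illustrative assumptions, not part of the formalism.

```python
def th(formulas):
    """Toy deductive closure: only detects a clash between "x" and "-x"."""
    closed = set(formulas)
    if any(("-" + f) in closed for f in closed):
        closed.add("FALSUM")  # an inconsistent set entails everything
    return closed

def pointwise_closure(T, S):
    """Th_S(T): the union over phi in T of Th(S ∪ {phi})."""
    result = set()
    for phi in T:
        result |= th(set(S) | {phi})
    return result
```

For the context of example (1), pointwise_closure({"B", "-B"}, {"C", "D"}) keeps B and ¬B in separate subcontexts, so no inconsistency arises, whereas the plain closure th({"B", "-B", "C", "D"}) is inconsistent.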
We consider three sets of formulas: a set of facts W, an extension E, and a certain context C such that W ⊆ E ⊆ C. The set of formulas C is somehow established from the facts, the default conclusions (i.e. the consequents of the applied default rules), as well as all underlying consistency assumptions (i.e. the justifications of all applied default rules). That is, our approach trivially captures the above application conditions for "conventional" default rules, e.g. α ∈ E and ¬β ∉ E in the case of classical default logic.

This approach allows for even more ways of forming application conditions of default rules. Consider a formula φ and three consistent, deductively closed sets of formulas W, E, and C such that W ⊆ E ⊆ C. Six more or less strong application conditions are obtained which can be ordered from left to right by decreasing strength, whereby > is read as "implies":

φ ∈ W  >  φ ∈ E  >  φ ∈ C  >  ¬φ ∉ C  >  ¬φ ∉ E  >  ¬φ ∉ W

We can think of W as a deductively closed set of facts, E as a default extension of W, and C as the above mentioned context for E. Then, the first condition φ ∈ W stands for first-order derivability from the facts W. The second condition φ ∈ E stands for derivability from W using first-order logic and certain default rules. This is used in conventional default logics as the test for the prerequisite of a default rule. The third condition, φ ∈ C, is the strangest one. It expresses "membership in a context of reasoning". The last three conditions are consistency conditions. The fourth condition ¬φ ∉ C corresponds to the consistency condition used in constrained default logic, and the fifth one, ¬φ ∉ E, is used in classical default logic. Finally, the last condition ¬φ ∉ W is the one used for the closed world assumption [Reiter, 1977], where it is restricted to ground negative literals. This variety of application conditions motivates an extended notion of a default rule.
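The decreasing-strength chain can be checked on finite sets with a toy string-negation encoding (an illustrative model; the actual conditions are stated over consistent, deductively closed sets):

```python
def conditions(phi, W, E, C):
    """The six application conditions, strongest first."""
    neg = phi[1:] if phi.startswith("-") else "-" + phi
    return [phi in W, phi in E, phi in C,
            neg not in C, neg not in E, neg not in W]

# with consistent W ⊆ E ⊆ C, each condition implies the next one
W, E, C = {"A"}, {"A", "B"}, {"A", "B", "D"}
for phi in ["A", "B", "D", "F"]:
    checks = conditions(phi, W, E, C)
    assert all((not a) or b for a, b in zip(checks, checks[1:]))
```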
Definition 2  A contextual default rule δ is an expression of the form

αW|αE|αC : βC|βE|βW / γ

where αW, αE, αC, βC, βE, βW, and γ are formulas. αW, αE, αC are called the W-, E-, and C-prerequisites, also noted Prereq_W(δ), Prereq_E(δ), Prereq_C(δ); βC, βE, βW are called the C-, E-, and W-justifications, also noted Justif_C(δ), Justif_E(δ), Justif_W(δ); and γ is called the consequent, also noted Conseq(δ).[1]

The six antecedents of a contextual default rule are to be treated along the above intuitions. Accordingly, a contextual default theory is a pair (D, W), where D is a set of contextual default rules and W is a deductively closed[2] set of formulas. Now, a contextual extension is to be a pair (E, C), where E is a deductively closed set of formulas and C is a pointwisely closed set of formulas, as follows.

Definition 3  Let (D, W) be a contextual default theory. For any pair of sets of formulas (T, S) let Λ(T, S) be the pair of smallest sets of formulas (T′, S′) such that W ⊆ T′ ⊆ S′ and the following condition holds: For any αW|αE|αC : βC|βE|βW / γ ∈ D, if
1. αW ∈ W
2. αE ∈ T′
3. αC ∈ S′
4. ¬βC ∉ S
5. ¬βE ∉ T
6. ¬βW ∉ W
then
7. Th_γ(T′) ⊆ T′
8. Th_{βE}(T′) ⊆ S′
9. Th_{βC}(S′) ⊆ S′
A pair of sets of formulas (E, C) is a contextual extension of (D, W) iff Λ(E, C) = (E, C).

Notice that the operator Λ is in fact parameterized by (D, W). Furthermore, observe that Conditions 1-6 basically correspond to those given above. Intuitively, we start from (W, W) (i.e. we take the facts W as our initial version of E and C) and try to apply a contextual default rule by checking Conditions 1-6; if we are successful, we enforce Conditions 7-9, i.e. we add γ to our current version of E, and we add φ ∧ βE to our current version of C for each φ in the final E, and φ ∧ βC for each φ in the final C.

[1] These projections extend to sets of default rules in the obvious way (e.g. Justif_E(Δ) = ∪_{δ∈Δ} {Justif_E(δ)}).
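Conditions 1-6 of Definition 3 can be sketched over finite sets of atoms, with "T" standing for the tautology in empty slots and "-x" for negation; this encoding is an illustrative assumption, not the formalism itself.

```python
from typing import NamedTuple

class CtxDefault(NamedTuple):
    """The six antecedents and the consequent of Definition 2."""
    aW: str; aE: str; aC: str; bC: str; bE: str; bW: str; g: str

def applicable(d, W, E, C):
    """Conditions 1-6 of Definition 3 over finite atom sets."""
    neg = lambda x: x[1:] if x.startswith("-") else "-" + x
    holds = lambda a, S: a == "T" or a in S
    consistent = lambda b, S: b == "T" or neg(b) not in S
    return (holds(d.aW, W) and holds(d.aE, E) and holds(d.aC, C) and
            consistent(d.bC, C) and consistent(d.bE, E) and
            consistent(d.bW, W))
```

For instance, a rule with W-prerequisite A and E-justification B is applicable when W = E = C = {A}, but blocked as soon as -B enters the extension.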
Consider the contextual default theory

( { A|⊤|⊤ : ⊤|B|⊤ / C ,  ⊤|C|⊤ : E|¬B|⊤ / D } , Th({A}) )

along with its only contextual extension (E, C), where

E = Th({A, C, D})
C = Th({A, C, D, E, B}) ∪ Th({A, C, D, E, ¬B}).

E represents the extension and C provides its context. This contextual extension is generated from the facts by applying first the first contextual default rule and then the second one. Now, A|⊤|⊤ : ⊤|B|⊤ / C applies if its prerequisite A is monotonically derivable (i.e. if A is derivable without contextual default rules, according to Condition 1 in Definition 3) and if its E-justification B is consistent with the extension E (according to Condition 5). In other words, B has to be individually consistent. This being the case, we derive C. That is, C is nonmonotonically derivable by means of the first contextual default rule (cf. Condition 2). Thus, C establishes the prerequisite of the second contextual default rule, ⊤|C|⊤ : E|¬B|⊤ / D. In order to derive D, we have to verify the consistency of the two justifications E and ¬B, i.e. E has to be jointly consistent (i.e., according to Condition 4, it has to be consistent with the context C), whereas ¬B has to be individually consistent (i.e., according to Condition 5, it has to be consistent with the extension E). Since this is fulfilled, we obtain the above contextual extension satisfying our consistency requirements.

Observe that the context C is composed of two incompatible subcontexts, Th({A, C, D, E, B}) and Th({A, C, D, E, ¬B}). All such subcontexts contain a common "kernel" given by the extension and all jointly consistent C-justifications, here Th({A, C, D}) and E. The E-justifications, B and ¬B, create different subcontexts. Why is the joint consistency of E not affected by these two incompatible formulas? This is because in our approach joint consistency only requires the consistency of a justification with each subcontext in turn, whereas individual consistency requires the consistency of a justification with at least one such subcontext.
Embedding default logics

We show that classical [Reiter, 1980], justified [Lukaszewicz, 1988] and constrained default logic [Delgrande et al., 1992] are embedded in contextual default logic. Since cumulative default logic [Brewka, 1991] is closely connected to constrained default logic, neglecting representational issues, we obtain that variant too.

As mentioned in the introductory section, classical default logic employs a sort of local consistency (which we also called individual consistency), as can be seen from the following definition of classical extensions.

Definition 4  Let (D, W) be a default theory. For any set of formulas T let Γ(T) be the smallest set of formulas T′ such that
1. W ⊆ T′,
2. Th(T′) = T′,
3. For any α : β / γ ∈ D, if α ∈ T′ and ¬β ∉ T then γ ∈ T′.
A set of formulas E is a classical extension of (D, W) iff Γ(E) = E.

In order to have a comprehensive example throughout the text, we extend default theory (1) by introducing an additional default rule:

( { :B / C ,  :¬B / D ,  :¬C ∧ ¬D / E } , ∅ )    (2)

This default theory still has one classical extension Th({C, D}). As shown above, the first two default rules apply, although they have contradictory justifications, and then block the third default rule.

In order to relate classical with contextual default logic, let us identify default theories in classical default logic with contextual default theories as follows.

Definition 5  Let (D, W) be a default theory. Define

Φ_CDL(D, W) = ( { ⊤|α|⊤ : ⊤|β|⊤ / γ | α : β / γ ∈ D } , Th(W) ).

Then, classical default logic corresponds to this fragment of contextual default logic.

Theorem 1  Let (D, W) be a default theory and let E be a set of formulas. Then, E is a classical extension of (D, W) iff (E, C) is a contextual extension of Φ_CDL(D, W) for some C.

Given a classical extension E, the context C is the pointwise closure of the justifications of the generating[3] default rules under E.

[2] This is no real restriction, but it simplifies matters.
[3] Informally, the generating default rules are those which apply in view of E.
Consider the contextual counterpart of default theory (2):

( { ⊤|⊤|⊤ : ⊤|B|⊤ / C ,  ⊤|⊤|⊤ : ⊤|¬B|⊤ / D ,  ⊤|⊤|⊤ : ⊤|¬C ∧ ¬D|⊤ / E } , Th(∅) )

We obtain one contextual extension

( Th({C, D}), Th({C, D, B}) ∪ Th({C, D, ¬B}) )

whose extension corresponds to the classical extension of default theory (2). The common kernel of the two subcontexts of the context is given by the extension. In addition, the first subcontext, Th({C, D, B}), contains the E-justification of the first contextual default rule, whereas the second one, Th({C, D, ¬B}), additionally contains the E-justification of the second contextual default rule. As in classical default logic, the third contextual default rule is blocked by the other ones.

Further evidence for the generality of our approach is that it also captures justified default logic [Lukaszewicz, 1988]. In this approach, the justifications of the applying default rules are attached to extensions in order to strengthen the applicability condition of default rules. A justified extension is defined as follows.

Definition 6  Let (D, W) be a default theory. For any pair of sets of formulas (T, S) let Ψ(T, S) be the pair of smallest sets of formulas (T′, S′) such that
1. W ⊆ T′,
2. Th(T′) = T′,
3. For any α : β / γ ∈ D, if α ∈ T′ and ∀η ∈ S ∪ {β}. T ∪ {γ} ∪ {η} ⊬ ⊥, then γ ∈ T′ and β ∈ S′.
A pair of sets of formulas (E, J) is a justified extension of (D, W) iff Ψ(E, J) = (E, J).

Let us return to default theory (2). This default theory has two justified extensions: (Th({C, D}), {B, ¬B}) and (Th({E}), {¬C ∧ ¬D}). The first one corresponds to the extension obtained in classical default logic. However, it is supplied with a set of justifications, {B, ¬B} (which, incidentally, is inconsistent). The second extension stems from applying the third default rule, whose justification blocks the two other default rules by contradicting their consequents.
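The embeddings of the variants differ only in how the six slots of a contextual default rule are filled. The following toy sketch, over formula strings in the slot order (αW, αE, αC, βC, βE, βW, γ), follows the pattern of Definition 5 and of Definitions 7 and 9 below; the string encoding is an illustrative assumption.

```python
def conj(*formulas):
    """Conjunction of formula strings, dropping tautologies (toy helper)."""
    parts = [f for f in formulas if f != "T"]
    return " & ".join(parts) if parts else "T"

def embed(alpha, beta, gamma, variant):
    """Translate a default rule alpha : beta / gamma into a contextual
    default rule, following the pattern of Definitions 5, 7 and 9."""
    if variant == "classical":    # individual consistency of beta
        return ("T", alpha, "T", "T", beta, "T", gamma)
    if variant == "justified":    # joint gamma, individual beta & gamma
        return ("T", alpha, "T", gamma, conj(beta, gamma), "T", gamma)
    if variant == "constrained":  # joint consistency of beta & gamma
        return ("T", alpha, "T", conj(beta, gamma), "T", "T", gamma)
    raise ValueError(variant)
```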
Now, let us identify default theories in justified default logic with contextual default theories:

Definition 7  Let (D, W) be a default theory. Define

Φ_JDL(D, W) = ( { ⊤|α|⊤ : γ|β ∧ γ|⊤ / γ | α : β / γ ∈ D } , Th(W) ).

This leads to the following correspondence.

Theorem 2  Let (D, W) be a default theory and let E be a set of formulas. Then, (E, J) is a justified extension of (D, W) for some J iff (E, C) is a contextual extension of Φ_JDL(D, W) for some C.

J consists of the justifications of the generating[4] default rules for E, whereas C is given by the pointwise closure of the same set of justifications under E.

[4] In the sense of justified default logic.

It is interesting to observe how the relatively complicated consistency check in justified default logic is accomplished in contextual default logic. For a justified extension (E, J) and a default rule α : β / γ the condition is ∀η ∈ J ∪ {β}. E ∪ {γ} ∪ {η} ⊬ ⊥. In fact, it is two-fold: It consists of a joint and an individual consistency check, i.e. ∀η ∈ J. E ∪ {γ} ∪ {η} ⊬ ⊥ and E ∪ {γ} ∪ {β} ⊬ ⊥. Transposed to the case of a contextual extension (E, C), the two subconditions are ¬γ ∉ C and ¬(β ∧ γ) ∉ E. The first check cares about the joint consistency of the consequent γ, whereas the second one checks whether the conjunction of the justification and consequent of the default rule is individually consistent.

Now, let us see what happens to default theory (2) if we apply the translation Φ_JDL:

( { ⊤|⊤|⊤ : C|B ∧ C|⊤ / C ,  ⊤|⊤|⊤ : D|¬B ∧ D|⊤ / D ,  ⊤|⊤|⊤ : E|¬C ∧ ¬D ∧ E|⊤ / E } , Th(∅) )

As in justified default logic, we get two contextual extensions: (Th({C, D}), Th({C, D, B}) ∪ Th({C, D, ¬B})) and (Th({E}), Th({E, ¬C, ¬D})), whose extensions correspond to the extensions obtained in justified default logic. Observe that the respective subcontexts differ exactly in the justifications attached to the extensions in justified default logic.

Finally, we turn to constrained default logic [Delgrande et al., 1992], which employs a sort of joint consistency.
In constrained default logic, an extension comes with a set of constraints. A constrained extension is defined as follows.

Definition 8  Let (D, W) be a default theory. For any set of formulas S let Υ(S) be the pair of smallest sets of formulas (T′, S′) such that
1. W ⊆ T′ ⊆ S′,
2. T′ = Th(T′) and S′ = Th(S′),
3. For any α : β / γ ∈ D, if α ∈ T′ and S ∪ {β} ∪ {γ} ⊬ ⊥, then γ ∈ T′ and β, γ ∈ S′.
A pair of sets of formulas (E, C) is a constrained extension of (D, W) iff Υ(C) = (E, C).

As we have seen above, constrained default logic detects inconsistencies among the justifications of default rules. Thus, we obtain three constrained extensions of default theory (2): (Th({C}), Th({C, B})), (Th({D}), Th({D, ¬B})), and (Th({E}), Th({E, ¬C, ¬D})). They are formed as described above.

A default theory in constrained default logic is identified with a contextual default theory as follows.

Definition 9  Let (D, W) be a default theory. Define

Φ_CoDL(D, W) = ( { ⊤|α|⊤ : β ∧ γ|⊤|⊤ / γ | α : β / γ ∈ D } , Th(W) ).

This yields the following correspondence.

Theorem 3  Let (D, W) be a default theory and let E and C be sets of formulas. Then, (E, C) is a constrained extension of (D, W) iff (E, C) is a contextual extension of Φ_CoDL(D, W).

Notice that C is always deductively closed whenever (E, C) is an extension in either sense.

Consider the contextual counterpart of default theory (2) from the perspective of constrained default logic:

( { ⊤|⊤|⊤ : B ∧ C|⊤|⊤ / C ,  ⊤|⊤|⊤ : ¬B ∧ D|⊤|⊤ / D ,  ⊤|⊤|⊤ : ¬C ∧ ¬D ∧ E|⊤|⊤ / E } , Th(∅) )

As a result, we obtain three contextual extensions: (Th({C}), Th({C, B})), (Th({D}), Th({D, ¬B})),
In general, W-prerequisites should be preferred over E-prerequisites whenever a prerequisite has to be verified, ie. whenever it should not be deriv- able by default inferences. This cannot be modelled in conventional default logics, since they do not distin- guish sions. between monotonic and nonmonotonic conclu- As an example, consider the assertion “usually, we - can transplant an organ provided that the person is proven to be dead”. Of course, the antecedent of this rule should be more than merely concluded by default. For instance, a person whose body is fully covered with a medical blanket is usually dead, but it takes more evidence for doctors to remove organs. Now, the above rule can be formalized by means of the contex- tual default rule v, saying that an organ, 0, can be transplanted, if this is consistent with the current context, and provided that the death, D, of the per- son has been verified. Importantly, adding the contex- tual default rule 9 (saying that a person whose body is covered, C, with a blanket is usually dead, D) does not allow w to apply, even in the case where w = z%({C}). Gprerequisites are a means for weakening an- tecedents of default rules. This is because a G prerequisite allows us not only to refer to default con- clusions but also to their underlying consistency as- sumptions: A Cprerequisite is satisfied iff it belongs to some subcontext. Accordingly, certain contextual de- fault rules can only be applied .if a certain context has been established. For instance, a contextual default rule “:I$ may establish, without actually asserting, a consistency assumption A on which other contextual default rules, like w, rely. Let us now turn to the difference between G and E-justifications of contextual default rules. Notably, it can serve for imposing priorities between two implicit assumptions. This cannot be modelled easily in con- ventional default logics. 
For instance, in default theory (1) a precedence among the two implicit assumptions can be modelled in contextual default logic in a very straightforward way, by weakening the implicit assumption B compared to its negation:

( { ⊤|⊤|⊤ : ⊤|B|⊤ / C ,  ⊤|⊤|⊤ : ¬B|⊤|⊤ / D } , Th(∅) )

This yields one contextual extension, (Th({C}), Th({B, C})).

The use of W-justifications is closely related to CWA, the closed world assumption [Reiter, 1977]. CWA has been introduced in order to complete a given set of facts W. In CWA, a ground negative literal is derivable iff the original atom is not derivable from W. Considering a database about taxpayers, for instance, an individual is not a dead person unless stated otherwise. Given no other knowledge about an individual, we derive that he is not dead. This can be modelled by means of the contextual default rule ⊤|⊤|⊤ : ⊤|⊤|¬D / ¬D.

Contextual default logic: The formal theory

In the sequel, we give alternative characterizations of contextual extensions and describe their structure in more detail. First, we define the set of generating contextual default rules.

Definition 10  Let (D, W) be a contextual default theory and T and S sets of formulas. The set of generating contextual default rules for (T, S) wrt (D, W) is defined as

GD_(T,S) = { αW|αE|αC : βC|βE|βW / γ ∈ D | αW ∈ W, αE ∈ T, αC ∈ S, ¬βC ∉ S, ¬βE ∉ T, ¬βW ∉ W }

Now, we can make precise the claim made before Definition 3: In a contextual extension (E, C), the set E is deductively closed and the set C is pointwisely closed.

Theorem 4  Let (E, C) be a contextual extension of (D, W) and Δ = GD_(E,C). Then,

Th(W ∪ Conseq(Δ)) = Th(E) = E
Th_{E ∪ Justif_C(Δ)}(Justif_E(Δ)) = Th_E(C) = C

The first equation shows that extensions of contextual default theories are formed in the same way as in conventional default logics. That is, they consist of the initial facts along with the consequents of all applying contextual default rules. The second equation describes the respective contexts. A context is the pointwise closure of the E-justifications of the applying contextual default rules (corresponding to the individual consistency requirements) under the extension and the C-justifications of the applying contextual default rules (corresponding to the joint consistency requirements). It follows that whenever (E, C) is a contextual extension, C contains the deductive closure of E and all formulas involved in joint consistency requirements. In symbols, Th(E ∪ Justif_C(GD_(E,C))) ⊆ C. Since this set is shared by all subcontexts of a context, we call it the kernel of a context.
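The characterization of extensions can be checked computationally on finite sets of atoms. The sketch below iterates in the spirit of Theorem 5 below, flattening all subcontexts into one set and omitting deductive closure; rules are 7-tuples in the slot order of Definition 2 with "T" as tautology and "-x" as negation, and the whole encoding is an illustrative assumption.

```python
def is_contextual_extension(rules, W, E, C):
    """Iterate E_i, C_i from W and compare the fixpoint with (E, C)."""
    neg = lambda x: x[1:] if x.startswith("-") else "-" + x
    holds = lambda a, S: a == "T" or a in S
    consistent = lambda b, S: b == "T" or neg(b) not in S
    Ei, Ci = set(W), set(W)
    while True:
        # generating rules: prerequisites wrt the partial sets,
        # consistency wrt the candidate pair (E, C)
        delta = [r for r in rules
                 if holds(r[0], W) and holds(r[1], Ei) and holds(r[2], Ci)
                 and consistent(r[3], C) and consistent(r[4], E)
                 and consistent(r[5], W)]
        E_next = set(W) | {r[6] for r in delta}
        C_next = E_next | {r[3] for r in delta if r[3] != "T"} \
                        | {r[4] for r in delta if r[4] != "T"}
        if (E_next, C_next) == (Ei, Ci):
            return Ei == set(E) and Ci == set(C)
        Ei, Ci = E_next, C_next
```

For the classical embedding of theory (1), the candidate E = {C, D} with flattened context {C, D, B, -B} reproduces itself, while E = {C} with context {C, B} does not.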
A context is the pointwise closure of the E-justifications of the applying contextual default rules (corresponding to the individual consistency requirements) under the extension and the C-justifications of the applying contextual default rules (corresponding to the joint consistency requirements). It follows that whenever (E, C) is a contextual extension, C contains the deductive closure of E and all formulas involved in joint consistency requirements. In symbols,

Th(E ∪ Justif_C(GD^{(E,C)}_{(D,W)})) ⊆ C.

Since this set is shared by all subcontexts of a context, we call it the kernel of the context.

Theorem 5 Let (D, W) be a contextual default theory and let E and C be sets of formulas. Define E₀ = W and C₀ = W, and for i ≥ 0

Δᵢ = { α_W | α_E | α_C : β_C | β_E | β_W / γ ∈ D | α_W ∈ W, α_E ∈ Eᵢ, α_C ∈ Cᵢ, ¬β_C ∉ C, ¬β_E ∉ E, ¬β_W ∉ W }
Eᵢ₊₁ = Th(W ∪ Conseq(Δᵢ))
Cᵢ₊₁ = Th_{Eᵢ ∪ Justif_C(Δᵢ)}(Justif_E(Δᵢ))

Then, (E, C) is a contextual extension of (D, W) iff (E, C) = (∪ᵢ₌₀^∞ Eᵢ, ∪ᵢ₌₀^∞ Cᵢ).

The extension E is built by successively introducing the consequents of all applying contextual default rules. Also, the deductive closure is computed at each stage. For each partial context Cᵢ₊₁, the previous partial extension Eᵢ is unioned with the C-justifications of all applying contextual default rules. This set is unioned in turn with each E-justification of all applying contextual default rules. Again, the deductive closure is computed when appropriate. In this way, each partial context Cᵢ₊₁ is built upon the kernel of the previous partial context, Th(Eᵢ ∪ Justif_C(Δᵢ)).

A possible worlds semantics

In analogy to [Besnard and Schaub, 1993], we employ Kripke structures for characterizing contextual extensions. A Kripke structure has a distinguished world, the "actual" world, and a set of worlds accessible from it. The idea is roughly as follows.
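Theorem 5's fixpoint characterization can be mimicked in a toy setting. The sketch below is ours and makes strong simplifying assumptions: formulas are single literals (negation written `~`), and deductive closure degenerates to plain set union, so it only illustrates the shape of the iteration, not the full logic.

```python
from dataclasses import dataclass

def neg(l):
    """Literal negation: "p" <-> "~p"."""
    return l[1:] if l.startswith("~") else "~" + l

@dataclass
class CtxRule:   # alpha_W | alpha_E | alpha_C : beta_C | beta_E | beta_W / gamma
    aw: tuple
    ae: tuple
    ac: tuple
    bc: tuple
    be: tuple
    bw: tuple
    gamma: str

def is_contextual_extension(w, rules, e, c):
    """Check a candidate (E, C) against the Theorem 5 iteration,
    with Th(...) replaced by set union (a simplifying assumption)."""
    ei, ci = set(w), set(w)
    while True:
        applying = [r for r in rules
                    if set(r.aw) <= set(w) and set(r.ae) <= ei and set(r.ac) <= ci
                    and all(neg(b) not in c for b in r.bc)   # joint consistency
                    and all(neg(b) not in e for b in r.be)   # individual consistency
                    and all(neg(b) not in set(w) for b in r.bw)]
        e_next = set(w) | {r.gamma for r in applying}
        c_next = ei | {b for r in applying for b in r.bc + r.be}
        if (e_next, c_next) == (ei, ci):
            break
        ei, ci = e_next, c_next
    return ei == set(e) and ci == set(c)

# ||: |B| / C  (E-justification B)  versus  ||: ~B|| / D  (C-justification ~B):
r1 = CtxRule((), (), (), (), ("B",), (), "C")
r2 = CtxRule((), (), (), ("~B",), (), (), "D")
```

Under these assumptions the candidate ({C}, {B, C}) passes the fixpoint test: the C-justification ~B of the second rule is blocked because B sits in the context.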
In a class of Kripke structures, the actual worlds characterize an extension, whereas the accessible worlds characterize its context, consisting of a number of subcontexts. In concrete terms, given a contextual extension (E, C) and a Kripke structure m, we require that the actual world w₀ of m be a model of the extension E, and demand that each world in m accessible from w₀ be a model of some subcontext of C. Thus, each world of m accessible from the actual world w₀ is to be a model of the kernel of C.

First, we define the class of K-models⁵ associated with W as 𝔐_W = { m | m ⊨ τ ∧ □τ, τ ∈ W }. (⁵ K-models are models of the modal logic K; given a set of formulas T, □T stands for ∧_{τ∈T} □τ and ◇T for ∧_{τ∈T} ◇τ.) We will semantically characterize contextual extensions by maximal elements of a strict partial order on classes of K-models. Given a contextual default rule δ, its application conditions and the result of applying it are captured by an order >_δ as follows.

Definition 11 Let δ = α_W | α_E | α_C : β_C | β_E | β_W / γ. Let 𝔐 and 𝔐′ be distinct classes of K-models. Define 𝔐 >_δ 𝔐′ iff

𝔐 = { m ∈ 𝔐′ | m ⊨ γ ∧ □γ ∧ □β_C ∧ ◇β_E }

and
1. 𝔐_W ⊨ α_W
2. 𝔐′ ⊨ α_E
3. 𝔐′ ⊨ ◇α_C
4. 𝔐′ ⊭ ◇¬β_C
5. 𝔐′ ⊭ □¬β_E
6. 𝔐_W ⊭ ¬β_W

Given a set of contextual default rules D, the strict partial order >_D is defined as the transitive closure of the union of all orders >_δ such that δ ∈ D. Then, we obtain soundness and completeness:⁶

Theorem 6 Let (D, W) be a contextual default theory. Let 𝔐 be a class of K-models, E a deductively closed set of formulas, C a pointwisely closed set of formulas, and

C_K = Th(E ∪ Justif_C(GD^{(E,C)}_{(D,W)}))
C_J = Justif_E(GD^{(E,C)}_{(D,W)})

such that 𝔐 = { m | m ⊨ E ∧ □C_K ∧ ◇C_J }. Then (E, C) is a consistent contextual extension of (D, W) iff 𝔐 is a >_D-maximal non-empty class above 𝔐_W.

Observe that the requirements on a maximal class of K-models correspond to the aforementioned intuitions.
Clearly, E is the extension, C the context, C_K the kernel, and C_J consists of the E-justifications distinguishing the subcontexts from each other.

Conclusion

Contextual default logic provides a unified framework for default logics by extending the notion of a default rule and supplying each extension with a context. Such contexts are formed by pointwisely closing certain consistency assumptions under a given extension. We isolated six different application conditions for default rules. We showed that only three of them are employed in conventional default logics, even though two of the three remaining ones correspond to well-known notions, namely first-order derivability and the closed world assumption. The remaining condition expresses "membership in a context" and needs further elaboration. Among various advantages, contextual default logic explicates the context-dependency of default logics and reveals that existing default logics differ mainly in the way they deal with an explicit or implicit underlying context. As a result, we saw that justified default logic compromises individual and joint consistency, whereas other variants strictly employ either of them.

Acknowledgements

We would like to thank Bob Mercer for valuable discussions. This work was supported by CEC, DRUMS II (6156) and by BMFT, TASSO (ITW 8900 C2).

References

Besnard, P. and Schaub, T. 1993. Possible worlds semantics for default logics. Fundamenta Informaticae. Forthcoming.
Brewka, G. 1991. Cumulative default logic: In defense of nonmonotonic inference rules. Artificial Intelligence 50(2):183-205.
Delgrande, J.; Jackson, K.; and Schaub, T. 1992. Alternative approaches to default logic. Artificial Intelligence. Submitted for publication.
Lukaszewicz, W. 1988. Considerations on default logic - an alternative approach. Computational Intelligence 4:1-16.
Reiter, R. 1977. On closed world data bases. In Gallaire, H. and Nicolas, J.-M., eds., Logic and Databases. Plenum. 119-140.
Reiter, R. 1980. A logic for default reasoning. Artificial Intelligence 13(1-2):81-132.
Propositional Logic of Context

Saša Buvač & Ian A. Mason
Computer Science Department, Stanford University
Stanford, California 94305-2140
{buvac, iam}@sail.stanford.edu

Abstract

In this paper we investigate the simple logical properties of contexts. We describe both the syntax and semantics of a general propositional language of context, and give a Hilbert style proof system for this language. A propositional logic of context extends classical propositional logic in two ways. Firstly, a new modality, ist(κ, φ), is introduced. It is used to express that the sentence φ holds in the context κ. Secondly, each context has its own vocabulary, i.e. a set of propositional atoms which are defined or meaningful in that context. The main results of this paper are the soundness and completeness of this Hilbert style proof system. We also provide soundness and completeness results (i.e. correspondence theory) for various extensions of the general system.

Introduction

In this paper we investigate the simple logical properties of contexts. Contexts were first introduced into AI by John McCarthy in his Turing Award Lecture [McCarthy, 1987], as an approach which might lead to the solution of the problem of generality in AI. This problem is simply that existing AI systems lack generality. Since then, contexts have found a large number of uses in various areas of AI. R. V. Guha's doctoral dissertation [Guha, 1991] under McCarthy's supervision was the first in-depth study of context. Guha's context research was primarily motivated by the Cyc system [Guha and Lenat, 1990], a large common-sense knowledge base currently being developed at MCC. Without using contexts it would have been virtually impossible to create and successfully use a knowledge base of the size of Cyc. Large knowledge bases are not the only place where contexts have found practical use.
The knowledge sharing community has accepted the need for explicating context when transferring information from one agent to another. Currently, proposals for introducing contexts into the Knowledge Interchange Format, or KIF [Genesereth and Fikes, 1992], are being considered. Furthermore, it seems that the context formalism can provide semantics for the process of translating facts into KIF and from KIF, one of the key tasks that the knowledge sharing effort is facing.

The meaning of an utterance depends on the context in which it is uttered. Computational linguists have developed various ways of describing this context. For example, Barbara Grosz in her Ph.D. thesis [Grosz, 1977] implicitly captures the context of a discourse by focusing on the objects and actions which are most relevant to the discourse. This representation is similar to an ATMS context [de Kleer, 1986], which is simply a list of propositions that are assumed by the reasoning system.

However, till now no formal logical explication of contexts has been given. The aim of this paper is to rectify this deficiency. We describe both the syntax and semantics of a general propositional language of context, and give a Hilbert style proof system for this language. The main results of this paper are the soundness and completeness of this Hilbert style proof system. We also provide soundness and completeness results (i.e. correspondence theory) for various extensions of the general system.

Notation

We use standard mathematical notation. If X and Y are sets, then X →ₚ Y is the set of partial functions from X to Y. P(X) is the set of subsets of X. X* is the set of all finite sequences over X, and we let x̄ = [x₁, …, xₙ] range over X*. ε is the empty sequence. We use the infix operator ∗ for appending sequences. We make no distinction between an element and the singleton sequence containing that element. Thus we write x̄ ∗ κ₁ instead of x̄ ∗ [κ₁].
As is usual in logic we treat X* as a tree (that grows downward): x̄₁ < x̄₀ iff x̄₁ properly extends x̄₀, i.e. (∃z̄ ∈ X* − {ε})(x̄₁ = x̄₀ ∗ z̄). We say Y ⊆ X* is a subtree rooted at ȳ to mean

1. ȳ ∈ Y and (∀x̄ ∈ Y)(x̄ ≤ ȳ)
2. (∀x̄ ∈ Y)(∀w̄ ∈ X*)(x̄ ≤ w̄ ≤ ȳ ⟹ w̄ ∈ Y)

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

The General System

A propositional logic of context extends classical propositional logic in two ways. Firstly, a new modality, ist(κ, φ), is introduced. It is used to express that the sentence φ holds in the context κ. Secondly, each context has its own vocabulary, i.e. a set of propositional atoms which are defined or meaningful in that context. The vocabulary of one context may or may not overlap with that of another context.

Syntax

We begin with two distinct countably infinite sets: K, the set of all contexts, and P, the set of propositional atoms. The set W of well-formed formulas (wffs) is built up from the propositional atoms p using the usual propositional connectives (negation and implication) together with the ist modality.

Definition (W): W is the least set such that P ⊆ W, and such that if φ, ψ ∈ W and κ ∈ K, then ¬φ, φ → ψ, and ist(κ, φ) are in W.

The operations ∧, ∨ and ↔ are defined as abbreviations in the usual way. The term literal is used to refer to a propositional atom or the negation of a propositional atom. We use ±φ to represent either the formula φ or its negation ¬φ. We also use the following abbreviations:

ist(κ̄, φ) := ist(κ₁, ist(κ₂, …, ist(κₙ, φ) …))
ist*(κ̄, φ) := ±ist(κ₁, ±ist(κ₂, …, ±ist(κₙ, φ) …))

when κ̄ is the context sequence [κ₁, κ₂, …, κₙ]. In the definition of ist* all the ist's need not be of the same parity. PROP is the set of all well-formed formulas which do not contain ist's. If ψ is a formula containing distinct atoms p₁, …, pₙ, then we write ψ(φ₁, …, φₙ) for the formula which results from ψ by simultaneously replacing all the occurrences of pᵢ in ψ by φᵢ. We say that ψ(φ₁, …, φₙ) is an instance of ψ.
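For concreteness, the syntax can be encoded with tagged tuples; this representation is our own, not the paper's:

```python
# Formulas of W as tagged tuples (our encoding):
# ("atom", p), ("not", f), ("implies", f, g), ("ist", k, f).
def ist_seq(ctxs, phi):
    """The abbreviation ist(kappa-bar, phi) := ist(k1, ist(k2, ..., ist(kn, phi)...))."""
    for k in reversed(ctxs):
        phi = ("ist", k, phi)
    return phi

def is_prop(phi):
    """Membership in PROP: well-formed formulas containing no ist."""
    if phi[0] == "atom":
        return True
    if phi[0] == "not":
        return is_prop(phi[1])
    if phi[0] == "implies":
        return is_prop(phi[1]) and is_prop(phi[2])
    return False  # an "ist" node

f = ist_seq(["k1", "k2"], ("atom", "p"))
```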
Semantics

We begin with a system which makes as few semantic restrictions as possible. Other systems are obtained by placing restrictions on the models. The semantics of the general system has the following three features.

Firstly, the nature of a particular context may itself be context dependent. For example, in the context of the 1950's, the context of car racing is different from the context of car racing viewed from today's context. This leads naturally to considering sequences of contexts rather than a solitary context. We refer to this feature of the system as non-flatness. It reflects the intuition that what holds in a context can depend on how this context has been reached, i.e. from which perspective it is being viewed. For example, non-flatness will be desirable if we represent the beliefs of an agent as the sentences which hold in a context. A system of flat contexts can easily be obtained by placing certain restrictions on what kinds of structures are allowed as models, as well as enriching the axiom system.

Secondly, a context is modelled by a set of truth assignments that describe the possible states of affairs of that context. Therefore the ist modality is interpreted as validity: ist(κ, p) is true iff the propositional atom p is true in all the truth assignments associated with context κ. Treatment of ist as validity corresponds to Guha's proposal for context semantics, which was motivated by the Cyc knowledge base. A system which models a context by a single truth assignment, and thus interprets ist as truth, can be obtained by placing simple restrictions on the definition of a model, and enriching the set of axioms.

Thirdly, since different contexts can have different vocabularies, some propositions can be meaningless in some contexts, and therefore the truth assignments describing the state of affairs in a context need to be partial.
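These three features can be made concrete with a toy evaluator, using a tagged-tuple formula encoding of our own: ("atom", p), ("not", f), ("implies", f, g), ("ist", k, f). A model maps context sequences (tuples of context names) to lists of partial truth assignments (dicts), and ist is read as validity; we assume every formula respects the model's vocabulary.

```python
def holds(model, seq, nu, phi):
    """Satisfaction of phi at context sequence seq under assignment nu."""
    kind = phi[0]
    if kind == "atom":
        return nu[phi[1]] == 1          # assumes the atom is in the vocabulary
    if kind == "not":
        return not holds(model, seq, nu, phi[1])
    if kind == "implies":
        return (not holds(model, seq, nu, phi[1])) or holds(model, seq, nu, phi[2])
    if kind == "ist":                   # validity: true under *every* inner assignment
        inner = seq + (phi[1],)
        return all(holds(model, inner, nu1, phi[2]) for nu1 in model.get(inner, []))
    raise ValueError(kind)

def valid(model, seq, phi):
    return all(holds(model, seq, nu, phi) for nu in model.get(seq, []))

# In context ("k",) the atom p holds in every state of affairs, q does not.
model = {
    (): [{}],
    ("k",): [{"p": 1, "q": 1}, {"p": 1, "q": 0}],
}
```

Here ist("k", p) is valid at the empty sequence, while ist("k", q) is not, because q fails in one of the two assignments of context k.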
Definition (𝔐): In this system a model 𝔐 will be a function which maps a context sequence κ̄ ∈ K* to a set of partial truth assignments, 𝔐 ∈ K* →ₚ P(P →ₚ 2), with the added conditions that

1. (∀κ̄)(∀ν₁, ν₂ ∈ 𝔐(κ̄))(Dom(ν₁) = Dom(ν₂))
2. Dom(𝔐) is a subtree of K* rooted at some context sequence κ̄₀.

We write κ̄^𝔐 to denote the set of partial truth assignments 𝔐(κ̄). Note that κ̄^𝔐 can be empty. The collection of all such models will be denoted by 𝕄. We could have assumed the existence of a fixed outermost context, which would result in Dom(𝔐) being a tree rooted at the empty sequence ε (i.e. the fixed outermost context). This would result in slightly simpler notation and proofs. However, although more complicated, our definition is based on the intuition that there is no outermost context.

Vocabularies

The truth assignments in our model are partial. The atoms which are given a truth value in a context are defined by a relation Vocab ⊆ K* × P.

Definition (Vocab of 𝔐): We define a function Vocab : 𝕄 → P(K* × P) which, given a model, returns the vocabulary of the model:

Vocab(𝔐) := { ⟨κ̄, p⟩ | κ̄ ∈ Dom(𝔐) and p ∈ Dom(𝔐(κ̄)) }

We say that a model 𝔐 is classical on vocabulary Vocab iff Vocab ⊆ Vocab(𝔐).

The notion of vocabulary can also be applied to sentences. Intuitively, the vocabulary of a sentence relates a context sequence to the atoms which occur in the scope of that context sequence. In the definition we also need to take into account that sentences are not given in isolation but in a context.

Definition (Vocab of φ in κ̄): We define a function Vocab : K* × W → P(K* × P) which, given a formula in a context, returns the vocabulary of the formula. Vocab(κ̄, φ) is defined inductively by:

Vocab(κ̄, p) := { ⟨κ̄, p⟩ }
Vocab(κ̄, ¬φ) := Vocab(κ̄, φ)
Vocab(κ̄, φ → ψ) := Vocab(κ̄, φ) ∪ Vocab(κ̄, ψ)
Vocab(κ̄, ist(κ, φ)) := Vocab(κ̄ ∗ κ, φ)

It is extended to sets of formulas in the obvious way. Note that it is only in the propositional case that we can carry out this static analysis of the vocabulary of a sentence. It will not be possible in the quantified versions.
Also note that our definition of vocabulary of a sentence is somewhat different from Guha's notion of definedness. Guha proposes to treat ist(κ, φ) as false if φ is not in the vocabulary of the context κ.

Satisfaction

We can think of partial truth assignments as total truth assignments in a three-valued logic. Our satisfaction relation then corresponds to Bochvar's three-valued logic [Bochvar, 1972], since an implication is meaningless if either the antecedent or the consequent is meaningless. We chose Bochvar's three-valued logic because we intend meaningfulness to be interpreted as syntactic meaningfulness, rather than semantic meaningfulness along the lines of Kleene's three-valued logic [Kleene, 1952].

Definition (⊨): If ν ∈ κ̄^𝔐 and Vocab(κ̄, φ) ⊆ Vocab(𝔐), then

𝔐, ν ⊨_κ̄ p iff ν(p) = 1, for p ∈ P
𝔐, ν ⊨_κ̄ ¬φ iff not 𝔐, ν ⊨_κ̄ φ
𝔐, ν ⊨_κ̄ φ → ψ iff 𝔐, ν ⊨_κ̄ φ implies 𝔐, ν ⊨_κ̄ ψ
𝔐, ν ⊨_κ̄ ist(κ₁, φ) iff for all ν₁ ∈ (κ̄ ∗ κ₁)^𝔐, 𝔐, ν₁ ⊨_{κ̄ ∗ κ₁} φ

In the last clause note that κ̄ ∗ κ₁ ∈ Dom(𝔐), since Dom(𝔐) is a rooted subtree and Vocab(κ̄, φ) ⊆ Vocab(𝔐). We write 𝔐 ⊨_κ̄ φ iff for all ν ∈ κ̄^𝔐, 𝔐, ν ⊨_κ̄ φ.

Formal System

We now present the formal system. To do this we fix a particular vocabulary, Vocab ⊆ K* × P, and define a provability relation ⊢^Vocab_κ̄. Since Vocab will remain fixed throughout, we omit explicitly mentioning it and write ⊢_κ̄ φ instead. Similarly, to avoid constantly stating lengthy side conditions, we make the following convention.

Definedness Convention: In the sequel, whenever we write ⊢_κ̄ φ we will be assuming implicitly that Vocab(κ̄, φ) ⊆ Vocab.

Axioms and inference rules are given in Table 1. Note that the rules of inference preserve the definedness convention. Assuming that our system was limited to only one context, the rule (CS) would be identical to the rule of necessitation in normal systems of modal logic, and axiom schema (K) would be identical to the standard axiom schema K.
Thus in the single context case, ignoring axiom schemas (A⁺) and (A⁻), our formal system is identical to what is usually called the normal system of modal logic, characterized by (PL), (MP), (K), and the rule of necessitation. The axiom schemas (A⁺) and (A⁻) are needed in order to accommodate the validity aspect of the ist modality. It turns out that they are derivable in the system which treats ist as truth and does not allow inconsistent contexts.

Provability A formula φ is provable in context κ̄ with vocabulary Vocab (formally ⊢^Vocab_κ̄ φ) iff ⊢_κ̄ φ is an instance of an axiom schema or follows from provable formulas by one of the inference rules; formally, iff there is a sequence [⊢_{κ̄₁} φ₁, …, ⊢_{κ̄ₙ} φₙ] such that κ̄ₙ = κ̄ and φₙ = φ, and for each i ≤ n either ⊢_{κ̄ᵢ} φᵢ is an axiom, or it is derivable from the earlier elements of the sequence via one of the inference rules. In the case of assumptions, a formula φ is provable from assumptions T in context κ̄₀ with vocabulary Vocab (formally T ⊢^Vocab_{κ̄₀} φ, or again, taking into account that Vocab is fixed, T ⊢_{κ̄₀} φ) iff there are formulas φ₁, …, φₙ ∈ T such that ⊢_{κ̄₀} (φ₁ ∧ … ∧ φₙ) → φ. Note that due to the definedness convention, if T ⊢_{κ̄₀} φ then Vocab(κ̄₀, T) ⊆ Vocab.

Consequences

Some simple theorems and derivable rules of the system are:

(C) ⊢_κ̄ ist(κ₁, φ) ∧ ist(κ₁, ψ) → ist(κ₁, φ ∧ ψ)
(Or) ⊢_κ̄ ist(κ₁, φ) ∨ ist(κ₁, ψ) → ist(κ₁, φ ∨ ψ)
(M) ⊢_κ̄ ist(κ₁, φ ∧ ψ) → ist(κ₁, φ) ∧ ist(κ₁, ψ)
(K*) ⊢_κ̄ ist(κ̄′, φ → ψ) → ist(κ̄′, φ) → ist(κ̄′, ψ)

A slightly deeper result is that any formula is provably equivalent to one in a certain syntactic form. This equivalence plays an important role in the completeness proof.

Definition (CNF): A formula φ is in conjunctive normal form (CNF) iff it is of the form E₁ ∧ E₂ ∧ … ∧ E_k, and each Eᵢ is of the form α_{i1} ∨ α_{i2} ∨ … ∨ α_{in}, where each α_{ij} is either a literal, or ist*(κ̄′, β) for some disjunction of literals β. Note that i and k can be 1.
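The CNF shape just defined can be recognized mechanically. The checker below uses a tagged-tuple encoding of our own, with ("and", f, g) and ("or", f, g) nodes alongside ("atom", p), ("not", f), and ("ist", k, f); it is only a sketch of the definition, not the paper's CNF conversion procedure:

```python
def is_literal(f):
    return f[0] == "atom" or (f[0] == "not" and f[1][0] == "atom")

def conjuncts(f):
    return conjuncts(f[1]) + conjuncts(f[2]) if f[0] == "and" else [f]

def disjuncts(f):
    return disjuncts(f[1]) + disjuncts(f[2]) if f[0] == "or" else [f]

def is_ist_star_of_literals(f):
    # Strip an alternating prefix of ist / not-ist down to the matrix,
    # which must be a disjunction of literals.
    while True:
        if f[0] == "ist":
            f = f[2]
        elif f[0] == "not" and f[1][0] == "ist":
            f = f[1][2]
        else:
            break
    return all(is_literal(d) for d in disjuncts(f))

def is_cnf(f):
    return all(all(is_literal(a) or is_ist_star_of_literals(a)
                   for a in disjuncts(clause))
               for clause in conjuncts(f))

good = ("and",
        ("or", ("atom", "p"),
               ("not", ("ist", "k", ("or", ("atom", "q"), ("not", ("atom", "r")))))),
        ("atom", "s"))
bad = ("ist", "k", ("and", ("atom", "p"), ("atom", "q")))
```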
Lemma (CNF): For any formula φ and context sequence κ̄, there exists a formula φ* which is in CNF, such that ⊢_κ̄ φ ↔ φ*.

(PL) ⊢_κ̄ φ, provided φ is an instance of a tautology
(K) ⊢_κ̄ ist(κ₁, φ → ψ) → ist(κ₁, φ) → ist(κ₁, ψ)
(A⁺) ⊢_κ̄ ist(κ₁, ist(κ₂, φ) ∨ ψ) → ist(κ₁, ist(κ₂, φ)) ∨ ist(κ₁, ψ)
(A⁻) ⊢_κ̄ ist(κ₁, ¬ist(κ₂, φ) ∨ ψ) → ist(κ₁, ¬ist(κ₂, φ)) ∨ ist(κ₁, ψ)
(MP) from ⊢_κ̄ φ → ψ and ⊢_κ̄ φ, infer ⊢_κ̄ ψ
(CS) from ⊢_{κ̄ ∗ κ₁} φ, infer ⊢_κ̄ ist(κ₁, φ)

Table 1: Axioms and Inference Rules

Theorem (soundness): If ⊢_κ̄ φ, then for all models 𝔐 classical on Vocab, 𝔐 ⊨_κ̄ φ. If T ⊢_κ̄ φ, then for all models 𝔐 classical on Vocab, if 𝔐 ⊨_κ̄ ψ for all ψ ∈ T, then 𝔐 ⊨_κ̄ φ.

Completeness

We begin by introducing some concepts needed to state the completeness theorem.

Definition (satisfiability): A set of formulas T is satisfiable in context κ̄ with vocabulary Vocab iff there exists a model 𝔐 classical on Vocab such that for all φ ∈ T, 𝔐 ⊨_κ̄ φ.

Definition (consistency): A formula φ is consistent in κ̄ with Vocab, where Vocab(κ̄, φ) ⊆ Vocab, iff not ⊢_κ̄ ¬φ. A finite set T is consistent in κ̄ with Vocab iff ∧T is consistent in κ̄ with Vocab. An infinite set T is consistent in κ̄ with Vocab iff every finite subset of T is consistent in κ̄ with Vocab. A set T is inconsistent in κ̄ with Vocab iff T is not consistent in κ̄ with Vocab. A set T is maximally consistent in κ̄ with Vocab iff T is consistent in κ̄ with Vocab and for all φ ∉ T such that Vocab(κ̄, φ) ⊆ Vocab, T ∪ {φ} is inconsistent in κ̄ with Vocab.

As is usual, an important part of the completeness proof is the Lindenbaum lemma, allowing any consistent set of wffs to be extended to a maximally consistent set.

Lemma (Lindenbaum): If T is consistent in κ̄ with Vocab, then T can be extended to a maximally consistent set T₀ in κ̄ with Vocab.

Now we proceed to state and prove the completeness of the system.

Theorem (completeness): For any set of formulas T, T is consistent in κ̄₀ with Vocab iff T is satisfiable in κ̄₀ with Vocab.
Proof (completeness): Assume T is consistent in κ̄₀ with Vocab. By the Lindenbaum lemma we can extend T to a maximally consistent set T₀. From T₀ we will construct the model 𝔐₀. For each κ̄ = κ̄₀ ∗ c̄ ∈ K* define

T_κ̄⁺ := { φ | T₀ ⊢_{κ̄₀} ist(c̄, φ), φ ∈ PROP }.

Lemma (T_κ̄⁺): T_κ̄⁺ is closed under logical consequence: for all φ where Vocab(κ̄, φ) ⊆ Vocab, if φ tautologically follows from T_κ̄⁺ then φ ∈ T_κ̄⁺. Note that T_κ̄⁺ need not be either maximally consistent or even consistent.

Now, using only the sets T_κ̄⁺ of formulas from PROP, we will define a model 𝔐₀ for the set of formulas T₀. We define the domain of 𝔐₀,

Dom(𝔐₀) := { κ̄ | κ̄ ≤ κ̄₀ and (∃κ̄′ ∈ Dom(Vocab))(κ̄′ ≤ κ̄) },

and for all κ̄ ∈ Dom(𝔐₀),

𝔐₀(κ̄) := { ν | Dom(ν) = Vocab(κ̄), and ν̄(φ) = 1 for all φ ∈ T_κ̄⁺ }.

In the above, ν̄ is the unique homomorphic extension of ν with respect to the propositional connectives. To see that 𝔐₀ as defined is a model, we first note that it clearly meets condition 1, since all the truth assignments associated with a context must have the same domain. Condition 2 is met since Dom(𝔐₀) as defined is a subtree rooted at κ̄₀. Note that if T_κ̄⁺ is empty (which corresponds to the case where Vocab(κ̄) = ∅), then 𝔐₀(κ̄) is a singleton set, whose only member is the empty truth assignment. Finally, to establish completeness we need only prove the truth lemma; its proof is based on the CNF construction and is the novel aspect of this completeness proof. Clearly, if φ ∈ T then also φ ∈ T₀, and therefore by the truth lemma we get 𝔐₀ ⊨_{κ̄₀} φ. □ (completeness)

Lemma (truth): For any φ such that Vocab(κ̄₀, φ) ⊆ Vocab, φ ∈ T₀ iff 𝔐₀ ⊨_{κ̄₀} φ.

Before we give the proof of the truth lemma, we need to state a property of the model 𝔐₀ which is needed in the ist case of the truth lemma.

Lemma (𝔐₀): Let 𝔐₀ be the model defined from T₀ in the completeness proof. Then for all φ ∈ PROP where Vocab(κ̄₀ ∗ c̄, φ) ⊆ Vocab, T₀ ⊢_{κ̄₀} ist(c̄, φ) iff ν̄(φ) = 1 for all ν ∈ 𝔐₀(κ̄₀ ∗ c̄).
A frequently used instance of the 𝔐₀ lemma is that T₀ ⊢_{κ̄₀} ist(c̄, φ ∧ ¬φ) iff 𝔐₀(κ̄₀ ∗ c̄) = ∅, for all φ satisfying the definedness condition.

Proof (truth lemma): Instead of proving φ ∈ T₀ iff 𝔐₀ ⊨_{κ̄₀} φ, we will prove the statement

(TL) ψ is in CNF implies (ψ ∈ T₀ iff 𝔐₀ ⊨_{κ̄₀} ψ).

To see that the former follows from the latter, assume φ ∈ T₀. By the CNF lemma, there exists a formula φ* in CNF such that ⊢_{κ̄₀} φ ↔ φ*. Using maximal consistency of T₀, it follows that φ* ∈ T₀. Therefore by (TL) it must be the case that 𝔐₀ ⊨_{κ̄₀} φ*. Our logic is sound: 𝔐₀ ⊨_{κ̄₀} φ* iff 𝔐₀ ⊨_{κ̄₀} φ, and thus we conclude that 𝔐₀ ⊨_{κ̄₀} φ. We can simply reverse the steps of the argument to prove the other direction of the biconditional.

We prove (TL) by induction on the structure of the formula ψ. In the base case ψ is an atom, and thus in CNF. From the definition of 𝔐₀(κ̄₀) it follows that p ∈ T₀ iff 𝔐₀ ⊨_{κ̄₀} p. In proving the inductive step we first examine ψ = χ ∨ μ. The inductive hypothesis is that the lemma is true for the formulas χ and μ. Assume χ ∨ μ is in CNF. Then both χ and μ must also be in CNF. Since T₀ is maximally consistent, χ ∨ μ ∈ T₀ iff either χ ∈ T₀ or μ ∈ T₀. By the inductive hypothesis this will be true iff either 𝔐₀ ⊨_{κ̄₀} χ or 𝔐₀ ⊨_{κ̄₀} μ, and by the definition of satisfaction iff 𝔐₀ ⊨_{κ̄₀} χ ∨ μ. The inductive step for conjunction and negation is similar. We make use of the fact that if χ ∧ μ is in CNF, then so are both χ and μ; and if ¬χ is in CNF, then so is χ.

The interesting case is when ψ is an ist. Assume that ψ is in CNF. Then ψ must be of the form ψ = ist*(c̄, χ), where χ is a disjunction of literals. The context sequence c̄ will sometimes be written as κ₁ ∗ … ∗ κₙ. We will examine two cases, depending on whether or not any of the sets of sentences T_{(κ̄₀ ∗ c̄′)}⁺, where c̄′ is an initial segment of c̄, is inconsistent.
The sets T_{(κ̄₀ ∗ c̄′)}⁺, where c̄′ is an initial segment of c̄, are all consistent iff the formula

(D_c̄) ist(c̄, ¬φ) → ¬ist(c̄, φ)

is in T₀, for any wff φ which satisfies the definedness condition. The proof of this is identical to the soundness and completeness proofs of a context system with axiom schema (D) w.r.t. the set of consistent models, dealt with shortly. Formula (D_c̄) is equivalent to ¬ist(c̄, φ ∧ ¬φ), for all φ satisfying the definedness condition; the proof carries over from normal systems of modal logic. Now we state a useful consequence of the (D_c̄)'s.

Lemma (D_c̄): Let c̄ be κ₁ ∗ … ∗ κₙ. If D_{κ₁∗…∗κₙ₋₁} ∈ T₀, then

ist*(c̄, φ) ∈ T₀ iff ±ist(c̄, φ) ∈ T₀

for any formula φ which satisfies the definedness convention. The sign on the right hand side is positive iff there is an even number of negations in the ist* on the left hand side.

Now we examine the two cases needed to prove the inductive step for ist of the truth lemma.

Case D_{κ₁∗…∗κₙ₋₁} ∈ T₀: In this case we assume D_{κ₁∗…∗κₙ₋₁} ∈ T₀ and that ψ ∈ T₀. Then by the D_c̄ lemma:

ist*(c̄, χ) ∈ T₀ iff ±ist(c̄, χ) ∈ T₀.

We only include the positive case.

ist(c̄, χ) ∈ T₀ iff T₀ ⊢_{κ̄₀} ist(c̄, χ).

Now by the 𝔐₀ lemma and the definedness condition Vocab(κ̄₀ ∗ c̄, χ) ⊆ Vocab we have

T₀ ⊢_{κ̄₀} ist(c̄, χ) iff (∀ν ∈ 𝔐₀(κ̄₀ ∗ c̄))(ν̄(χ) = 1).

By the definition of satisfaction:

(∀ν ∈ 𝔐₀(κ̄₀ ∗ c̄))(ν̄(χ) = 1) iff 𝔐₀ ⊨_{κ̄₀} ist(c̄, χ).

Now since D_{κ₁∗…∗κₙ₋₁} ∈ T₀, and by the 𝔐₀ lemma, we obtain:

𝔐₀ ⊨_{κ̄₀} ist(c̄, χ) iff 𝔐₀ ⊨_{κ̄₀} ist*(c̄, χ).

Case D_{κ₁∗…∗κₙ₋₁} ∉ T₀: Let j be the index of the first inconsistent context; formally D_{κ₁∗…∗κⱼ} ∉ T₀ and D_{κ₁∗…∗κⱼ₋₁} ∈ T₀. Then for all φ satisfying the definedness condition we have ¬ist(κ₁ ∗ … ∗ κⱼ, φ ∧ ¬φ) ∉ T₀. Now by maximal consistency of T₀, (K*) and (MP),

ist(κ₁ ∗ … ∗ κⱼ, φ ∧ ¬φ) ∈ T₀ iff ist(κ₁ ∗ … ∗ κⱼ, φ) ∈ T₀.

Thus, T_{(κ̄₀ ∗ κ₁ ∗ … ∗ κⱼ)}⁺ is inconsistent, 𝔐₀(κ̄₀ ∗ κ₁ ∗ … ∗ κⱼ) = ∅, and consequently

ist(κ₁ ∗ … ∗ κⱼ, φ) ∈ T₀ iff 𝔐₀ ⊨_{κ̄₀} ist(κ₁ ∗ … ∗ κⱼ, φ)

for all φ such that Vocab(κ̄₀ ∗ κ₁ ∗ … ∗ κⱼ, φ) ⊆ Vocab.
Then by reasoning similar to the previous case we get: ist*(c̄, χ) ∈ T₀ iff 𝔐₀ ⊨_{κ̄₀} ist*(c̄, χ). Note that in the entire proof of the inductive step for ist, we did not need the inductive hypothesis, making use only of the special form of χ which is guaranteed because ψ is in CNF. □ (truth lemma)

Correspondence Results

In this section we provide soundness and completeness results for several extensions of the general system. These extensions correspond to certain intuitive principles concerning the nature of contexts. In each extension the syntax and semantics are the same as in the general case, and the definedness convention still holds. Only the class of models and the axioms are modified.

Consistency

Sometimes it is desirable to ensure that all contexts are consistent. In this system we examine the class, Consistent, of consistent models. A model 𝔐 ∈ Consistent iff for any context sequence κ̄ ∈ Dom(𝔐), 𝔐(κ̄) ≠ ∅. The following axiom schema is sound with respect to the class of consistent models Consistent:

(D) ⊢_κ̄ ist(κ, ¬φ) → ¬ist(κ, φ)

Axiom schema (D) is also commonly used in modal logic, and is sound and complete for the set of serial Kripke frames, in which every world has some world accessible from it. Note that axiom (D) is equivalent to ⊢_κ̄ ¬ist(κ, φ ∧ ¬φ).

Theorem (completeness): The general context system with the (D) axiom schema is complete with respect to the set of models Consistent.

Flatness

For some applications all contexts will be identical regardless of where they are examined from. This type of situation will often arise when we use a number of independent databases. For example, if I am booked on flight 921 in the context of the Northwest Airlines database, then regardless of which travel agent I choose, in the context of that travel agent it is true that in the context of Northwest Airlines I am booked on flight 921. In this system we examine a class, Flat, of what we call flat models.
A model 𝔐 is flat, formally 𝔐 ∈ Flat, iff Dom(𝔐) = K* and for any context sequences κ̄₁ and κ̄₂, and any context κ, 𝔐(κ̄₁ ∗ κ) = 𝔐(κ̄₂ ∗ κ). When dealing with flat models it might be more intuitive to think of individual contexts rather than context sequences. Then 𝔐 ∈ Flat can be viewed as a function which maps contexts to finite sets of partial truth assignments; in other words, 𝔐 ∈ K ∪ {ε} → P(P →ₚ 2), with the side condition of general models that still applies:

(∀κ ∈ K ∪ {ε})(∀ν₁, ν₂ ∈ 𝔐(κ))(Dom(ν₁) = Dom(ν₂)).

The following flatness axiom schemas are sound with respect to the class of flat models Flat:

(Fl⁺) ⊢_κ̄ ist(κ₂, ist(κ₁, φ)) ↔ ist(κ₁, φ)
(Fl⁻) ⊢_κ̄ ist(κ₂, ¬ist(κ₁, φ)) → ¬ist(κ₁, φ)

providing the vocabulary also satisfies the flatness condition: for any context sequences κ̄₁ and κ̄₂, and any context κ, Vocab(κ̄₁ ∗ κ) = Vocab(κ̄₂ ∗ κ).

The backward direction of the flatness axiom schema (Fl⁺) corresponds to the modal logic axiom schema S4 (provided that κ₁ is the same as κ₂). Similarly, the converse of (Fl⁻) corresponds to the modal logic axiom schema S5. Note that the converse of (Fl⁻) is a theorem in the system. It is interesting to observe that in every system with (Fl⁺) and (Fl⁻), (D) is also derivable. In semantic terms, this means that any flat model is also a consistent model; a reasonable property, for if a context was inconsistent, then in that context it would be true that all other contexts are also inconsistent. Due to flatness, this would really make all the other contexts inconsistent.

Theorem (completeness): The general context system with the (Fl⁺) and (Fl⁻) axiom schemas is complete with respect to the set of flat models Flat.

Truth

It might be more intuitive to define the ist modality to correspond to truth rather than validity; incidentally, this is also where the ist predicate got its name: is true. The truth-based interpretation of the basic context modality also corresponds to the original suggestions by McCarthy [McCarthy, 1993].
In this case a context is associated with a single truth assignment rather than a set of truth assignments. We examine the class, Truth, of truth models. A model 𝔐 is a truth model, formally 𝔐 ∈ Truth, iff for any context sequence κ̄ ∈ Dom(𝔐), 𝔐(κ̄) is a singleton. The following axiom schema is sound with respect to the class of truth models Truth:

(Tr) ⊢_κ̄ ist(κ, φ) ∨ ist(κ, ¬φ)

Note that (Tr) is the converse of (D).

Theorem (completeness): The general context system with the (Tr) axiom schema is complete with respect to the set of truth models Truth.

Previously we said that (A⁺) and (A⁻) are derivable in a system which contains (D) and (Tr). In fact, a stronger formula is true of this system:

⊢_κ̄ ist(κ, φ ∨ ψ) ↔ (ist(κ, φ) ∨ ist(κ, ψ)).

Meaninglessness as Falsity

In this section we examine a slightly more elaborate modification of the general system. This modification closely models the semantics described, but not investigated, in [Guha, 1991]. The general idea here is that if φ is not in the vocabulary of κ, then ist(κ, φ) is taken to be false instead of meaningless or undefined. To cater faithfully to this interpretation, two changes must be made to the semantics of the general system. Firstly, the ist clause in the definition of Vocab : K* × W → P(K* × P) must be altered to reflect the fact that ist(κ, φ) will always be in the vocabulary of any context. Secondly, the ist clause in the definition of satisfaction must also be modified. The appropriate new clause in the definition of Vocab is:

Vocab(κ̄, φ) = ∅ if φ is ist(κ, ψ₀)

while the new clause in the definition of satisfaction is:

𝔐, ν ⊨_κ̄ ist(κ₁, φ) iff Vocab(κ̄ ∗ κ₁, φ) ⊆ Vocab(𝔐) and for all ν₁ ∈ (κ̄ ∗ κ₁)^𝔐, 𝔐, ν₁ ⊨_{κ̄ ∗ κ₁} φ.

The other clauses in both definitions remain the same, modulo the fact that all occurrences of Vocab in the definition of satisfaction now refer to the new definition.
We maintain the definedness convention in stating the proof system for this version, but again we point out that all occurrences of Vocab now refer to the new definition. The proof system for this version consists of the axioms and rules of the general system, together with the new axiom:

(MF) ⊢κ̄ ¬ist(κ₁, φ) if Vocab(κ̄ * κ₁, φ) ⊈ Vocab

The completeness proof for this system is structurally similar to the one described in this paper. The only new points are those that arise out of the liberal definition of Vocab.

Related Work

Our work is largely based on McCarthy's ideas on context. McCarthy's research [McCarthy, 1987; McCarthy, 1993] in formalizing common sense has led him to believe that in order to achieve human-like generality in reasoning, we need to develop a formal theory of context. The key idea in McCarthy's proposal was to treat contexts as formal objects, which enables one to state that a proposition is true in a context: ist(κ, φ), where φ is a proposition and κ is a context. This permits axiomatizations in a limited context to be expanded so as to transcend their original limitations. There has been other research done in this area; most notable is the work of Lifschitz, Shoham, and Guha. We briefly treat each in turn.

Two contexts can differ in, at least, three ways: they may have different vocabularies; they may have the same vocabulary but describe different states of affairs; or (in the first order case) they may have the same vocabulary (i.e., language) but treat it differently (e.g., the arities may not be the same). The first two differences were studied in [Buvač, 1992], and led to two different views on the use of context. Lifschitz's early note on formalizing context [Lifschitz, 1986] concentrates on the third difference. Shoham, in his work on contexts, concentrates on the second difference [Shoham, 1991].
Every proposition is meaningful in every context, but the same proposition can have different truth values in different contexts. Shoham approached the task of formalizing context from the perspective of modal and non-classical logics. He defines a propositional language with an analogue to the ist modality, and a relation κ₁ ≥ κ₂, expressing that context κ₁ is as general as context κ₂. Drawing on the intuitive analogy between a context κ and the proposition current-context(κ), Shoham identifies the set of contexts with the set of propositions. This enables him to define truth in a context, ist(κ, p), in terms of the conditional current-context(κ) → p, where → is interpreted as some form of intuitionistic or relevance implication. His paper gives a list of 14 benchmark sentences which characterize this implication.

Guha's dissertation contains a number of examples of context use. These demonstrate how reasoning with contexts should behave, and which properties a formalization of context should exhibit. The Cyc knowledge base [Guha and Lenat, 1990], which is the main motivation for Guha's context research, is made up of many theories, called micro-theories, describing different aspects of the world. Guha has tailored the design of micro-theories after contexts.

There is also a clear parallel between the logic of context and the modal logics of knowledge and belief [Halpern and Moses, 1992]. The modality ist(κ, φ) may be interpreted as expressing that the agent κ knows or believes the sentence φ. In the case where there is only one context, our formal system collapses to a normal system of modal logic (with two additional axiom schemas (A+) and (A-)). This is analogous to the way logics of knowledge and belief collapse to a normal system of modal logic in the case of a single agent.
However, the logics of knowledge and belief differ from our logic of contexts in a number of ways. Firstly, logics of knowledge and belief do not deal with variable vocabularies and the corresponding partiality. Furthermore, logics of knowledge and belief are usually ascribed possible world semantics; consequently, an agent's belief is modeled by relations between worlds. Modeling truth or validity in a context by a relation between worlds would not be intuitive, because we want contexts to be reified as first class objects in the semantics. This will allow us (in the predicate case) to state relations between contexts, define operations on contexts, and specify how sentences from one context can be lifted into another context.

418 Buvac

Conclusions and Future Work

Our goal is to extend the system to a full quantificational logic. One advantage of a quantificational system is that it enables us to express relations between contexts and operations on contexts, and to state lifting rules which describe how a fact from one context can be used in another context. However, in the presence of context variables it might not be possible to define the vocabulary of a sentence without knowing which object a variable is bound to. Therefore the first step in this direction is to examine propositional systems with dynamic definitions of meaningfulness.

We also plan to define non-Hilbert style formal systems for context. Probably the most relevant is a natural deduction system, which would be in line with McCarthy's original proposal of treating contextual reasoning as a strong version of natural deduction. In such a system, entering a context would correspond to making an assumption in natural deduction, while exiting a context corresponds to discharging an assumption.

Finally, it would be interesting to show some formal properties of our logic. These include defining a decision procedure, in the style of [Mints, 1992].
Acknowledgements

The authors would like to thank Tom Costello, R. V. Guha, Furio Honsell, John McCarthy, Grigorii Mints and Carolyn Talcott for their valuable comments. This research is supported in part by the Advanced Research Projects Agency, ARPA Order 8607, monitored by NASA Ames Research Center under grant NAG 2-581, by NASA Ames Research Center under grant NCC 2-537, NSF grant CCR-8915663 and DARPA contract NAG2-703.

References

Bochvar, D. A. 1972. Two papers on partial predicate calculus. Technical Report STAN-CS-280-72, Department of Computer Science, Stanford University. Translation of Bochvar's papers originally published in 1938 and 1943.

Buvač, Saša 1992. Context in AI. Unpublished manuscript.

de Kleer, Johan 1986. An assumption-based truth maintenance system. Artificial Intelligence 28:127-162.

Genesereth, Michael R. and Fikes, Richard E. 1992. Knowledge interchange format, version 3.0, reference manual. Technical Report Logic-92-1, Logic Group, Stanford University.

Grosz, Barbara J. 1977. A representation and use of focus in a system for understanding dialogs. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence. Morgan Kaufmann Publishers Inc.

Guha, Ramanathan V. and Lenat, Douglas B. 1990. Cyc: A midterm report. AI Magazine 11(3):32-59.

Guha, Ramanathan V. 1991. Contexts: A Formalization and Some Applications. Ph.D. Dissertation, Stanford University. Also published as technical report STAN-CS-91-1399-Thesis.

Halpern, Joseph Y. and Moses, Yoram 1992. A guide to completeness and complexity for modal logics of knowledge and belief. Artificial Intelligence 54:319-379.

Kleene, Stephen C. 1952. Introduction to Metamathematics. North-Holland Publishing Company.

Lifschitz, Vladimir 1986. On formalizing contexts. Unpublished manuscript.

McCarthy, John 1987. Generality in artificial intelligence. Comm. of the ACM 30(12):1030-1035. Reprinted in [McCarthy, 1990].

McCarthy, John 1990.
Formalizing Common Sense: Papers by John McCarthy. Ablex Publishing Corporation, 355 Chestnut Street, Norwood, NJ 07648.

McCarthy, John 1993. Notes on formalizing context. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence. To appear.

Mints, Grigorii E. 1992. Lewis' systems and system T (1965-1973). In Selected Papers in Proof Theory. Bibliopolis and North-Holland.

Shoham, Yoav 1991. Varieties of context. In Lifschitz, Vladimir, editor 1991, Artificial Intelligence and Mathematical Theory of Computation: Papers in Honor of John McCarthy. Academic Press.
Generating Explicit Orderings for Non-monotonic Logics

James Cussens
Centre for Logic and Probability in IT
King's College, Strand, London, WC2R 2LS, UK
j.cussens@elm.cc.kcl.ac.uk

Anthony Hunter
Department of Computing
Imperial College, 180 Queen's Gate, London, SW7 2BZ, UK
abh@doc.ic.ac.uk

Ashwin Srinivasan
Programming Research Group
Oxford University Computing Laboratory, 11 Keble Road, Oxford, OX1 3QD, UK
ashwin.srinivasan@prg.ox.ac.uk

Abstract

For non-monotonic reasoning, explicit orderings over formulae offer an important solution to problems such as 'multiple extensions'. However, a criticism of such a solution is that it is not clear, in general, from where the orderings should be obtained. Here we show how orderings can be derived from statistical information about the domain which the formulae cover. For this we provide an overview of prioritized logics, a general class of logics that incorporate explicit orderings over formulae. This class of logics has been shown elsewhere to capture a wide variety of proof-theoretic approaches to non-monotonic reasoning, and in particular, to highlight the role of preferences, both implicit and explicit, in such proof theory. We take one particular prioritized logic, called SF logic, and describe an experimental approach for comparing this logic with an important example of a logic that does not use explicit orderings of preference, namely Horn clause logic with negation-as-failure. Finally, we present the results of this comparison, showing how SF logic is more skeptical and more accurate than negation-as-failure.

Keywords: non-monotonic reasoning, statistical inference, prioritized logics, machine learning.

Introduction

Within the class of non-monotonic logics and associated systems such as inheritance hierarchies, there is a dichotomy between those formalisms that incorporate explicit notions of preference over formulae, and those that do not.
Even though using explicit orderings offers an effective mechanism for obviating certain kinds of 'multiple extension' problems, their use remains controversial. A major criticism is that it is unclear where the orderings come from. We address this criticism by arguing that the orderings should be derived from statistical information generated from the domain over which they operate. If we delineate the kind of information about the domain that we require, then there are generic mappings from this information into the set of orderings over data.

The structure of the paper is as follows. First, we provide an overview of prioritized logics, a general class of logics that incorporate explicit preferences over formulae. Second, we take one particular prioritized logic, SF logic, and describe an experimental approach to comparing this logic with an important example of a logic that does not use explicit orderings of preference, namely Horn clause logic with negation-as-failure. Third, we take SF logic and show how we can generate explicit orderings over the data using statistical inference, and finally, we present the results of the comparison, showing how SF logic is more skeptical and more accurate than negation-as-failure.

Overview of prioritized logics

In order to provide a general framework for logics with explicit orderings, we use the family of prioritized logics [Hunter, 1992; Hunter, 1993]. Within this family, each member is such that:

420 Cussens
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

- Each formula of the logic is labelled.
- The rules of inference for the logic are augmented with rules of manipulation for the labels.
- The labels correspond to a partially-ordered structure.

A prioritized logic is thus an instance of a labelled deductive system [Gabbay, 1991a; Gabbay, 1991b; Gabbay and de Queiroz, 1993].
A prioritized logic can be used for non-monotonic reasoning by defining a consequence relation that allows the inference of the formula with a label that is 'most preferred' according to some preference criterion. Furthermore, we can present a wide variety of existing non-monotonic logics in this framework. In particular, we can explore the notion of implicit or explicit preference prevalent in a diverse class of non-monotonic logics. For example, by adopting appropriate one-to-one rewrites of formulae into prioritized logic formulae, we can show how the propositional form of a number of key non-monotonic logics, including negation-as-failure with general logic programs, ordered logic, LDR, and a skeptical version of inheritance hierarchies, can be viewed as using preferences in an essentially equivalent fashion [Hunter, 1993].

For this paper, we use a member of the prioritized logics family called SF logic, defined as follows. The language is composed of labelled formulae of the following form, where α₀, ..., αₙ, β are unground literals, i is a unique label, and n ≥ 0:

i : α₀ ∧ ... ∧ αₙ → β

We call these formulae SF rules. We also allow uniquely labelled ground positive and negative literals, which we call SF facts. A database Δ is a tuple (Γ, Ω, ≻, σ), where Γ is a set of SF rules and facts, Ω is [0, 1] × [0, 1], ≻ is some partial-ordering relation over Ω, and σ is a map from labels into Ω. This means each label corresponds to a pair of reals, and in this sense the ordering is two-dimensional. We use [0, 1] × [0, 1] only as a convenient representation for two-dimensional partial orderings. For the SF facts, σ maps to (1, 1).

There is more than one intuitive way of combining these two dimensions of values to generate a single poset (Ω, ≻). We define the ≻ relation in terms of ≥, the usual ordering relation for the real numbers. Consider the following definitions for the ≻ relation.
Definition 1  (i, p) ≻ (j, q) iff (i > j) or (i = j and p > q)

Definition 2  (i, p) ≻ (j, q) iff (i > j and p > q)

Definition 1 imposes a total ordering on Ω, where the ordering on the first label takes precedence, and the second label is only used as a 'tie-breaker'. Definition 2 defines a non-total subset of the first relation, where both dimensions play an equally important role. The ≻ relation can be used to resolve conflicts in the arguments that emanate from Γ, and is used as such in the consequence relation for a prioritized logic.

The SF consequence relation ⊢ allows an inference α if α is proposed and undefeated, or if α is a fact. It is proposed if and only if there is an argument for α such that all conditions for α are satisfied by recursion. It is undefeated if and only if there is no more preferred argument for the complement of α. For all databases Δ, atomic labels i, unground literals α, groundings μ, and ground literals δ, the ⊢ relation is defined as follows, where δ̄ denotes the complement of δ (if δ is a positive literal then δ̄ = ¬δ, and the complement of ¬δ is δ):

Definition 3
Δ ⊢ δ if i : δ ∈ Δ and σ(i) = (1, 1)
Δ ⊢ δ if ∃i [proposed(Δ, i, δ) and undefeated(Δ, i, δ)]
proposed(Δ, i, δ) iff ∃β₀, ..., βₙ, β, μ [i : β₀ ∧ ... ∧ βₙ → β ∈ Δ and μ(β) = δ and Δ ⊢ μ(β₀), ..., Δ ⊢ μ(βₙ)]
undefeated(Δ, i, δ) iff ∀j [proposed(Δ, j, δ̄) ⇒ σ(i) ≻ σ(j)]

Suppose Γ = {r : α(a), p : α(x) → β(x), q : α(x) → ¬β(x)}, where σ(p) = (0.6, 0.7), σ(q) = (0.5, 0.8) and σ(r) = (1, 1). Using Definition 1, σ(p) ≻ σ(q), so Δ ⊢ β(a) and Δ ⊢ α(a) hold. Using Definition 2, however, we have that σ(p) ⊁ σ(q) and σ(q) ⊁ σ(p), so Δ ⊬ β(a) and Δ ⊬ ¬β(a), but still Δ ⊢ α(a). This illustrates how Definition 2 captures a more skeptical logic.

To show the value of generating explicit orderings, we now want to compare SF logic with existing non-monotonic logics. However, instead of undertaking this comparison purely on theoretical grounds, we do an empirical comparison with Horn clause logic augmented with negation-as-failure (Prolog).
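The two preference definitions and the worked example can be sketched in Python; the string encoding of ground literals (with "~" for classical negation) and the recursive implementation are our own illustrative choices, not the paper's code.

```python
def pref1(a, b):
    # Definition 1: lexicographic and total; Python compares the
    # label pairs (i, p) exactly this way.
    return a > b

def pref2(a, b):
    # Definition 2: strict dominance in both dimensions (non-total).
    return a[0] > b[0] and a[1] > b[1]

def compl(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

# Gamma = {r: alpha(a),  p: alpha(x) -> beta(x),  q: alpha(x) -> ~beta(x)},
# represented here by the ground instances that matter for the example.
facts = {"r": "alpha(a)"}
rules = {"p": (["alpha(a)"], "beta(a)"),
         "q": (["alpha(a)"], "~beta(a)")}
sigma = {"r": (1.0, 1.0), "p": (0.6, 0.7), "q": (0.5, 0.8)}

def derives(goal, pref):
    """Delta |- goal: goal is a fact, or is proposed by some rule whose
    label is preferred over every rule proposing the complement."""
    if goal in facts.values():
        return True
    proposers = [i for i, (body, head) in rules.items()
                 if head == goal and all(derives(b, pref) for b in body)]
    rivals = [j for j, (body, head) in rules.items()
              if head == compl(goal) and all(derives(b, pref) for b in body)]
    return any(all(pref(sigma[i], sigma[j]) for j in rivals)
               for i in proposers)

# Definition 1 resolves the conflict in favour of p ...
assert derives("beta(a)", pref1) and not derives("~beta(a)", pref1)
# ... while Definition 2 is more skeptical: neither conclusion follows,
# though the fact alpha(a) still does.
assert not derives("beta(a)", pref2) and not derives("~beta(a)", pref2)
assert derives("alpha(a)", pref2)
```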
To support this, we use a machine learning algorithm, Golem [Muggleton and Feng, 1990], to generate definite clauses. Golem is an inductive logic programming approach to learning [Muggleton, 1991; Muggleton, 1992]. Using Golem means that significantly large numbers of examples can be used to generate these clauses. This facilitates the empirical study, and supports the statistical inference used for generating the explicit ordering. The definite clauses are used directly by Prolog, and used via a rewrite by SF logic.

We assume a set of ground literals D which express relevant facts about the domain in question, and we also assume a target predicate symbol β. Since Prolog does not allow classical negation, we adopt the following non-logical convention: for a predicate symbol β, we represent negative examples using the predicate symbol not-β. Golem learns definite clauses where the head of the clause has the target predicate symbol β. This is done by using a training set Tr of N randomly chosen literals from D, where each of these literals has either β or not-β as predicate symbol. To learn these clauses Golem uses background knowledge, which is another subset of D, where none of the literals has β or not-β as a predicate symbol. The literals in the antecedents of the learnt clauses use the predicate symbols from the background knowledge.

For example, given background knowledge {α(c₁), α(c₂), α(c₄), α(c₇), α(c₈)} and training examples {β(c₁), β(c₂), not-β(c₃)}, Golem would learn the clause α(x) → β(x), which has training accuracy 100%. In practice we use significantly larger sets of background knowledge and training examples. Also, we allow Golem to induce clauses with training accuracy below 100%, since learning completely accurate rules is unrealistic in many domains.

The induced clauses are tested using testing examples: ground literals with predicate symbol either β or not-β, which are randomly selected from D\Tr.
In the normal execution of Golem these testing examples are treated as queries to the induced set of definite clauses and are evaluated using ⊢P, the Prolog consequence relation. We define a function f to evaluate each test example. f takes Δ, the union of the learnt rules and the background knowledge, and the test example β(c̄) or not-β(c̄), where c̄ is a tuple of ground terms, and returns an evaluation of Δ's prediction concerning the test example. (Recall that none of the ground literals in the background knowledge have β as a predicate symbol.)

Definition 4
f(Δ, β(c̄)) = correct if Δ ⊢P β(c̄)
f(Δ, β(c̄)) = incorrect if Δ ⊬P β(c̄)
f(Δ, not-β(c̄)) = correct if Δ ⊬P β(c̄)
f(Δ, not-β(c̄)) = incorrect if Δ ⊢P β(c̄)

To continue the above example, suppose that {β(c₄), not-β(c₅), β(c₆), not-β(c₇)} were the test examples; then we would have f(Δ, β(c₄)) = f(Δ, not-β(c₅)) = correct and f(Δ, β(c₆)) = f(Δ, not-β(c₇)) = incorrect. The clause α(x) → β(x) would then have the extremely poor test accuracy of 50%.

We generate SF rules in two stages. First, we run Golem with target predicate β, then we rerun it with target predicate not-β. To get the SF rules, we take the union of the two sets of clauses, rewrite the not-β symbol to the negated symbol ¬β, uniquely label each clause, and provide a map σ from the labels into Ω. This map is determined by information contained in the training data, and methods for defining it are described in the next section. Let Δ denote the union of the SF rules with the background data. The examples from the test set are then used to query Δ. Suppose γ(c̄) were such an example, where either γ = β or γ = ¬β; then one of the following obtains: (1) Δ ⊢ γ(c̄); (2) Δ ⊢ γ̄(c̄); or (3) Δ ⊬ γ(c̄) and Δ ⊬ γ̄(c̄). We define the function g to evaluate each example (note that for SF logic we have the extra category 'undecided').
Definition 5
g(Δ, γ(c̄)) = correct if Δ ⊢ γ(c̄)
g(Δ, γ(c̄)) = incorrect if Δ ⊢ γ̄(c̄)
g(Δ, γ(c̄)) = undecided if Δ ⊬ γ(c̄) and Δ ⊬ γ̄(c̄)

Generating preference orderings

We now describe how to elicit a preference ordering over formulae by using facts about the domain, specifically facts from that subset of them which constitutes the training set of examples. To construct a preference ordering over the induced formulae, we find a pair of values which measure how well confirmed each SF rule i : α → β is by the training data. We then map the unique label associated with each SF rule to this pair of values in Ω via the mapping σ.

For the first value, we calculate an estimate, denoted p̂, of the probability P(β|α) = p. p is the probability that a (randomly chosen) example, given that it satisfies α, also satisfies β. Equivalently, it is the proportion of those examples that satisfy α which also satisfy β. p is an obvious choice as a measure of preference: it is the probability that the rule α → β correctly classifies examples which it covers, i.e., examples which satisfy its antecedent α. Unfortunately, the value p cannot be determined without examining all individuals in the domain that satisfy α and determining what proportion of them also satisfy β. This is infeasible for any domain large enough to be of interest. We show below how various estimates p̂ are constructed.

Clearly, we want p̂ close to p, i.e., we want to minimise d = |p̂ - p|. The value d would be an ideal measure of the precision of the estimate p̂, but it will be unknown, since p is unknown. Instead, we either use relative cover (see below) or P(d < t) for some fixed t, as a measure of the reliability of p̂. This gives us our second value. For details of the various estimates used, see [Cussens, 1993]. We give only a brief sketch here, since the important point is that we can use straightforward and established statistical techniques to derive labels.
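The evaluation functions f (Definition 4) and g (Definition 5) above can be sketched as follows; the sets of derivable literals stand in for the Prolog and SF consequence relations and are purely hypothetical, chosen only to exercise each branch.

```python
def compl(lit):
    # "~" marks classical negation in the SF encoding used here.
    return lit[1:] if lit.startswith("~") else "~" + lit

def f(prolog_provable, example):
    # Definition 4: a not_beta(c) example is judged correct exactly
    # when beta(c) is NOT Prolog-derivable (negation as failure).
    if example.startswith("not_"):
        pos = example[len("not_"):]
        return "correct" if pos not in prolog_provable else "incorrect"
    return "correct" if example in prolog_provable else "incorrect"

def g(sf_provable, example):
    # Definition 5: SF logic proves explicit negations too, and is
    # undecided when neither the example nor its complement is proved.
    if example in sf_provable:
        return "correct"
    if compl(example) in sf_provable:
        return "incorrect"
    return "undecided"

prolog = {"beta(c4)", "beta(c7)"}   # hypothetical Prolog consequences
sf = {"beta(c4)", "~beta(c5)"}      # hypothetical SF consequences

assert f(prolog, "beta(c4)") == "correct"
assert f(prolog, "not_beta(c7)") == "incorrect"
assert g(sf, "~beta(c5)") == "correct"
assert g(sf, "beta(c5)") == "incorrect"
assert g(sf, "beta(c6)") == "undecided"
```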
Relative Frequency  We simply set p̂ = r/n, where n is the number of training examples satisfying α and r the number satisfying α ∧ β. The reliability of relative frequency as an estimate was measured by relative cover (n/N), which is simply the proportion of training examples which satisfy α and hence 'fire' the rule α → β. We use relative cover since, for example, an estimate of p̂ = 1 is more reliable with r = 100, n = 100 than with r = 2, n = 2 (recall the example in the previous section, which had r/n = 1 but test accuracy of only 50%).

Bayesian  In Bayesian estimation of probabilities, a prior probability distribution over possible values of p is used. This is then updated to give a posterior distribution, the mean of which is used as a point estimate p̂ of p. Let λ be the mean of the prior distribution; we then have

p̂ = (r + Kλ) / (n + K)

The balance between r/n and λ is governed by the value K; K = 0 renders p̂ = r/n. Various values for K have been employed in the statistical and machine learning literature [Bishop et al., 1975; Cestnik, 1990; Cestnik and Bratko, 1991; Dzeroski et al., 1992]. Below we have used the value K = √n; for the properties of this particular estimate see [Bishop et al., 1975]. The value λ can be seen as a 'guess' at p prior to looking at domain information. We used an estimate of the value P(β) for λ, an approach common in the machine learning literature. P(d < t) was calculated by integrating between p̂ - t and p̂ + t on the posterior distribution. The actual value of t is not crucial. It affects the magnitude of P(d < t), but rarely affects the preference ordering. In our experiments t was set to 0.025.

Pseudo-Bayes  Like Bayesian, but

K = r(n - r) / (nλ - r)²
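The three estimates can be put side by side in a short sketch. The Bayes estimate is written here in the standard m-estimate form for a posterior mean, which is an assumption of this sketch, though one consistent with the remark that K = 0 gives back r/n.

```python
import math

# r = training examples satisfying alpha AND beta; n = examples
# satisfying alpha; lam = prior mean (an estimate of P(beta)).

def relative_frequency(r, n):
    return r / n

def bayes(r, n, lam, K):
    # Assumed m-estimate form of the posterior mean: K = 0 yields r/n,
    # and large K pulls the estimate toward the prior mean lam.
    return (r + K * lam) / (n + K)

def pseudo_bayes(r, n, lam):
    # K is itself estimated from the training data (r, n), which is why
    # the approach is only "pseudo"-Bayesian. Undefined when n*lam == r,
    # i.e. when the data exactly match the prior.
    K = r * (n - r) / (n * lam - r) ** 2
    return bayes(r, n, lam, K)

r, n, lam = 8, 10, 0.5
assert bayes(r, n, lam, 0) == relative_frequency(r, n)
# Both smoothed estimates fall between the prior mean and r/n:
for est in (bayes(r, n, lam, math.sqrt(n)), pseudo_bayes(r, n, lam)):
    assert lam < est < relative_frequency(r, n)
```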
Generating preference orderings in this way provides an alternative to orderings based on specificity (for ex- ample [Poole, 1985; Nute, 19881). In the context of prioritized logics, some of the issues of specificity and accuracy have been considered in [Cussens and Hunter, 1991; Cussens and Hunter, 19931, but there is a clear need to further clarify this relationship by building on more general results relating non-monotonic reasoning and probabilistic inference [Pearl, 1990; Bacchus, 1990; Bacchus et aE., 19921. A. preliminary empirical comparison In our preliminary comparison, we considered two do- mains. The first was for rules that predict whether a protein residue is part of an alpha-helix. These rules were defined in terms of relative position in a protein and various biochemical parameters. We call this do- main the protein domain. The second was for rules that predict the relative activity of drugs. These rules were defined in terms of the structure of the drug and they provided a partial ordering over the degrees of activity of the drugs. We call this domain the drugs domain. For the protein domain, from a training set of 1778 examples together with background kno.wledge consist- ing of 6940 ground literals, Golem generated 100 clauses for the predicate symbol alpha-helix and 99 clauses for the predicate symbol not-alpha-helix and hence 199 SF clauses for alpha-helix and ~aipha-helix. For the drugs domain, from a training set of 1762 examples to- gether with background knowledge consisting of 2106 ground literals, Golem generated 23 clauses for the bi- nary predicate greater-activity and 24 for the predi- cate symbol not-greater-activity, giving 47 SF rules for greater-activity and lgreater-activity. For the protein domain, Table 1 was formed using a test set of 401 not-alpha-helix examples and 322 ad- pha-helix examples. 
For the drugs domain, Table 2 was formed according to a test set of 513 greater-activity examples and 513 not-greater-activity examples.

Accuracy  The key observation from these tables is that if we define accuracy by the ratio correct/(correct + incorrect), as we do in the 'Accuracy' column, then the performance of Prolog is inferior to that of SF logic. Furthermore, the difference in accuracy between the two variants of SF is negligible. The marked improvement in accuracy of SF logic over Prolog is contingent on the assumption that we can ignore the examples classified as undecided. In other words, in this interpretation, the increased skepticism of the SF logic is not regarded negatively. However, this is only one way of interpreting the undecided category. If accuracy is defined as the percentage of correct examples, as in the 'Correct' column, then SF is markedly less accurate.

Comparing Definitions 1 and 2  When comparing the variants of SF logic, we find, after rounding, that the same results are obtained using Definition 1 for the three different estimation techniques; this is because they all return similar values. Also, since Definition 1 gives a total ordering on the labels, only those examples that are not covered by any rule are undecided. Using the more skeptical Definition 2, we find that accuracy was close or equal to Definition 1 in all cases.

Skepticism and the Protein Domain  In the protein domain, skepticism, as measured by the percentage of undecided examples, increases significantly as we move from Definition 1 to Definition 2. This increase was greatest using relative frequency, since there, the first value of its label, p̂ = r/n, which estimates the accuracy of a rule, can be high even if n, and consequently n/N, is low.
Using Definition 2 with relative frequency, a rule is preferred over another if and only if both the first value of its label (p̂ = r/n) and the second (n/N) are greater than the respective parts of the label of the competing rule. So many rules with high r/n values but low n values will be preferred over competing rules using Definition 1, but not when using Definition 2. In contrast, for K = √n and pseudo-Bayes, values for p̂ substantially higher than the prior mean λ are only possible if n is reasonably high. If n is high then, usually, so will be the second part of the label, P(d < t). So in these cases, the second part of the label is usually high when the first part is, which explains the smaller increase in skepticism as we move from Definition 1 to Definition 2.

                                        Correct  Incorrect  Undecided  Accuracy
Prolog with alpha-helix clauses            58        42         0         58
Prolog with not-alpha-helix clauses        59        41         0         59
Definition 1 using relative frequency      53        31        16         63
Definition 2 using relative frequency      45        25        30         64
Definition 1 using K = √n                  53        31        16         63
Definition 2 using K = √n                  48        27        25         64
Definition 1 using pseudo-Bayes            53        31        16         63
Definition 2 using pseudo-Bayes            50        29        21         63

Table 1: The Protein Domain (all values are percentages)

                                        Correct  Incorrect  Undecided  Accuracy
Prolog with greater clauses                79        21         0         79
Prolog with not-greater clauses            80        20         0         80
Definition 1 using relative frequency      70         5        25         93
Definition 2 using relative frequency      70         5        25         93
Definition 1 using K = √n                  70         5        25         93
Definition 2 using K = √n                  70         5        25         93
Definition 1 using pseudo-Bayes            70         5        25         93
Definition 2 using pseudo-Bayes            70         5        25         93

Table 2: The Drugs Domain (all values are percentages)

Skepticism and the Drugs Domain  In the drugs domain, we have, after rounding, the same results for both variants of SF logic and all estimates. This is because conflicts between arguments in Γ occurred only for relatively few test examples.
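As a quick arithmetic check, the 'Accuracy' column is correct/(correct + incorrect) expressed as a rounded percentage; a few rows that are legible in the tables:

```python
# Accuracy convention used in Tables 1 and 2 (values in percent).

def accuracy(correct, incorrect):
    return round(100 * correct / (correct + incorrect))

assert accuracy(58, 42) == 58   # Prolog with alpha-helix clauses
assert accuracy(53, 31) == 63   # Definition 1 rows, protein domain
assert accuracy(45, 25) == 64   # Definition 2, relative frequency
assert accuracy(70, 5) == 93    # SF rows, drugs domain
```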
Summary of Empirical Comparison  If we allow increased skepticism, the results given here indicate how a richer formalism such as prioritized logics can be used for increased accuracy in reasoning. Furthermore, this shows how a clearer understanding of generating explicit orderings in terms of statistical inference can support this improved capability.

Discussion

It has been widely acknowledged that non-monotonic logics are of critical importance for artificial intelligence, yet there is some dissatisfaction with the rate and nature of progress in the development of non-monotonic logics that address the needs of artificial intelligence.

Many developments in non-monotonic logics have been based on a set of reasoning problems concerning, for example, inheritance, multiple extensions, and cumulativity. Existing non-monotonic logics can be used to capture these problems in an intuitive fashion by appropriate encoding. Yet for the user of these kinds of formalism, it is not clear which is the most appropriate.

We believe that developing non-monotonic logics for artificial intelligence is, in part, an engineering problem, and that the space of possible logics that could constitute a solution is enormous. It is therefore necessary to augment theoretical analyses of non-monotonic logics with sound empirical analyses. This should then focus the endeavour on improving the performance of reasoning with uncertain information, and as a matter of course should raise further important and interesting theoretical questions.

Acknowledgements

This work has been funded by UK SERC grants GR/G 29861, GR/G 29878 and GR/G 29854. The authors are grateful for helpful feedback from Dov Gabbay, Donald Gillies and Stephen Muggleton, and also from two anonymous referees.

References

Bacchus, Fahiem; Grove, Adam; Halpern, Joseph Y.; and Koller, Daphne 1992. From statistics to belief. In Tenth National Conference on Artificial Intelligence (AAAI-92). 602-608.

Bacchus, Fahiem 1990.
Representing and Reasoning with Probabilistic Knowledge: A Logical Approach to Probabilities. MIT Press, Cambridge, MA.

Bishop, Yvonne M. M.; Fienberg, Stephen E.; and Holland, Paul W. 1975. Discrete Multivariate Analysis: Theory and Practice. MIT Press, Cambridge, Mass.

Cestnik, Bojan and Bratko, Ivan 1991. On estimating probabilities in tree pruning. In Kodratoff, Yves, editor 1991, Machine Learning: EWSL-91. Lecture Notes in Artificial Intelligence 482, Springer-Verlag. 138-150.

Cestnik, Bojan 1990. Estimating probabilities: A crucial task in machine learning. In Aiello, L., editor 1990, ECAI-90. Pitman. 147-149.

Cussens, James and Hunter, Anthony 1991. Using defeasible logic for a window on a probabilistic database: some preliminary notes. In Kruse, R. and Seigel, P., editors 1991, Symbolic and Quantitative Approaches for Uncertainty. Lecture Notes in Computer Science 548, Springer-Verlag. 146-152.

Cussens, James and Hunter, Anthony 1993. Using maximum entropy in a defeasible logic with probabilistic semantics. In Information Processing and the Management of Uncertainty in Knowledge-Based Systems (IPMU '92). Lecture Notes in Computer Science, Springer-Verlag. Forthcoming.

Cussens, James 1993. Bayes and pseudo-Bayes estimates of conditional probability and their reliability. In European Conference on Machine Learning (ECML-93). Springer-Verlag.

Dzeroski, Sašo; Cestnik, Bojan; and Petrovski, Igor 1992. The use of Bayesian probability estimates in rule induction. Turing Institute Research Memorandum TIRM-92-051, The Turing Institute, Glasgow.

Gabbay, Dov and de Queiroz, Ruy 1993. Extending the Curry-Howard interpretation to linear, relevance and other resource logics. Journal of Symbolic Logic. Forthcoming.

Gabbay, Dov 1991a. Abduction in labelled deductive systems: A conceptual abstract. In Kruse, R. and Seigel, P., editors 1991, Symbolic and Quantitative Approaches for Uncertainty.
Question-based Acquisition of Conceptual Indices
Catherine Baudin*, Smadar Kedar**, Jody Gevins Underwood***
Artificial Intelligence Research Branch, NASA Ames Research Center, Mail Stop 269-2, Moffett Field, CA 94035
baudin@ptolemy.arc.nasa.gov, kedar@ptolemy.arc.nasa.gov, gevins@ptolemy.arc.nasa.gov
Vinod Baya
Center For Design Research, Bldg. 530, Duena Street, Stanford University
baya@sunrise.stanford.edu
Abstract Information retrieval systems that use conceptual indexing to describe the information content perform better than syntactic indexing methods based on words from a text. However, since conceptual indices represent the semantics of a piece of information, it is difficult to extract them automatically from a document, and it is tedious to build them manually. We implemented an information retrieval system that acquires conceptual indices of text, graphics and videotaped documents. Our approach is to use an underlying model of the domain covered by the documents to constrain the user's queries. This facilitates question-based acquisition of conceptual indices: converting user queries into indices which accurately model the content of the documents, and can be reused. We discuss Dedal, a system that facilitates the indexing and retrieval of design documents in the mechanical engineering domain. A user formulates a query to the system, and if there is no corresponding index, Dedal uses the underlying domain model and a set of retrieval heuristics to approximate the retrieval, and asks for confirmation from the user. If the user finds the retrieved information relevant, Dedal acquires a new index based on the query. We demonstrate the relevance and coverage of the acquired indices through experimentation. 1. Motivation Information retrieval systems based on conceptual indexing can access the underlying meaning of text, graphics or videotaped documents.
Conceptual indices focus on the important concepts of a domain (the semantics) rather than on the multiple ways these concepts are represented in a document (the syntax). This facilitates information retrieval [Salton et al. 89][Hayes et al. 89] because: (1) the number of concepts in a document is smaller than the number of their possible syntactic representations, thus facilitating vocabulary selection when formulating queries to a system, and (2) since conceptual indices represent the content of a piece of information they can be used by a reasoning component to facilitate the match between a query and the information in the documents [Baudin et al. 92b]. The following example, extracted from a technical design report, illustrates the difference between conceptual and syntactic indexing. “The inner hub holds the steel friction disks and causes them to rotate when the road input is transmitted through the connecting link to the rotating inner shaft...” This paragraph can be indexed by words from the text such as inner hub, friction disk, inner shaft, connecting links. However, the content of this text refers to concepts like the function of the inner-hub, or the relation between the road input and the way the device works. Accessing these concepts enables an information retrieval system to accurately answer questions about the function of each part of the device, their operation and the way they interact. Conceptual indexing combined with knowledge of the relations among the objects in a domain can be used by a reasoning component to draw inferences about how to locate a piece of information.
(*) Employee of RECOM Technologies. (**) Research performed while employed by Sterling Software Inc. and while a visiting scientist at the Institute for the Learning Sciences, Northwestern University. (***) Employee of Sterling Software Inc.
In this example, the content of the above paragraph can be summarized by one concept: “operation of disk stack” to convey the fact that it describes how the disk stack device works. A reasoning component can then infer that the paragraph might describe the function of each part of the disk stack and the way they interact. In this case, the component hub is a subpart of the disk stack mechanism and its function is referenced in the paragraph. However, since conceptual indices represent the underlying meaning of a piece of information, the language used to build these indices is usually different from the language in the documents. In particular, conceptual indices can be complex entities that involve several objects and relations. This abstraction level mismatch between the indexing language and the language used to convey the information makes it difficult to automatically extract conceptual indices, for instance by interpreting sentences in a text [Mauldin 91][Tong et al. 89]. On the other hand, the creation of conceptual indices by human indexers is a labor intensive task that is difficult to perform exhaustively. This is particularly true for a large volume of documentation where concepts are closely interrelated, as is the case for technical documents that describe the operation, diagnosis or design of complex artifacts. Our approach is to use a conceptual query language plus feedback from the user on the relevance of the documents retrieved in response to a query, to incrementally acquire new conceptual indices for that document. The user formulates a query to the system. If no document description exactly matches the query, the system approximates the retrieval and prompts the user for feedback on the relevance of the references retrieved.
From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.
If a reference is confirmed, the query is turned into a new index. This extends relevance feedback techniques [Salton et al. 68][Salton et al. 88] to the acquisition of conceptual indices. This approach uses a question-based indexing paradigm [Osgood et al. 91][Schank 91][Mabogunje 90] where the query language and the indexing language have the same structure and use the same vocabulary. The assumption is that the questions asked by users indicate the objects and relationships that are relevant to describe the content of the documents at a conceptual level appropriate for a class of users. However, in order to use the queries to acquire new indices the following conditions must be met by the query language: 1. Reusability: The query language must be general enough to create indices that will match a class of queries. 2. Relevance: The query language must be able to describe the information that the user is interested in. Articulating queries to acquire information in order to achieve a goal is in general a difficult task [Croft et al. 90][Graesser et al. 85]. In our approach, the query formulation is constrained by a model of the domain covered by the documents and a model of the type of information designers are interested in (see section 3). 3. Context independence: The query language must be able to generate indices that can be reused in different situations - that is, for different users and different tasks. In the next two sections we describe Dedal, a system that acquires conceptual indices to facilitate the reuse of multimedia design documents in the mechanical engineering domain. In section 5 we discuss three experiments conducted at Stanford's Center for Design Research and at NASA Ames where conceptual indices were created by Dedal while mechanical engineers used the system to access information about a shock absorber design. 3. We developed Dedal, an information retrieval system that uses conceptual
indexing to represent the content of multimedia text, graphics and videotaped design information. Dedal is currently applied to documents of mechanical engineering design. It is an interface to records such as meeting summaries, pages of a designer's notebook, technical reports, CAD drawings and videotaped conversations between designers. 3.1 A Conceptual Language to Query and Index Design Information Based on studies of the information seeking behavior of designers conducted at Stanford's Center for Design Research and NASA Ames [Baya et al. 92], we identified a language to describe and query design information [Baudin et al. 92a]. This language combines concepts from a model of the artifact being designed with a task vocabulary representing the classes of design topics usually covered by design documents. For instance, “function,” “operation,” or “alternative” are topics of the task vocabulary. A conceptual index can be seen as a structured entity made of two parts: the body of the index, which represents the content of a piece of information, and the reference part, which points to a region in a document. In Dedal, an index has the following form: <topic T subject S level-of-detail L medium M>, where S is a subject from a domain model and T, L and M are members of the task vocabulary. The reference part of an index contains a pointer to the record and segment corresponding to the starting location of the information in a document (e.g. document name and page number or video counter). A segment of information is described by several conceptual indices, each of which partially describes its content. For instance: “The inner hub holds the steel friction disks and causes them to rotate” is part of a paragraph on page 20 of the record report-333.
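The two-part index structure described above (a conceptual body plus a record/segment reference) can be sketched in Python. This is an illustrative sketch only, with hypothetical names; the paper gives no code, only the `<topic ... subject ... level-of-detail ... medium ...>` pattern and the report-333 example.

```python
# Sketch of Dedal's conceptual index structure (hypothetical Python rendering).
from dataclasses import dataclass

@dataclass(frozen=True)
class IndexBody:
    topic: str                    # from the task vocabulary, e.g. "function"
    subjects: tuple               # concepts from the domain model
    level_of_detail: str = "any"  # e.g. "configuration", "detailed"
    medium: str = "any"           # e.g. "text", "equation", "video"

@dataclass(frozen=True)
class Reference:
    record: str                   # e.g. "report-333"
    segment: int                  # page number or video counter

@dataclass
class ConceptualIndex:
    body: IndexBody
    reference: Reference

def exact_match(query: IndexBody, indices):
    """Step 1 of retrieval: return references whose body equals the query."""
    return [ix.reference for ix in indices if ix.body == query]

# The paper's example index for page 20 of report-333:
i1 = ConceptualIndex(IndexBody("function", ("inner-hub",), "configuration", "text"),
                     Reference("report-333", 20))
q = IndexBody("function", ("inner-hub",), "configuration", "text")
print(exact_match(q, [i1]))   # -> [Reference(record='report-333', segment=20)]
```

Because queries share the index body's structure and vocabulary, exact match reduces to structural equality, which is what makes the later query-to-index conversion direct.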
It can be described by two indexing patterns: <topic function subject inner-hub level-of-detail configuration medium text in-record report-333 segment 20> <topic relation subject inner-hub and steel-friction-disks level-of-detail configuration medium text in-record report-333 segment 20>. The queries have the same structure as the body of an index and use the same vocabulary. A question such as: “How does the inner hub interact with the friction disks?” can be formulated in Dedal's language as: <get-information-about topic relation regarding subject inner-hub and steel-friction-disks with preferred medium equation>. 3.2 The domain model In the mechanical engineering design domain, the model includes a representation of the artifact structure, some aspects of its function, the main decision points and alternatives considered. It also includes concepts that are part of the problem but external to the device representation. The main relations in the model are isa, part-of, attribute-of, and depends-on (see Figure 1). The isa, part-of and attribute-of hierarchies are used by Dedal to compare a query with a given index. For instance in Figure 1, given that metal-disk is part of the disk-stack, the pattern “function of metal-disk” will be considered more specific than the pattern “function of disk-stack”. In the same way, the subject “resistive-force of disk-stack” is more specific than the subject “disk-stack”. 3.3 Retrieval strategy The retrieval module takes a query from the user as input, matches the question to the set of conceptual indices and returns an ordered list of references related to the question. The retrieval proceeds in two steps: (1) exact match: find the indices that exactly match the query and return the associated list of references. If the exact match fails: (2) approximate match: activate the proximity retrieval heuristics.
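The specificity comparison just described can be sketched as a walk up the domain hierarchies. The parent map below is a hypothetical stand-in for the model's isa/part-of/attribute-of edges, populated with the paper's Figure 1 examples.

```python
# Sketch of the specificity test from Section 3.2 (hypothetical data structure):
# a child-to-parent map collapsing the isa / part-of / attribute-of hierarchies.
PARENT = {
    "metal-disk": "disk-stack",                     # metal-disk is part-of disk-stack
    "resistive-force of disk-stack": "disk-stack",  # attribute-of disk-stack
}

def more_specific(subject_a: str, subject_b: str) -> bool:
    """True if subject_a lies below subject_b in the hierarchies, so a query
    about subject_a is more specific than one about subject_b."""
    node = subject_a
    while node in PARENT:
        node = PARENT[node]
        if node == subject_b:
            return True
    return False

assert more_specific("metal-disk", "disk-stack")
assert not more_specific("disk-stack", "metal-disk")
```

A real model would also need cycle protection and multiple parents per node; the single-parent map is the minimal version that supports the comparison in the text.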
Figure 1: Objects and relations in the domain model
Dedal currently uses fourteen proximity retrieval heuristics to find related answers to a question. For instance, segments described by concepts like “decision for lever material” and “alternative for lever material” are likely to be located in nearby regions of the documentation. The heuristics are described in detail in [Baudin et al. 92b]. Each retrieval step returns a list of references ordered according to a set of priority criteria. The user selects a reference and if the document is on line, goes to the corresponding segment of information (using the hypertext facility that supports the text and graphics documents). A user dissatisfied with the references retrieved can request more information and force Dedal to resume its search and retrieve other references. 4. Index Acquisition in Dedal Dedal acquires a new index in two phases: (1) an index creation phase, and (2) an index refinement phase. 4.1. Index Creation Figure 2 illustrates with an example how Dedal acquires a new index, given a user query and feedback from the user on the relevance of the documents retrieved. The index creation phase goes through the following steps: 1. Query formulation: The user's question in English is “what is the function of the hub?”. After the user selects the subject inner-hub from the domain model and the topic function from the task vocabulary, the corresponding query in Dedal is: <topic: function of subject: inner-hub> (In the following paragraphs we will use a shortened syntax for queries where the words topic and subject are omitted and where domain concepts are indicated in bold). 2. Query-Index mapping: Dedal tries to find an index that exactly matches the query. In this case, it does not find an exact match and applies a proximity heuristic to guess where the required information may be located. The heuristic states that any information describing how a mechanism works might also describe the function of its parts.
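This particular heuristic can be sketched as a query rewrite: relax `<function of X>` into `<operation of Y>` for each mechanism Y that X is a subpart of. The part-of table and function names are hypothetical, following the Figure 2 example; the paper describes fourteen such heuristics, of which this shows only one.

```python
# Sketch of one proximity retrieval heuristic (Section 4.1, step 2):
# "information describing how a mechanism works might also describe the
# function of its parts."
PART_OF = {"inner-hub": "disk-stack"}   # inner-hub is a subpart of disk-stack

def function_to_operation(topic: str, subject: str):
    """Return relaxed (topic, subject) candidates for the approximate match."""
    candidates = []
    if topic == "function":
        whole = subject
        while whole in PART_OF:          # climb the part-of chain
            whole = PART_OF[whole]
            candidates.append(("operation", whole))
    return candidates

print(function_to_operation("function", "inner-hub"))  # -> [('operation', 'disk-stack')]
```

The approximate match step would run each such rewrite in turn and rank the resulting references by the priority criteria mentioned in Section 3.3.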
In this case, given that inner-hub is a subpart of the disk-stack mechanism, Dedal matches the query “function of inner-hub” with two indices I1 and I2 pointing to two information regions describing the “operation of disk-stack”. 3. Relevance Feedback: The user looks at the two references retrieved, finds that the reference pointed to by the index I2 (page 12 in the record report-333) describes the function of the inner hub while the document associated with index I1 does not. The user rates the reference I2 as relevant. 4. Index Acquisition: The query “function of inner-hub” is more specific (see section 3) than the index “operation of disk-stack”. In this case Dedal creates a new index I3. The system now knows that page 12 of report-333 explicitly describes the function of the inner-hub. Each time a reference is retrieved by the approximate match and is relevant, Dedal attaches the reference of the selected index to the query, turning the query into a new index (as shown in step 4 in Figure 2). In addition, the procedure records the type of inference that relates each subject of the new question to the subject of the matching index. There are four types of inferences: identity, specialization, generalization and extension. These inferences determine the type of the subjects associated with the new index created. The type of a new subject is identity if this subject is identical to a subject of the matching index. The type of the new subject is a specialization if it is related to a subject of the matching index by a subpart or isa relationship, or if its value depends-on the value of the matching subject. The type of subject is a generalization if the matching subject is related to the new subject by a subpart or isa relation, or if its value depends-on the value of the new subject. The type of the new subject is an extension if it has no relations with any of the matching subjects.
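The four subject-inference types can be sketched as a small classifier over the domain relations. The relation table is a hypothetical stand-in for the model (it omits the depends-on clauses for brevity); only the decision order follows the text.

```python
# Sketch of the four subject-inference types of Section 4.1:
# identity, specialization, generalization, extension.
SUBPART_OR_ISA = {("inner-hub", "disk-stack")}   # (narrower, broader) pairs

def below(a: str, b: str) -> bool:
    """True if a is related to b by a subpart or isa link."""
    return (a, b) in SUBPART_OR_ISA

def inference_type(new_subject: str, matched_subject: str) -> str:
    if new_subject == matched_subject:
        return "identity"
    if below(new_subject, matched_subject):
        return "specialization"
    if below(matched_subject, new_subject):
        return "generalization"
    return "extension"

# The worked example: query "relation between solenoid and lever" matched
# against the index <operation, solenoid>:
assert inference_type("solenoid", "solenoid") == "identity"
assert inference_type("lever", "solenoid") == "extension"
```

These labels feed the ordering coefficient described next: human-indexer and identity subjects rank highest, extensions lowest.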
Finally, if an index is defined manually by a user, the type of its subjects is human-indexer. For instance, if the query “relation between solenoid and lever” matches the index: <topic: operation, subject: solenoid, reference: (meeting-10/2/91, 12)>, the new index will be: <topic: relation, subject1: solenoid and subject2: lever, reference: (meeting-10/2/91, 12)>, where type(subject1) = identity and type(subject2) = extension. When a query is matched to a new index created by Dedal, the type of each subject is taken into account in the determination of the ordering coefficient. The greatest confidence is attached to subjects in the following order: human-indexer, identity, specialization, generalization and extension. This means that there is high confidence in a new index created by a human while little confidence if the index is overgeneralized or provides an unrelated reference. 4.2 Index Refinement Two factors may impact the ability of an acquired index to accurately describe the associated information: (1) incompleteness of the domain model: If the model is missing the particular subject the user is interested in and the user selects a related subject, the approximate match might still retrieve a relevant document. In this case the user query does not exactly describe the information required by the user and the resulting index will be inaccurate; (2) multiple subject problem: when a query involves several subjects from the model, the user might feel satisfied with a document that refers to a subset of these subjects. For instance, if the query is of the form “relation between outer-cage, solenoid and lever”, the user might feel satisfied with a reference which only describes the relation between outer-cage and solenoid; the third argument, lever, will then incorrectly describe the content of the referenced document. The index refinement phase keeps track of the relevance of each subject in the newly acquired indices.
Each time a query Q matches an acquired index I, and a subject Sq of Q is related to the subject Si of the index I (where related means either the identity, a specialization, a generalization or an extension), the following procedure is activated: if the corresponding reference is relevant, the success rate of Si is incremented. If the reference retrieved is irrelevant, the failure rate of Si is incremented. The idea is that after some time, the indices that are suspect (whose failure rate is above a certain threshold) will be presented to a human indexer who will decide what indices should be maintained or deleted and what subjects should be dropped from the index. For example, if the question is “what component interacts with the lever?”, the corresponding query <relation (between) lever and $X> (where $X is a variable) matches the body of the index I: <relation (between) solenoid, lever, shaft>. If the match is rated by the user as relevant, the coefficient of success of the subject lever in index I will be reinforced. If the user indicates that the reference retrieved is not relevant, the coefficient of failure of subject lever in I will be reinforced. Eventually, if the index I fails to match any query about lever, the subject lever will be dropped.
1. User: what is the function of the inner-hub?
2. Exact match: Find a conceptual index with topic = function and subject = inner-hub ---> the retrieval fails.
Heuristic: If the query is function of X and X is a subpart of Y, look for documents that describe how Y works ---> find indices with topic = operation and subject = Y.
From the domain model: inner-hub is part of disk-stack.
Approximate match: Find an index with topic = operation and subject = disk-stack ---> Two indices are found:
I1: topic = operation, subject = disk-stack, level-of-detail = conceptual, in-record: meeting-12/2/90, in-segment: 2
I2: topic = operation, subject = disk-stack, level-of-detail = detailed, in-record: report-333, in-segment: 12
3. Relevance feedback: Get feedback from the user on the relevance of the information retrieved. ---> User: page 12 in the record report-333 is relevant.
4. Index acquisition: create a new index I3. I3: topic: function, subject: inner-hub, level-of-detail: detailed, in-record: report-333, in-segment: 12
Figure 2: Creating a new index
(a) index: EXP1-Q17, in-record: DAMPER-DRD-WINTER-1990, in-segment: 23. Topic: LOCATION of Subject: ARM. Created by rule: PART-OF, from question: Q15, from index G205: Topic: DESCRIPTION of Subject: ROTARY-DAMPER.
(b) index: EXP1-Q59, in-record: DAMPER-DRD-SPRING-1990, in-segment: 12. Topic: RELATION of Subjects: (SUSPENSION-SYSTEM, DAMPER). Created by rule: OR-RULE, from question: Q58, from index G329: Topic: RELATION of Subjects: (SUSPENSION-SYSTEM, CAR).
Figure 3: Two indices generated by Dedal
5. Experiments and Preliminary Results In this section we report on experiments and preliminary results to evaluate the effectiveness of Dedal's index acquisition. Index acquisition is considered effective by three criteria: reusability, relevance and context independence of the indices in future retrieval (see section 2 for a description of these criteria). We conducted experiments where we observed mechanical engineers using Dedal to ask questions in the context of a modification of a shock absorber designed at Stanford's Center for Design Research for Ford Motor Corporation [Baudin et al. 92a]. The engineers rated the references retrieved by Dedal as relevant or irrelevant. In these experiments we considered three contextual factors: the user, the problem being solved, and the specific goal that motivates each query.
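The per-subject bookkeeping of Section 4.2, which the experiments below exercise, can be sketched as success/failure counters with a suspect-flagging step. The class, threshold, and the collapse of "related" to simple membership are hypothetical simplifications.

```python
# Sketch of the Section 4.2 refinement bookkeeping (hypothetical structure):
# per-subject success and failure counts on an acquired index, with suspect
# subjects flagged for review by a human indexer.
from collections import defaultdict

class AcquiredIndex:
    def __init__(self, subjects):
        self.subjects = list(subjects)
        self.success = defaultdict(int)
        self.failure = defaultdict(int)

    def record_feedback(self, query_subjects, relevant: bool):
        """Update counts for each index subject touched by the query."""
        for s in self.subjects:
            if s in query_subjects:   # "related" collapsed to membership here
                (self.success if relevant else self.failure)[s] += 1

    def suspect_subjects(self, threshold=3):
        """Subjects whose failure rate crossed the threshold with no successes."""
        return [s for s in self.subjects
                if self.failure[s] >= threshold and self.success[s] == 0]

ix = AcquiredIndex(["solenoid", "lever", "shaft"])
for _ in range(3):
    ix.record_feedback({"lever"}, relevant=False)
print(ix.suspect_subjects())   # -> ['lever']
```

This mirrors the trend reported in Experiment 2: incorrect subject extensions accumulate failure counts and no successes, so they surface as candidates for dropping.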
Experiment 1: In the first experiment a mechanical designer unfamiliar with the shock absorber design queried the system during redesign. In this experiment we measured the relevance and the reusability of the indices acquired within the same problem solving process. As the new indices were created, they were reused to answer slightly different questions. Out of 71 indices created, 13 were reused and of those 70% were found relevant by the user. The main causes of irrelevance were the incompleteness of the model and the multiple subjects problem, where indices that involve relations among multiple subjects need more training to be refined (see Section 4.2). Experiment 2: An expert designer used the system for a similar redesign task. In this study we observed how the indices created during experiment 1 were reused in experiment 2. This gave us an idea of the reusability and relevance of these indices, with a user of different design experience, and during the course of another problem solving process. In this experiment many questions were about the relations among multiple subjects and we focused on the reusability and relevance of indices that have more than one subject. Each time a multiple subject index is reused, the success or failure coefficients of its subjects are updated by the system. We found that: (1) The number of irrelevant new indices retrieved outweighed the number of new indices that were relevant, thus degrading the performance of the system. In this experiment 30 indices created during experiment 1 were reused and of these, 40% were relevant. As expected, this degradation was due to the multiple subject problem, mainly to the introduction of incorrect subject extensions. (2) In the new indices, each incorrect subject was showing positive failure rates and no success rates. The new indices created were shown to another designer who confirmed the trend that the system recorded.
This suggests that the accuracy of the indices created is improving and will lead to a better performance in future retrieval. Figure 3b shows an index generated during the first experiment; in this index the subject “damper” is an incorrect extension of the original index “relation (between) suspension-system, car”. The rating (not shown on the figure) of the subject “damper” showed a positive failure rating and no success rating after we conducted the second experiment. Experiment 3: We presented the 71 indices created (see Figure 3) during experiment 1 along with the associated information regions to a designer familiar with the shock absorber documentation. The designer rated each of these indices as relevant or irrelevant depending on his appreciation of the ability of the index to describe (part of) the associated information. In this experiment, the designer reviewed the relevance and context independence of the indices created. The three contextual factors (user, problem and goals) in our experiment were removed: the designer was different from the users who conducted the experiments, he rated the indices independently of any problem solving task, and he had no access to the English version of the questions that motivated the queries. The designer rated 86% of the acquired indices as relevant. Here again the irrelevant indices acquired were due to the incompleteness of the domain model and the introduction of incorrect subject extensions in indices with more than one argument. The three criteria, reusability, relevance and context independence, of the acquired indices don't give us a direct measure of the impact of these indices on the global retrieval performance in terms of the precision and recall of the retrieval. However, when the newly acquired indices are reusable and relevant across contexts, the references associated with them can be retrieved through an exact match instead of an approximate match.
Our assumption is that this provides better performance since the precision of the exact match retrieval is higher than the precision of the approximate retrieval [Baudin et al. 92b] and since the exact match will now retrieve more references. The intuition is that the user will see more relevant references sooner while more irrelevant references will be pruned from the first set of documents proposed to the user. For instance, in the example discussed in Section 4.1, the system retrieved two references in response to the query “function of inner-hub”; only one of these references being relevant, the precision of this retrieval was 50%. After Dedal acquired the new index I3, the next time the same question is asked, only the relevant reference will be retrieved through an exact match in the first set of answers proposed to the user. 6. Future work Performance evaluation: Our preliminary experimental results are mostly qualitative. They are useful in indicating the main features of the effectiveness of the index acquisition in terms of the reusability, relevance and context independence of the acquired indices. In order to have a more precise notion of the effectiveness of the method we plan to quantitatively evaluate the impact of these indices on the global performance of the system in terms of the gain in the precision and recall of Dedal. However, the quantitative evaluation of the method on a meaningful sample of queries is a difficult task because: (1) the questions submitted to the system during the experiments must be motivated by specific goals (such as the redesign of the shock absorber in our first experiment); and (2) the questions asked during these experiments must overlap so as to involve the new indices created. Index refinement: Our refinement algorithm is preliminary and can be expanded in two directions.
One direction is to add to the refinement procedure the capability to automatically analyze which subjects cause the success or failure of an index so that, after multiple queries, Dedal can automatically decide how to modify an index based on the rating of its subjects. Another direction is to increase the interaction between the system and the user in order to elicit more knowledge about the causes of failure when a new index led to the retrieval of an irrelevant reference. This would be similar to the dialogue triggered by retrieval failure in Protos [Porter et al. 90]. 1 These are two criteria used to measure the performance of information retrieval systems. Precision is the number of relevant references retrieved over the total number of references retrieved in response to a query. Recall is the number of relevant references retrieved in response to a query over the total number of existing relevant references. Interactive modification of the domain model: The query language is designed to describe as much as possible the information required by the user. However, any language that uses concepts from a model is inherently incomplete. A missing domain subject forces the user to fall back on a related subject and is a source of inaccuracy in the use of queries for indexing purposes. One way of alleviating this problem is to allow the user to define new domain subjects when he cannot find a suitable concept in the model. We implemented a question formulation component that interacts with the user to understand how a new subject relates to the domain model and we plan to test this functionality with a user. Definition of the domain model: Our conceptual query language is (1) task dependent: it is adapted to the type of questions that designers are interested in when they access design documents, and (2) constrained by a domain model, and requires this model to be built for each new design project.
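The two footnote measures reduce to set arithmetic over retrieved and relevant references. This is a generic textbook computation, not code from the paper; the Section 5 worked example (two references retrieved, one relevant, precision 50%) is used as the check.

```python
# Precision and recall over sets of references (standard definitions).
def precision(retrieved: set, relevant: set) -> float:
    """Fraction of retrieved references that are relevant."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved: set, relevant: set) -> float:
    """Fraction of all relevant references that were retrieved."""
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

# Section 5's example: two references retrieved, one relevant -> precision 50%.
assert precision({"I1", "I2"}, {"I2"}) == 0.5
assert recall({"I1", "I2"}, {"I2"}) == 1.0
```

After the acquired index I3 replaces the approximate match with an exact one, the same query retrieves only the relevant reference, raising precision to 100% without losing recall.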
With respect to this method, an advantage of technical domains that relate to the operation, diagnosis or design of engineered artifacts is that the scope of the domain model is usually well defined. For instance, in the engineering design domain a large part of technical documentation can be indexed using terms from a structural model (part-of hierarchy of components) of the designed artifact. The domain model becomes a design glossary whose terms are linked by different types of relations. Although model building might be considered a burden when compared to domain independent information retrieval systems, it is interesting to note that this type of “super glossary” is actually useful to the members of a design project as it explicitly defines what is meant by the vocabulary used by each member of the team. 7. Related Work Information retrieval systems have used relevance feedback techniques for two purposes: (1) to help refine user queries, and (2) to help refine indices associated with textual documents. Approaches such as [Salton 68][Croft et al. 90][Tou et al. 82] are domain independent methods that operate at the syntactic level in that they use a combination of words from a text to index and query the information. By comparison, we constrained the query language and we use the queries to index the documents at a conceptual level appropriate to represent the content of the information in a given domain. The CID project [Boy 89] starts with words from the text to index pages of textual documents. Index acquisition is performed by attaching contextual information such as the user profile to restrict the applicability of the indices. In our approach, the queries partially describe the content of the target information at the “appropriate” conceptual level and can directly be turned into an index.
In this respect, contextual factors such as the domain relations or the type of user are already embedded in the model underlying the query language and therefore become part of the acquired indices.

Novel Methods in Knowledge Acquisition 457

RUBRIC [Tong 89] uses evidential reasoning and natural language processing techniques to infer the content of a text. For instance, an evidential rule can define which words and relations among words suggest a given concept. It is not clear, at this point, how much background knowledge would be needed to automatically extract the document descriptions from our text-based documents.

8. Summary

We applied relevance feedback to the acquisition of conceptual indices. We turn user queries into indices that partially describe the content of text, graphics, and videotaped information at a conceptual level appropriate for a given class of users in a given domain. Using queries to describe pieces of information is made possible by: (1) constraining the query language, which requires studying the information needs of users in a given domain to identify the generic types of questions this class of users is interested in, and (2) using a model of the domain to be able to match the queries with more general or related conceptual indices. Although the principle of our approach is domain independent, its implementation requires building a domain model. Our approach is particularly well adapted to the indexing of technical documents that describe the operation, diagnosis, or design of complex artifacts, where the domain model can be clearly circumscribed.

Acknowledgments: Thanks to Ade Mabogunje, Guy Boy and Nathalie Mathe for discussions on indexing and relevance feedback. We are grateful to Fred Lakin from the Performing Graphics Company for his support of the Electronic Design Notebook system that interacts with Dedal. Thanks to Michel Baudin for his help on early drafts of this paper.
References

Baudin, C., Gevins, J., Baya, V., Mabogunje, A. 1992a. "Dedal: Using Domain Concepts to Index Engineering Design Information", Proceedings of the Meeting of the Cognitive Science Society, Bloomington, Indiana.

Baudin, C., Gevins, J., Baya, V. 1992b. "Using Device Models to Facilitate the Retrieval of Multimedia Design Information", in Proceedings of IJCAI-93, Chambery.

Baya, V., Gevins, J., Baudin, C., Mabogunje, A., Leifer, L., Toye, G. 1992. "An Experimental Study of Design Information Reuse", in Proceedings of the 4th International Conference on Design Theory and Methodology.

Boy, G. 1989. "The block representation in knowledge acquisition for computer integrated documentation", in Proceedings of the AAAI Workshop on Knowledge Acquisition for Knowledge-Based Systems, Banff, Canada.

Croft, W.B., Das, R. 1990. "Experiments with Query Acquisition and Use in Document Retrieval Systems", in Proceedings of SIGIR 1990.

Graesser, A., Black, J. 1985. The Psychology of Questions. Lawrence Erlbaum Associates.

Hayes, P., Pepper, J. "Towards An Integrated Maintenance Advisor", in Hypertext '89 Proceedings.

Mabogunje, A. 1990. "A conceptual framework for the development of a question based design methodology", Center for Design Research Technical Report (19900209), February 1990.

Mauldin, M.L. 1990. "Retrieval Performance in Ferret, A Conceptual Information Retrieval System", in Proceedings of SIGIR 1990.

Osgood, R., Bareiss, R. 1991. "Question-based indexing", Technical report, The Institute for the Learning Sciences, Northwestern University.

Porter, B., Bareiss, R., Holte, R. 1990. "Concept learning and heuristic classification in weak-theory domains", Artificial Intelligence Journal 45, pp. 229-263.

Salton, G. 1968. Automatic Information Organization and Retrieval. McGraw-Hill, New York.

Salton, G., Buckley, C.
"Improving Retrieval Performance by Relevance Feedback", Technical Report, Cornell University, 1988.

Schank, R., Ferguson, W., Birnbaum, L., Barger, J., Greising, M. 1991. "ASK TOM: An Experimental Interface for Video Case Libraries", ILS Technical Report, March 1991.

Tong, R.M., Appelbaum, A., Askman, V. 1989. "A Knowledge Representation for Conceptual Information Retrieval", International Journal of Intelligent Systems, vol. 4, pp. 259-283.

Tou, F.M. et al. 1982. "RABBIT: An intelligent database assistant", Proceedings AAAI-82, pp. 314-318.

458 Baudin
Learning Interface Agents

Pattie Maes
MIT Media Laboratory
20 Ames Street, Rm. 401a
Cambridge, MA 02139
pattie@media.mit.edu

Abstract

Interface agents are computer programs that employ Artificial Intelligence techniques in order to provide assistance to a user dealing with a particular computer application. The paper discusses an interface agent which has been modelled closely after the metaphor of a personal assistant. The agent learns how to assist the user by (i) observing the user's actions and imitating them, (ii) receiving user feedback when it takes wrong actions and (iii) being trained by the user on the basis of hypothetical examples. The paper discusses how this learning agent was implemented using memory-based learning and reinforcement learning techniques. It presents actual results from two prototype agents built using these techniques: one for a meeting scheduling application and one for electronic mail. It argues that the machine learning approach to building interface agents is a feasible one which has several advantages over other approaches: it provides a customized and adaptive solution which is less costly and ensures better user acceptability. The paper also discusses the advantages of the particular learning techniques used.

Introduction

Computers are becoming the vehicle for an increasing range of everyday activities. Acquisition of news and information, mail and even social interactions become more and more computer-based. At the same time an increasing number of (untrained) users are interacting with computers. Unfortunately, these developments are not going hand in hand with a change in the way people interact with computers. The currently dominant interaction metaphor of direct manipulation [Schneiderman 1983] requires the user to initiate all tasks and interactions and monitor all events.
If the ever growing group of non-trained users has to make effective use of the power and diversity the computer provides, current interfaces will prove to be insufficient. The work presented in this paper employs Artificial Intelligence techniques, in particular semi-intelligent semi-autonomous agents, to implement a complementary style of interaction, which has been referred to as indirect management [Kay 1990].

Robyn Kozierok
MIT Media Laboratory
20 Ames Street, Rm. 401~
Cambridge, MA 02139
robyn@media.mit.edu

Instead of unidirectional interaction via commands and/or direct manipulation, the user is engaged in a cooperative process in which human and computer agent(s) both initiate communication, monitor events and perform tasks. The metaphor used is that of a personal assistant who is collaborating with the user in the same work environment. The idea of employing agents in the interface to delegate certain computer-based tasks was introduced by people such as Nicholas Negroponte [Negroponte 1970] and Alan Kay [Kay 1984]. More recently, several computer manufacturers have adopted this idea to illustrate their vision of the interface of the future (cf. videos produced in 1990-1991 by Apple, Hewlett Packard, Digital and the Japanese FRIEND21 project). Even though a lot of work has gone into the modeling and construction of agents, currently available techniques are still far from being able to produce the high-level, human-like interactions depicted in these videos. Two approaches for building interface agents can be distinguished. Neither one of them provides a satisfactory solution to the problem of how the agent acquires the vast amounts of knowledge about the user and the application which it needs to successfully fulfill its task. The first approach consists in making the end-user program the interface agent.
Malone and Lai's Oval (formerly Object-Lens) system [Lai, Malone, & Yu 1988], for example, has "semi-autonomous agents" which consist of a collection of user-programmed rules for processing information related to a particular task. For example, the Oval user can create an electronic mail sorting agent by creating a number of rules that process incoming mail messages and sort them into different folders. Once created, these rules perform tasks for the user without having to be explicitly invoked by the user. The problem with this approach to building agents is that it requires too much insight, understanding and effort from the end-user. The user has to (1) recognize the opportunity for employing an agent, (2) take the initiative to create an agent, (3) endow the agent with explicit knowledge (specifying this knowledge in an abstract language) and (4) maintain the agent's rules over time (as work habits change, etc.).

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

The second approach, also called the "knowledge-based approach", consists in endowing an interface agent with a lot of domain-specific background knowledge about its application and about the user (called a domain model and user model respectively). This approach is adopted by the majority of people working on intelligent user interfaces [Sullivan & Tyler 1991]. At run-time, the interface agent uses its knowledge to recognize the user's plans and find opportunities for contributing to them. For example, UCEgo [Chin 1991] is an interface agent designed to help a user solve problems in using the UNIX operating system. The UCEgo agent has a large knowledge base about how to use UNIX, incorporates goals and meta-goals and does planning, for example to volunteer information or correct the user's misconceptions.
One problem with this approach to building interface agents is that it requires a huge amount of work from the knowledge engineer: a large amount of application-specific and domain-specific knowledge has to be entered, and little of this knowledge or the agent's control architecture can be reused when building agents for other applications. A second problem is that the knowledge of the agent is fixed once and for all: it is possibly incorrect, incomplete, not useful, and can be neither adapted nor customized (e.g. to individual user differences or to the changing habits of one user). Finally, it can be questioned whether it is possible to provide all the knowledge an agent needs to always be able to "make sense" of the user's actions (people do not always behave rationally, unexpected events might happen, the organization might change, etc.).

A Machine Learning Approach

In our work we explore an alternative approach to building interface agents which heavily relies on Machine Learning. The scientific hypothesis that is tested is that under certain conditions, an interface agent can "program itself", i.e. it can acquire the knowledge it needs to assist its user. The agent is given a minimum of background knowledge and it learns appropriate "behavior" from the user. The particular conditions that have to be fulfilled are (1) the use of the application has to involve a lot of repetitive behavior, and (2) this repetitive behavior is very different for different users. If the latter condition is not met, i.e. the repetitive behavior demonstrated by different users is the same, a knowledge-based approach might prove to yield better results than a learning approach. If the former condition is not met, a learning agent will not be able to learn anything (because there are no regularities in the user's actions to learn about). Our machine learning approach is inspired by the metaphor of a personal assistant.
Initially a personal assistant is not very "customized" and may not even be very useful. Some amount of time will go by before the assistant becomes familiar with the habits, preferences and particular work methods of the person and organization at hand. However, with every experience the assistant learns, and gradually more tasks that were initially performed by the person directly can be taken care of by the assistant. The goal of our research is to demonstrate that a learning interface agent can in a similar way become gradually more "helpful" to its user. In addition, we attempt to prove that the learning approach has several advantages. First, it requires less work from the end-user and application developer. Second, the agent is more adaptive over time and the agent automatically becomes customized to individual user preferences and habits. The results described in a later section support all of the above hypotheses and predictions. A particular additional advantage of the learning approach to building interface agents is that the user and agent can gradually build up a trust relationship. Most likely it is not a good idea to give a user an interface agent that is from the start very sophisticated, qualified and autonomous. Schneiderman has convincingly argued that such an agent would leave the user with a feeling of loss of control and understanding [Myers 1991]. On the other hand, if the agent gradually develops its abilities - as is the case in our approach - the user is also given time to gradually build up a model of how the agent makes decisions. A particular advantage of the machine learning technique we use, namely memory-based learning [Stanfill & Waltz 1986], is that it allows the agent to give "explanations" for its reasoning and behavior in a language that the user is familiar with, namely in terms of past examples which are similar to the current situation.
("I thought you might want to take this action because this situation is similar to this other situation we have experienced before, in which you also took this action.") We have developed a generic architecture for building "learning interface agents". The following section discusses the design and implementation of this architecture. For more technical detail, the reader should consult [Kozierok & Maes 1993]. We also built concrete examples of interface agents using this generic architecture. These include (i) a "mail clerk", which learns how a specific user prefers to have electronic messages handled and (ii) a "calendar manager" which learns to manage the calendar of a user and schedule meetings according to his or her preferences. Figure 1 shows some screen snaps from the calendar agent implementation. The last section discusses the status of these prototypes and discusses the results obtained so far.

Learning Techniques

The interface agent uses several sources for learning: (1) learning by observing the user, (2) learning from user feedback and (3) learning by being trained. Each of these methods for learning is described in more detail below. More detail on the learning algorithms used can be found in [Kozierok & Maes 1993].

460 Maes

Figure 1: The alert agent observes and memorizes all of the user's interactions with the calendar application (left picture). When it thinks it knows what action the user is going to take in response to a meeting invitation, it may offer a suggestion (right picture), or even automate the action if its confidence is high enough (not shown).

Figure 2: (a) The agent suggests an action to perform based on the similarity of the current situation with previous (memorized) situations.
d_j is the distance between the jth memorized situation and the new situation. (b) The distance between two situations is computed as a weighted sum over the features. The weight of a feature and the distance between the two values for it depend upon the correlation statistics computed by the agent. The figure shows some possible feature weights and distances from the calendar manager agent.

Learning by Observing the User

The interface agent learns by continuously "looking over the shoulder" of the user as the user is performing actions. The interface agent can monitor the activities of the user, keep track of all of his/her actions over long periods of time (weeks or months), find recurrent patterns and offer to automate these. For example, if an electronic mail agent notices that a user almost always stores messages sent to the mailing-list "intelligent-interfaces" in the folder pattie:email:int-int.txt, then it can offer to automate this action next time a message sent to that mailing-list has been read. The main learning technique used in our implementation is a variant on nearest-neighbor techniques known as memory-based learning [Stanfill & Waltz 1986] (see illustration in Figure 2(a)). As the user performs actions, the agent memorizes all of the situation-action pairs generated. For example, if the user saves a particular electronic mail message after having read it, the mail clerk agent adds a description of this situation and the action taken by the user to its memory of examples. Situations are described in terms of a set of features, which are currently handcoded. For example, the mail clerk keeps track of the sender and receiver of a message, the Cc: list, the keywords in the Subject: line, whether the message has been read or not, whether it has been replied to, and so on. When a new situation occurs, the agent compares it against the memorized situation-action pairs.
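As a concrete illustration of this observe-and-memorize step, here is a minimal Python sketch (the authors' system is written in CLOS; the function and feature names below are illustrative inventions, following the mail clerk example):

```python
# Illustrative sketch of the agent's example memory: every time the user
# acts, the situation (a handcoded feature description) and the action
# taken are stored as a pair. Names are hypothetical, not the authors' code.
mail_memory = []

def describe_message(msg):
    """Handcoded features, following the mail clerk example in the text."""
    return {
        "sender": msg.get("from"),
        "receiver": msg.get("to"),
        "cc": tuple(msg.get("cc", ())),
        "subject_keywords": frozenset(msg.get("subject", "").lower().split()),
        "read": msg.get("read", False),
        "replied": msg.get("replied", False),
    }

def observe(msg, action):
    """Called whenever the user performs an action on a message."""
    mail_memory.append((describe_message(msg), action))

# The user files a mailing-list message after reading it:
observe({"from": "intelligent-interfaces", "to": "pattie",
         "subject": "digest vol. 7", "read": True},
        ("save", "pattie:email:int-int.txt"))
```

Because pairs are appended as they are observed, new examples are available to the matching step immediately, which is the property the text relies on later.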
The most similar of these memorized situations contribute to the decision of which action to take or suggest in the current situation.

The distance between a new situation and a memorized situation is computed as a weighted sum of the distances between the values for each feature, as detailed in [Stanfill & Waltz 1986] (see Figure 2(b)). The distance between feature-values is based on a metric computed by observing how often in the example-base the two values in that feature correspond to the same action. The weight given to a particular feature depends upon the value for that feature in the new situation, and is computed by observing how well that value has historically correlated with the action taken. For example, if the Sender: field of a message has been shown to correlate with the action of saving the message in a particular folder, then this feature (i.e. whether or not it is the same in the old and new situation) is given a large weight and thus has a high impact on the distance between the new and the memorized situation. At regular times, the agent analyzes its memory of examples and computes the statistical correlations of features and values to actions, which are used to determine these feature-distances and -weights. Once all the distances have been computed, the agent predicts an action by computing a score for each action which occurs in the closest N (e.g. 5) memorized situations and selecting the one with the highest score. The score for an action is computed from the distances d_s between the current situation and each memorized situation s in S, the set of memorized situations predicting that action, with closer situations contributing more.
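The weighted distance and per-action scoring can be sketched as follows. This is a hedged illustration: the feature weights and per-feature value distances are passed in directly rather than derived from the correlation statistics, and the 1/(1 + d) contribution of each neighbor is an assumed way of letting closer situations count more, not necessarily the paper's exact score formula.

```python
def situation_distance(new, old, weights, value_distance):
    """Weighted sum of per-feature distances between two situations."""
    return sum(weights[f] * value_distance(f, new[f], old.get(f))
               for f in new)

def predict(new, memory, weights, value_distance, n_closest=5):
    """Score each action occurring among the N closest memorized
    situations; closer situations contribute more to their action."""
    ranked = sorted(memory, key=lambda pair: situation_distance(
        new, pair[0], weights, value_distance))
    scores = {}
    for situation, action in ranked[:n_closest]:
        d = situation_distance(new, situation, weights, value_distance)
        scores[action] = scores.get(action, 0.0) + 1.0 / (1.0 + d)
    return max(scores, key=scores.get)

# Toy example: equality-based value distance, one feature with weight 1.
same_or_not = lambda f, a, b: 0.0 if a == b else 1.0
memory = [({"sender": "boss"}, "save"),
          ({"sender": "boss"}, "save"),
          ({"sender": "list"}, "delete")]
print(predict({"sender": "boss"}, memory, {"sender": 1.0}, same_or_not))
# -> save
```

In the toy run the two exact matches contribute 1.0 each to "save" while the single mismatched example contributes only 0.5 to "delete", so "save" wins.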
Along with each prediction it makes, the agent computes a confidence level for its prediction, as follows:

confidence = (1 - (d_predicted / n_predicted) / (d_other / n_other)) × (n_total / N)

where:
- N is, as before, the number of situations considered in making a prediction,
- d_predicted is the distance to the closest situation with the same action as the predicted one,
- d_other is the distance to the closest situation with a different action from the predicted one,
- n_predicted is the number of the closest N situations, with distances less than a given maximum, with the same action as the predicted one,
- n_other is the minimum of 1 or the number of the closest N situations, with distances within the same maximum, with different actions than the predicted one, and
- n_total is n_predicted + n_other, i.e. the total number of the closest situations with distances below the maximum.

If the result is < 0, the confidence is truncated to be 0. This occurs when d_predicted/n_predicted > d_other/n_other, which is usually the result of several different actions occurring in the top N situations. If every situation in the memory has the same action attached to it, d_other has no value. In this case the first term of the confidence formula is assigned a value of 1 (but it is still multiplied by the second term, which in this case is very likely to lower the confidence value, as this will usually only happen when the agent has had very little experience). This computation takes into account the relative distances of the best situations predicting the selected action and another action, the proportion of the top N situations which predict the selected action, and the fraction of the top N situations which were closer to the current situation than the given maximum.

If the confidence level is above a threshold T1 (called the "tell-me" threshold), then the agent offers its suggestion to the user. The user can either accept this suggestion or decide to take a different action.
If the confidence level is above a threshold T2 > T1 (called the "do-it" threshold), then it automates the action without asking for prior approval. The agent keeps track of all the automated actions and can provide a report to the user about its autonomous activities whenever the user desires this. The two thresholds are set by the user and are action-specific; thus the user may, for example, set higher "do-it" thresholds for actions which are harder to reverse. (A similar strategy is suggested for computer-chosen thresholds in [Lerner 1992].) The agent adds the new situation-action pair to its memory of examples after the user has approved of the action. Occasionally the agent "forgets" old examples so as to keep the size of the example memory manageable and so as to adapt to the changing habits of the user. At the moment, the agent deletes the oldest example whenever the number of examples reaches some maximum number. We intend to investigate more sophisticated "forgetting" methods later.

One of the advantages of this learning algorithm is that the agent needs very little background knowledge. Another advantage is that no information gets lost: the agent never attempts to abstract the regularities it detects into rules (which avoids problems related to the ordering of examples in incremental learning). Yet another advantage of keeping individual examples around is that they provide good explanations: the agent can explain to the user why it decided to suggest or automate a particular action based on the similarity with other concrete situations in which the user took that action - it can remind the user of these prior situations and point out the ways in which it finds them to be similar to the current situation. Examples provide a familiar language for the agent and user to communicate in. There is no need for a more abstract language and the extra cognitive burden that would accompany it.
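The confidence computation and the two-threshold policy can be sketched together. The formula below is a reconstruction from the quantities defined in this section (d_predicted, n_predicted, d_other, n_other, n_total, N) and may differ in detail from the authors' exact expression; the threshold values are invented for illustration.

```python
def confidence(d_predicted, n_predicted, d_other, n_other, n_total, N):
    # First term compares the best supporting vs. best opposing evidence;
    # when every memorized situation carries the predicted action,
    # d_other has no value and the first term is taken to be 1.
    if d_other is None:
        first = 1.0
    else:
        first = 1.0 - (d_predicted / n_predicted) / (d_other / n_other)
    # Second term: fraction of the top N situations below the distance
    # maximum. Negative results are truncated to 0.
    return max(0.0, first * (n_total / N))

def decide(action, conf, tell_me, do_it):
    """Per-action thresholds: automate above T2 ("do-it"), suggest
    above T1 ("tell-me"), otherwise keep quiet and keep observing."""
    if conf >= do_it[action]:
        return "automate"   # act autonomously, log it in the activity report
    if conf >= tell_me[action]:
        return "suggest"    # offer the action and wait for approval
    return "wait"

tell_me = {"delete": 0.5}
do_it = {"delete": 0.95}    # hard to reverse, so a high do-it threshold
c = confidence(0.2, 4, 0.8, 1, 5, 5)   # = (1 - 0.0625) * 1.0 = 0.9375
print(decide("delete", c, tell_me, do_it))
# -> suggest
```

With these illustrative numbers the agent is confident enough to suggest deleting but not to do it unasked, which is exactly the behavior the two-threshold scheme is meant to produce for hard-to-reverse actions.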
One could argue that this algorithm has disadvantages in terms of computation time and storage requirements. We believe that the latter is not an issue because computer memory becomes cheaper and more available every day. The former is also less of a problem in practice. Computing the statistical correlations in the examples is an expensive operation (O(n^2)), but it can be performed off-line, for example at night or during lunch breaks. This does not mean that the agent does not learn from the examples it has observed earlier the same day: new examples are added to memory right away and can be used in subsequent predictions. What does not get updated on an example basis are the weights used in computing distances between examples. The prediction of an action is a less computation intensive operation (O(n)). This computation time can be controlled by restricting the number of examples memorized or by structuring and indexing the memory in more sophisticated ways. Furthermore, in a lot of the applications studied, real-time response is not needed (for example, the agent does not have to decide instantly whether the user will accept a meeting invitation). In the experiments performed so far, all of the reaction times have been more than satisfactory. More details and results from this algorithm are described in a later section and in [Kozierok & Maes 1993].

Learning from User Feedback

A second source for learning is direct and indirect user feedback. Indirect feedback happens when the user neglects the suggestion of the agent and takes a different action instead. This can be as subtle as the user not reading the incoming electronic mail messages in the order in which the agent had listed them. The user can give explicit negative feedback when inspecting the report of actions automated by the agent ("don't do this action again").
One of the ways in which the agent learns from negative feedback is by adding the right action for this situation as a new example in its database. Our agent architecture also supports another way in which the agent can learn from user feedback. The architecture includes a database of priority ratings which are relevant to all situations. For example, the calendar manager keeps a database of ratings expressing how important the user thinks other users of the system are, and how relevant the user feels certain keywords which appear in meeting descriptions are. These ratings are used to help compute the features which describe a situation. For example, there is an "initiator importance" feature in the calendar manager, which is computed by looking up who initiated the meeting, and then finding the importance rating for that person. When the agent makes an incorrect suggestion, it solicits feedback from the user as to whether it can attribute any of the blame to inaccuracy in these priority ratings, and if so in which ones. It can then adjust these ratings to reflect this new information, increasing or decreasing them as the difference in the "positiveness" of the suggested versus actual action dictates. The details of how this is done are described in [Kozierok & Maes 1993].

Learning by Being Trained

The agent can learn from examples given by the user intentionally. The user can teach/train the agent by giving it hypothetical examples of events and situations and showing the agent what to do in those cases. The interface agent records the actions, tracks relationships among objects and changes its example base to incorporate the example that it is shown.
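A trained, hypothetical example can be stored like an observed one, with a wildcard standing in for each feature the user left unspecified, and it then matches any situation that agrees on the specified features. The wildcard marker and matching rule below are illustrative assumptions, not the authors' implementation:

```python
WILDCARD = "*"   # stands for "unspecified" in a hypothetical example

def matches(hypothetical, situation):
    """A hypothetical example matches any situation that agrees on all
    of its specified features; wildcarded features match anything."""
    return all(value == WILDCARD or situation.get(feature) == value
               for feature, value in hypothetical.items())

# "Save every message from boss, whatever else it looks like":
trained = {"sender": "boss", "subject_keywords": WILDCARD, "read": WILDCARD}
print(matches(trained, {"sender": "boss", "subject_keywords": "budget",
                        "read": True}))    # -> True
print(matches(trained, {"sender": "alice", "subject_keywords": "budget",
                        "read": True}))    # -> False
```

Training the agent to treat several senders alike would, under this scheme, simply mean storing one such wildcarded example per sender.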
For example, the user can teach the mail clerk agent to save all messages sent by a particular person in a particular folder by creating a hypothetical example of an email message (which has all aspects unspecified except for the sender field) and dragging this message to the folder in question. Notice that in certain cases it is necessary to give more than one hypothetical example (e.g. if the user wants to train the system to save messages from different senders in the same folder). This functionality is implemented by adding the example in memory, including "wildcards" for the features which were not specified in the hypothetical situation. The new situation-action pair will match all situations in which an email message has been received from a user with the same name. One of the unresolved questions is how such hypothetical examples should be treated differently, both when selecting an action and when compiling statistics. [Kozierok 1993] explores this issue, and describes how both default and hard-and-fast rules can be implemented within the memory-based learning framework. This paper also discusses how rules may be used to compress the database when either all or most of the situations the rule would represent have occurred.

Results

The generic architecture for a learning interface agent is implemented in CLOS (Common Lisp Object System) on a Macintosh. We have evaluated the design and implementation of this architecture by constructing agents for several application programs. We currently have a prototype of a mail clerk as well as a calendar manager agent. The application software (the meeting scheduling system and electronic mail system) was implemented from scratch, so as to make it easier to provide "hooks" for incorporating agents. Both applications have a graphical direct manipulation interface (also implemented in Macintosh CLOS).
The agent itself is hardly visible in the interface: a caricature face in the corner of the screen provides feedback to the user as to what the current state of the interface agent is. Figure 3 lists some of these caricatures. They help the user to quickly (in the blink of an eye) find out "what the agent is up to". We have performed testing of both agents with simulated users. We are currently testing the calendar agent on real users in our own office environment, and will begin real-user testing of the email agent shortly as well. These users will be observed and interviewed over a period of time.

Figure 3: Simple caricatures (Alert, Thinking, Suggestion, Surprised, Confused, Gratified, Pleased, Unsure, Working) convey the state of the agent to the user. The agent can be (a) alert (tracking the user's actions), (b) thinking (computing a suggestion), (c) offering a suggestion (when above the tell-me threshold; a suggestion box appears under the caricature), (d) surprised that the suggestion is not accepted, (e) gratified that the suggestion is accepted, (f) unsure about what to do in the current situation (below the tell-me threshold; the suggestion box is only shown upon demand), (g) confused about what the user does, (h) pleased that the suggestion it was not sure about turned out to be the right one and (i) working, i.e. performing an automated task (above the do-it threshold).

The results obtained so far with both prototypes are encouraging. The email agent learns how to sort messages into the different mailboxes created by the user, when to mark messages as "to be followed up upon" or "to be replied to", etc. Current results on the meeting scheduling agent are described in detail in [Kozierok & Maes 1993]. A collection of seven such agents has been tested for several months' worth of meeting problems (invitations to meetings, scheduling problems, rescheduling problems, etc).
All seven agents learned over time to make mostly correct predictions, with high confidence in most of the correct predictions, and low confidence in almost all of the incorrect ones. Figure 4 shows the results for a representative agent. Again, the results obtained demonstrate that the learning interface agent approach is a very promising one. From this graph one can see that the correct predictions tend to increase in confidence level, while the incorrect ones tend to decrease. (We expect to see similar results in our real-user tests, but the inconsistencies and idiosyncrasies of real users will probably cause the agents to take longer to converge on such positive results. However, the memory-based learning algorithm employed is designed to allow for inconsistencies, so we have confidence that the agents will indeed be able to perform competently within a reasonable timeframe.) Providing the user with access to this type of performance information allows him to easily set the thresholds at reasonable levels, as shown in the figure.

Figure 4: Results of a representative agent from the meeting scheduling application. The graph shows the right and wrong predictions made by the agent as plotted over time (X-axis). The Y-axis represents the confidence level the agent had in each of its predictions. The picture also shows possible settings for the "tell-me" (T1) and "do it" (T2) thresholds.

Related Work

The work presented in this paper is related to a similar project under way at CMU. Dent et al. [Dent et al. 1992] describe a personal learning apprentice which assists a user in managing a meeting calendar. Their experiments have concentrated on the prediction of meeting parameters such as location, duration and day-of-week. Their apprentice uses two competing learning methods: a decision tree learning method and a back-propagation neural network.
One difference between their project and ours is that memory-based learning potentially makes better predictions because there is no "loss" of information: when suggesting an action, the detailed information about individual examples is used, rather than general rules that have been abstracted beforehand. On the other hand, the memory-based technique requires more computation time to make a particular suggestion. An advantage of our approach is that our scheduling agent has an estimate of the quality or accuracy of its suggestion. This estimate can be used to decide whether the prediction is good enough to be offered as a suggestion to the user or even to automate the task at hand.

464 Maes

The learning agents presented in this paper are also related to the work on so-called demonstrational interfaces. The work which is probably closest is Cypher's "Eager" personal assistant for HyperCard [Cypher 1991]. This agent observes the user's actions, notices repetitive behavior and offers to automate and complete the repetitive sequence of actions. Myers [Myers 1988] and Lieberman [Lieberman 1993] built demonstrational systems for graphical applications. One difference between the research of all of the above authors and the work described in this paper is that the learning described here happens on a longer time scale (e.g. weekly or monthly habits); a system like Eager, on the other hand, forgets a procedure after it has executed it. A difference with the systems of Lieberman and Myers is that in our architecture, the user does not have to tell the agent when it has to pay attention and learn something.

Conclusion

We have modelled an interface agent after the metaphor of a personal assistant. The agent gradually learns how to better assist the user by (1) observing and imitating the user, (2) receiving feedback from the user and (3) being told what to do by the user.
The agent becomes more helpful as it accumulates knowledge about how the user deals with certain situations. We argued that such a gradual approach is beneficial as it allows the user to incrementally build up a model of the agent's behavior. We have presented a generic architecture for constructing such learning interface agents. This architecture relies on memory-based learning and reinforcement learning techniques. It has been used to build interface agents for two real applications. Encouraging results from tests of these prototypes have been presented.

Acknowledgments

Cecile Pham, Nick Cassimatis, Robert Ramstadt, Tod Drake and Simrod Furman have implemented parts of the meeting scheduling agent and the electronic mail agent. The authors have benefited from discussions with Henry Lieberman, Abbe Don and Mitch Resnick. This work is sponsored by the National Science Foundation (NSF) under grant number IRI-92056688. It has also been partially funded by Apple Computer. The second author is an NSF fellow.

References

Chin, D. 1991. Intelligent Interfaces as Agents. In: J. Sullivan and S. Tyler eds. Intelligent User Interfaces, 177-206. New York, New York: ACM Press.
Crowston, K., and Malone, T. 1988. Intelligent Software Agents. BYTE 13(13):267-271.
Cypher, A. 1991. EAGER: Programming Repetitive Tasks by Example. In: CHI'91 Conference Proceedings, 33-39. New York, New York: ACM Press.
Don, A. (moderator and editor). 1992. Panel: Anthropomorphism: From Eliza to Terminator 2. In: CHI'92 Conference Proceedings, 67-72. New York, New York: ACM Press.
Kay, A. 1984. Computer Software. Scientific American 251(3):53-59.
Kay, A. 1990. User Interface: A Personal View. In: B. Laurel ed. The Art of Human-Computer Interface Design, 191-208. Reading, Mass.: Addison-Wesley.
Kozierok, R., and Maes, P. 1993. A Learning Interface Agent for Scheduling Meetings. In: Proceedings of the 1993 International Workshop on Intelligent User Interfaces, 81-88.
New York, New York: ACM Press.
Kozierok, R. 1993. Incorporating Rules into a Memory-Based Example Base. Media Lab Memo, Dept. of Media Arts and Sciences, MIT. Forthcoming.
Lai, K., Malone, T., and Yu, K. 1988. Object Lens: A "Spreadsheet" for Cooperative Work. ACM Transactions on Office Information Systems 5(4):297-326.
Laurel, B. 1990. Interface Agents: Metaphors with Character. In: B. Laurel ed. The Art of Human-Computer Interface Design, 355-366. Reading, Mass.: Addison-Wesley.
Lerner, B.S. 1992. Automated customization of structure editors. International Journal of Man-Machine Studies 37(4):529-563.
Lieberman, H. 1993. Mondrian: a Teachable Graphical Editor. In: A. Cypher ed. Watch What I Do: Programming by Demonstration. Cambridge, Mass.: MIT Press. Forthcoming.
Dent, L., Boticario, J., McDermott, J., Mitchell, T., and Zabowski, D. 1992. A Personal Learning Apprentice. In: Proceedings, Tenth National Conference on Artificial Intelligence, 96-103. Menlo Park, Calif.: AAAI Press.
Myers, B. 1988. Creating User Interfaces by Demonstration. San Diego, Calif.: Academic Press.
Myers, B. (moderator and editor). 1991. Panel: Demonstrational Interfaces: Coming Soon? In: CHI'91 Conference Proceedings, 393-396. New York, New York: ACM Press.
Negroponte, N. 1970. The Architecture Machine: Towards a More Human Environment. Cambridge, Mass.: MIT Press.
Schneiderman, B. 1983. Direct Manipulation: A Step Beyond Programming Languages. IEEE Computer 16(8):57-69.
Stanfill, C., and Waltz, D. 1986. Toward Memory-Based Reasoning. Communications of the ACM 29(12):1213-1228.
Sullivan, J.W., and Tyler, S.W. eds. 1991. Intelligent User Interfaces. New York, New York: ACM Press.
On the Adequateness of the Connection Method

Antje Beringer and Steffen Hölldobler
Intellektik, Informatik, TH Darmstadt, Alexanderstraße 10, D-6100 Darmstadt (Germany)
E-mail: {antje, steffen}@intellektik.informatik.th-darmstadt.de

Abstract

Roughly speaking, adequateness is the property of a theorem proving method to solve simpler problems faster than more difficult ones. Automated inferencing methods are often not adequate, as they require thousands of steps to solve problems which humans solve effortlessly, spontaneously, and with remarkable efficiency. L. Shastri and V. Ajjanagadde, who call this gap the artificial intelligence paradox, suggest that their connectionist inference system is a first step toward bridging this gap. In this paper we show that their inference method is equivalent to reasoning by reductions in the well-known connection method. In particular, we extend a reduction technique called evaluation of isolated connections such that this technique, together with other reduction techniques, solves all problems which can be solved by Shastri and Ajjanagadde's system under the same parallel time and space requirements. Consequently, we obtain a semantics for Shastri and Ajjanagadde's logic. But, most importantly, if Shastri and Ajjanagadde's logic really captures the kind of reasoning which humans can perform efficiently, then this paper shows that a massively parallel implementation of the connection method is adequate.

Introduction

Adequateness is one of the assumptions underlying automated deduction. Following W. Bibel [1991], there is an adequate general proof method that can automatically discover any proof done by humans provided the problem (including all required knowledge) is stated in appropriately formalized terms.
Adequateness is, roughly speaking, understood as the property of a theorem proving method that, for any given knowledge base, the method solves simpler problems faster than more difficult ones. Furthermore, simplicity is measured under consideration of all (general) formalisms available to capture the problem, and intrinsic in this assumption is a belief in the existence of an algorithm that is feasible (from a complexity point of view) for the set of problems humans can solve. Later on, Bibel defines general research goals in the field of automated deduction, the first goal being the search for general and adequate proof methods.

This paper is concerned with adequate proof methods. That adequateness is one of the main problems in automated deduction has been realized by many researchers (cf. [Levesque and Brachman, 1985; Levesque, 1989]). L. Shastri and V. Ajjanagadde [1990; 1993] even call the gap between the ability of humans to draw a variety of inferences effortlessly, spontaneously, and with remarkable efficiency on the one hand, and the results about the complexity of reasoning reported by researchers in artificial intelligence on the other hand, the artificial intelligence paradox. But they also developed a connectionist computational model, called SAM in the sequel, which can encode a knowledge base consisting of millions of facts and rules, and performs a class of inferences with parallel time bound by the length of the shortest proof and space bound by the size of the knowledge base.¹ Moreover, SAM is consistent with recent neurophysiological findings and makes specific predictions about the nature of reflexive reasoning, i.e. spontaneous reasoning as if it were a reflex [Shastri, 1990], that are psychologically significant. Shastri and Ajjanagadde suggest that their computational model is a step towards resolving the artificial intelligence paradox.
Logically, the knowledge bases considered by Shastri and Ajjanagadde are sets of definite clauses, i.e. universally closed clauses of the form A1 ∧ ... ∧ An ⊃ A, where A and the Ai, 1 ≤ i ≤ n, are atomic sentences. The knowledge bases are queried by universally closed atomic sentences. Such queries are answered positively if they are logical consequences of the knowledge base, and negatively otherwise. The query as well as the facts and rules are restricted in some particular way, which makes the system both interesting and difficult at the same time.

¹SAM shares many features with the connectionist reasoning system ROBIN developed by Lange and Dyer [1989]. They differ mainly in their technique to represent variable bindings.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

It is difficult to give a semantics for the class of problems considered in [Shastri and Ajjanagadde, 1993] and to understand the influence of the various restrictions on the massively parallel computational model. It is interesting as we would like to understand what kind of problems can be handled by a massively parallel computational model in parallel time bound by the length of the shortest proof and in space bound by the size of the knowledge base. In the following section we present the various restrictions in detail and outline the computational model underlying Shastri and Ajjanagadde's approach.

In [Hölldobler, 1993] the suggestion was made that reflexive reasoning is reasoning by reductions and, consequently, that the problems solved by SAM are simpler than the problems investigated by the artificial intelligence community. In this paper we will show that this suggestion holds by comparing SAM with the connection method and its various reduction techniques [Bibel, 1987].
An optimal parallel implementation of these reduction techniques along the lines of CHCL [Hölldobler, 1990a; Hölldobler and Kurfeß, 1992] needs the same space and answers queries at least as fast as SAM. We also demonstrate that there are reflexive reasoning tasks which can be solved in essentially one step by the parallel connection method, whereas SAM needs parallel time bound by the depth of the search space.

We present the connection method and the relevant reduction techniques. To prove our main result we extend the definition of isolated connections [Bibel, 1988] to so-called pointwise isolated connections (PICs). The evaluation of PICs is a general reduction technique applicable to unrestricted first-order formulas. With the help of this technique we can prove our main result relating SAM and the connection method. Thus, the paper gives a semantics for the class of formulas considered in [Shastri and Ajjanagadde, 1993] and extends the definition of isolated connections such that this reduction technique solves reflexive reasoning tasks in parallel time bound by the length of the shortest proof and space bound by the size of the knowledge base. But, most importantly, if Shastri and Ajjanagadde's logic is the kind of logic needed for representing reasoning tasks which can be solved effortlessly, spontaneously, and with remarkable efficiency by humans, then this paper shows that a parallel implementation of the connection method is adequate. These and related results are discussed in the final section.

Reflexive Reasoning

Shastri and Ajjanagadde [1990; 1993] identified a class of problems computable in space bound by the size of the knowledge base and in parallel time which is at worst sublinear in, and perhaps even independent of, the size of the knowledge base. The work was motivated by the observation that humans can perform a limited class of inferences extremely fast although their knowledge base is extremely large.
In this section we introduce the backward reasoning system SAM. As we are mainly interested in the logic of the system we neither give technical details nor discuss the biological plausibility of the connectionist model underlying SAM or the psychological significance of reflexive reasoning. A detailed discussion of these topics can be found in [Shastri and Ajjanagadde, 1993].

Let C be a finite set of constants. A knowledge base KB in [Shastri and Ajjanagadde, 1993] is a conjunction of rules and facts. The rules are of the form

∀X1...Xm [p1(...) ∧ ... ∧ pn(...) ⊃ ∃Y1...Yk p(...)],   (1)

where p and the pi, 1 ≤ i ≤ n, are multi-place predicate symbols, the arguments of the pi are variables from the set {X1, ..., Xm}, and the arguments of p are from {X1, ..., Xm} ∪ {Y1, ..., Yk} ∪ C.² The facts and queries (or goals) are of the form

∃Z1...Zl q(...),   (2)

where q is a multi-place predicate symbol and the arguments of q are from {Z1, ..., Zl} ∪ C. The rules, facts, and goals are restricted as follows.

1. There are no function symbols except constants.
2. Only universally bound variables may occur as arguments in the conditions of a rule.
3. All variables occurring in a fact or goal occur only once and are existentially bound.
4. An existentially quantified variable (occurring in the head of a rule or in a fact) is only unified with variables.
5. A variable which occurs more than once in the conditions of a rule must occur in the conclusion of the rule and must be bound when the conclusion is unified with a goal.
6. A rule is used only a fixed number of times.

From an automated deduction point of view some of these restrictions seem to be rather peculiar. But they are closely related to the mechanism used by Shastri and Ajjanagadde for representing variable bindings. The variable binding problem is one of the major problems in connectionist systems (cf. [Barnden, 1984]).
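Restrictions 3 and 5 above have purely syntactic parts that can be checked mechanically. A small sketch under a hypothetical tuple encoding of literals (uppercase strings as variables, lowercase as constants; this encoding is ours, not the paper's):

```python
from collections import Counter

def fact_ok(args):
    """Restriction 3: every variable in a fact or goal occurs only once."""
    variables = [a for a in args if a.isupper()]
    return len(variables) == len(set(variables))

def rule_ok(head_args, body_literals):
    """Static part of restriction 5: a variable occurring more than
    once in the conditions must also occur in the conclusion."""
    counts = Counter(a for _, args in body_literals
                     for a in args if a.isupper())
    return all(v in head_args for v, c in counts.items() if c > 1)

# q(X, a, Y) is a legal goal, whereas q(X, X) repeats a variable.
assert fact_ok(("X", "a", "Y")) and not fact_ok(("X", "X"))
# Y occurs twice in the condition and also in the conclusion: allowed.
assert rule_ok(("a", "Y"), [("q", ("Y", "Y"))])
assert not rule_ok(("a",), [("q", ("Y", "Y"))])
```

The remaining parts of restrictions 4-6 depend on the unification history and, as the text notes, can only be checked dynamically.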
Due to lack of space we cannot discuss connectionist solutions to this problem and the interested reader is referred to [Shastri and Ajjanagadde, 1993]. In this paper we want to concentrate on the semantics of the logic described above. One should observe that restrictions 4-6 cannot be checked statically, but must be checked dynamically. The final restriction is concerned with the problem of how many copies of a rule or fact are needed for a proof of a first-order formula. The dynamic creation of copies is again a major problem in connectionist systems not discussed here. Following [Shastri and Ajjanagadde, 1990], we assume wlog. that each rule may be used at most once. As it is unpredictable how many copies of a rule are needed, SAM is incomplete. A positive answer indicates that the goal G is entailed by the knowledge base KB. A negative answer indicates that either G is not entailed by KB or it cannot be proven that G is entailed by KB under the given restrictions.

²We use uppercase letters for variables and lowercase letters for constants, function and predicate symbols.

10 Beringer

Showing that KB entails G is equivalent to showing that KB ∧ ¬G is unsatisfiable. To determine unsatisfiability we may replace existentially bound variables occurring in KB ∧ ¬G by Skolem terms and obtain a formula KB' ∧ G' which is equivalent to KB ∧ ¬G wrt. unsatisfiability. Let σ be the substitution {Y1 ↦ f1(X1, ..., Xm), ..., Yk ↦ fk(X1, ..., Xm)} and θ be the substitution {Z1 ↦ c1, ..., Zl ↦ cl}, where the fi, 1 ≤ i ≤ k, are pairwise different Skolem functions and the cj, 1 ≤ j ≤ l, are pairwise different Skolem constants, each of which does not occur in the set C of constants. Then each rule of the form (1) in KB is replaced by the (universally closed) clause σp(...) ← p1(...) ∧ ... ∧ pn(...) or, equivalently, by σp(...) ∨ ¬p1(...) ∨ ... ∨ ¬pn(...), and each fact of the form (2) in KB is replaced by the ground fact θq(...).
A query of the form (2) corresponds to the (universally closed) goal clause ¬q(...). Altogether, the knowledge base is a set of definite clauses and a query is a goal clause as used in pure PROLOG. Observe that now condition 4 is checked by the unification computation, as Skolem constants and functions cannot occur in the goal. E.g. consider the following knowledge base.

p(a, Y) ∨ ¬q(Y).   p(b, Z) ∨ ¬r(Z).   p(c, a).   q(b).   q(c).   r(a).   (3)

If a query like ¬p(X, a) is posed then SAM computes an answer in a three-step process as follows.

1. Constants occurring in the query are recursively propagated to all atoms with the query's predicate symbol; a unification computation is performed. In our example, a is propagated as second argument to p(a, Y), p(b, Z), and p(c, a). The unification computations are successful and yield the substitutions {Y ↦ a}, {Z ↦ a}, and ε (the empty substitution), resp. After an application of these substitutions a is propagated from ¬q(a) and ¬r(a) to q(b), q(c), and r(a), resp. The first two unification computations yield failures, whereas the last one is successful. Resulting from this step, each leaf of the search tree is labeled with either success or failure. Figure 1 shows the example's search tree at this time.

2. The success labels at the leafs are propagated backwards to the root of the search tree. Thereby it is checked whether each condition of a rule is satisfiable, i.e. is the root of a successful branch. Otherwise, all branches starting from the conditions of the rule are turned into failure branches.

3. The bindings for the variables occurring in the initial query are now obtained by propagating the variables through the search space along the successful branches. In our example we obtain the bindings {X ↦ c} and {X ↦ b}.

Clearly, the first and second step are the most important as they determine the success and failure branches of the search space. The third step only collects the answer substitutions.
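The three steps on the example can be mimicked by a tiny one-level backward chainer: propagate the query's constants into the rule heads, check each rule's condition against the facts, and collect the bindings along the successful branches. This is our own sketch of the logic, not SAM's connectionist machinery:

```python
facts = [("p", ("c", "a")), ("q", ("b",)), ("q", ("c",)), ("r", ("a",))]
# Each rule: (head, single body literal); uppercase strings are variables.
rules = [(("p", ("a", "Y")), ("q", ("Y",))),
         (("p", ("b", "Z")), ("r", ("Z",)))]

def unify(args1, args2):
    """Unify two flat argument tuples (no function symbols);
    returns a substitution dict, or None on a constant clash."""
    subst = {}
    def walk(t):
        while t in subst:
            t = subst[t]
        return t
    for a, b in zip(args1, args2):
        a, b = walk(a), walk(b)
        if a == b:
            continue
        if a.isupper():
            subst[a] = b
        elif b.isupper():
            subst[b] = a
        else:
            return None          # two distinct constants
    return subst

def solve(query):
    """Collect answer substitutions for the query's variables."""
    pred, qargs = query
    answers = []
    for fpred, fargs in facts:                     # direct fact matches
        if fpred == pred:
            s = unify(qargs, fargs)
            if s is not None:
                answers.append({v: s[v] for v in qargs if v in s})
    for (hpred, hargs), (bpred, bargs) in rules:   # one-level rule step
        if hpred != pred:
            continue
        s = unify(qargs, hargs)
        if s is None:
            continue
        subgoal = tuple(s.get(a, a) for a in bargs)
        if any(fp == bpred and unify(subgoal, fa) is not None
               for fp, fa in facts):
            answers.append({v: s[v] for v in qargs if v in s})
    return answers

# Query ¬p(X, a): the successful branches bind X to c and to b.
```

Running `solve(("p", ("X", "a")))` reproduces the two answer substitutions {X ↦ c} and {X ↦ b} from step 3; the rule p(a, Y) ∨ ¬q(Y) is discarded because its instantiated subgoal q(a) matches no fact.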
To define a semantics for SAM we present the connection method in the following section.

Figure 1: The example's search tree after the first step of SAM. (Substitutions are not applied.)

The Connection Method

The connection method is a formalism to compute the relationships between different statements in a first-order logic language [Bibel, 1987]. Although usually presented as an affirmative method for proving the validity of a formula in first-order logic, we present a dual version for proving the unsatisfiability of a set of Horn clauses, i.e. a logic program and a single query. The connection method is based on the observation that a proof of a formula is essentially determined by a so-called spanning set of connections. A connection is an unordered pair of literal occurrences with the same predicate symbol but different signs. A literal L is connected in a set S of connections iff S contains L as an element of a connection. A set S of connections for a Horn formula of the form KB ⊃ G consisting of a knowledge base or logic program KB and a goal G is called spanning iff each literal occurring in G is connected in S and, if the head of (a copy of) a clause in KB is connected in S, then each literal occurring in its body is also connected in S. A spanning set is minimal iff there is no spanning subset. A spanning set S of connections for KB ⊃ G determines a proof iff there is a substitution σ such that σ simultaneously unifies each connection in S. Hence, searching for a proof of KB ⊃ G amounts to generating a spanning set of connections and, then, simultaneously unifying each pair of connected literals.

In Figure 2 the connections for (3) are shown. There are two minimal spanning sets of connections which determine proofs; the other spanning sets {(¬p(X, a), p(a, Y)), (¬q(Y), q(b))} and {(¬p(X, a), p(a, Y)), (¬q(Y), q(c))} do not determine proofs as the variable Y cannot be bound to two different constants, viz. a, b and a, c, resp.
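The connection definition can be made concrete: on the example formula, the connections are exactly the pairs of literal occurrences sharing a predicate symbol with opposite signs. A sketch that enumerates them (the literal encoding is ours):

```python
from itertools import combinations

# (sign, predicate, argument tuple); "-" marks a negated occurrence.
literals = [("-", "p", ("X", "a")),                      # goal
            ("+", "p", ("a", "Y")), ("-", "q", ("Y",)),  # rule 1
            ("+", "p", ("b", "Z")), ("-", "r", ("Z",)),  # rule 2
            ("+", "p", ("c", "a")),                      # fact
            ("+", "q", ("b",)), ("+", "q", ("c",)),      # facts
            ("+", "r", ("a",))]                          # fact

def connections(lits):
    """All unordered pairs with the same predicate but opposite signs."""
    return [(a, b) for a, b in combinations(lits, 2)
            if a[1] == b[1] and a[0] != b[0]]

# The goal literal ¬p(X, a) alone takes part in three connections,
# one per positive p-literal; ¬q(Y) in two; ¬r(Z) in one.
```

Enumerating spanning sets then amounts to choosing, for each negative literal, one of its connections, which is why restricting the number of rule copies makes the set of spanning sets finite.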
The problem whether a goal follows logically from a knowledge base is undecidable as one cannot determine in advance the number of needed copies of program clauses. However, if the number of copies is restricted as in SAM, then the problem is decidable as now the number of connections, and, hence, the number of spanning sets, is finite.

Automated Reasoning 11

Figure 2: A simple logic formula with its connections. Each row represents either a fact, a rule, or a goal. Observe that the connection structure corresponds precisely to the search space shown in Figure 1.

Given a knowledge base and a goal, a procedure like SLD-resolution [Lloyd, 1984] may be used to find a proof if it exists. SLD-resolution is sound and complete; however, it may require exponential time (in the size of the formula) to find a proof. Hence, formulas should first be reduced as far as possible before a rule like SLD-resolution is applied.

The notion of a reduction rule is not uniquely defined. But reduction rules do not change the (un-)satisfiability of a formula while decreasing some complexity measure assigned to formulas. Here we strengthen these conditions by requiring that reduction techniques are applicable in linear parallel time and linear space with respect to the knowledge base. This is in the spirit of our ultimate goal to find a class of problems which can be solved extremely fast on a massively parallel machine and has reasonable space requirements. In this paper we are particularly concerned about the following reduction techniques.

Connections between non-unifiable literals can be removed. E.g. the connections (¬q(a), q(b)) and (¬q(a), q(c)) are non-unifiable.

Useless clauses may be removed. A clause is useless if its conclusion is not connected or its condition contains a subgoal which cannot be solved. E.g. the rule p(b, a) ∨ ¬q(a) is useless and, in this case, all connections with this rule can be removed.
Isolated connections can be evaluated, i.e. the connected literals can be unified. A connection (L, L') is isolated iff the literals L and L' are either ground or not engaged in any other connection, or one of the literals is ground and the other one is not engaged in any other connection. If an isolated connection is unifiable, then the corresponding clauses may be replaced by their resolvent; otherwise, the connection can be removed. In Figure 1, there is the single isolated connection (r(a), ¬r(Z)). The literals are unifiable with the substitution {Z ↦ a} and, thus, the rule p(b, Z) ∨ ¬r(Z) and the fact r(a) may be replaced by their resolvent p(b, a).

After the reduction of the isolated connection (r(a), ¬r(Z)), the formula shown in Figure 2 cannot be further reduced with the reduction techniques mentioned above. We would have to apply SLD-resolution, which may be exponential. However, as we will show in the sequel, the definition of isolated connections can be extended such that the problem shown in Figure 2 becomes solvable in linear space and linear parallel time with respect to the knowledge base by applying reduction techniques only.

The extension is based on the following observation. If an isolated connection of the form (p(s1, ..., sn), ¬p(t1, ..., tn)) is to be evaluated, then p(s1, ..., sn) and p(t1, ..., tn) are unified. The first step of the unification computation [Robinson, 1965] is to decompose the problem into the unification problems consisting of si and ti, 1 ≤ i ≤ n, and, then, to unify these (sub-)problems simultaneously. The extension consists of anticipating this step and considering pointwise isolated connections (PICs) between corresponding arguments of connected literals.

A connection (p(s1, ..., sn), ¬p(t1, ..., tn)) is called isolated in its i-th argument (or point) iff either the connected literals are not engaged in any other connection, or si and ti are ground, or si is ground and ¬p(t1, ..., tn) is not engaged in any other connection, or ti is ground and p(s1, ..., sn) is not engaged in any other connection. E.g. the connection (p(a, Y), ¬p(X, a)) occurring in the example shown in Figure 2 is isolated in its second argument, but not isolated in its first argument. Evaluating the isolated point yields the substitution {Y ↦ a}. Applying this substitution yields the connections (q(b), ¬q(a)) and (q(c), ¬q(a)), both of which are non-unifiable and can be removed. As now the subgoal ¬q(a) is no longer connected, the rule p(a, a) ∨ ¬q(a) becomes useless and can be removed as well, and we obtain the following reduced formula.

¬p(X, a).   p(c, a).   p(b, a).

Both connections are pointwise isolated and unifiable in their second argument. Both connections define proofs with answer substitutions {X ↦ c} and {X ↦ b}. One should observe that this formula corresponds precisely to the successful branches of the search space shown in Figure 1. The following proposition is an immediate consequence of the definition of PICs.

Proposition 1
1. A connection is isolated iff it is isolated in each point.
2. Let F' be obtained from a formula F by evaluating PICs. F is unsatisfiable iff F' is unsatisfiable.

PICs were introduced as an extension of isolated connections [Bibel, 1988]. But they can also be viewed as a special case of the v-rule defined in [Munch, 1988] for the connection graph proof procedure. Whereas the application of the (more complicated) v-rule is expensive to control, the evaluation of PICs is quite efficient.

Reflexive Reasoning is Reasoning by Reductions

The goal of this section is to elucidate the relation between SAM and the connection method. Let KB be a knowledge base, G a goal, F the formula KB ∧ ¬G, and F' be obtained by reducing F as far as possible. We assume that KB satisfies the conditions 1-4 defined in the second section. As conditions 5 and 6 must be tested dynamically by a meta-level controller, we assume that they are satisfied.
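The pointwise-isolation test can be phrased directly over a list of connections. A sketch checking the example connection (p(a, Y), ¬p(X, a)) against the four cases of the definition (the encoding is ours: constants lowercase, variables uppercase):

```python
def ground(term):
    return term.islower()                    # constants are lowercase

def engagement(lit, conns):
    """Number of connections a literal occurrence takes part in."""
    return sum(lit in c for c in conns)

def isolated_in_point(conn, i, conns):
    """The four cases of the pointwise-isolation definition."""
    pos, neg = conn                          # pos = p(s..), neg = ¬p(t..)
    s, t = pos[1][i], neg[1][i]
    only_here = lambda lit: engagement(lit, conns) == 1
    return ((only_here(pos) and only_here(neg))
            or (ground(s) and ground(t))
            or (ground(s) and only_here(neg))
            or (ground(t) and only_here(pos)))

goal = ("p", ("X", "a"))                     # the negated query literal
heads = [("p", ("a", "Y")), ("p", ("b", "Z")), ("p", ("c", "a"))]
conns = [(h, goal) for h in heads]           # the goal is engaged 3 times

# (p(a, Y), ¬p(X, a)) is isolated in its second point (t2 = a is
# ground and p(a, Y) occurs in no other connection) but not in its
# first point, since ¬p(X, a) is engaged in two further connections.
```

Evaluating the isolated second point unifies Y with a, which is precisely the binding propagation SAM performs in its first step.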
By definition the search space explored by SAM is defined by the connections of the given formula. As the first and second step of SAM determine the success and failure branches of the search space, we have to show that the first and second step of SAM can be simulated by reductions in the connection method. In the first step the constants occurring in the initial goal are propagated through the search space. Recall that following [Shastri and Ajjanagadde, 1990] we have assumed that a rule is used only once. In the connection method this assumption translates into the condition that the conclusion of each rule is connected at most once. Hence, if a constant occurs at the i-th argument of a goal, then the connection between the goal and the conclusion of a rule is isolated in its i-th point. Recall further that facts are always ground as they are skolemized. Thus, if a constant occurs at the i-th argument of a goal, then the connection between the goal and a fact is isolated in its i-th point. Hence, the PICs can be evaluated, i.e. unified, as in SAM. One should observe that condition 2 guarantees that after the evaluation of the PICs between the goal and the conclusion of a rule each constant occurring in the conditions of the rule occurs also in the goal. Using these arguments it can be shown by induction on the depth of the search space that SAM's propagation of constants occurring in the initial query corresponds to evaluations of PICs. Thereafter, all non-unifiable connections are eliminated in the connection method. Finally, if a condition of a rule is not the root of a successful branch, then the rule becomes useless and is eliminated. This takes care of the second step of SAM. Altogether we obtain the following result.

Theorem 2 Let S be the search space after the second step of SAM in an attempt to show the unsatisfiability of F. The connections of F' correspond precisely to the successful branches of S.
Hence, after reducing F we are left with all the successful branches of the search space. To show that SAM is sound, we have to prove that each minimal spanning set of F' determines a proof, i.e. that for each minimal spanning set there is a substitution which simultaneously unifies each connection in the spanning set. By induction on the size of the minimal spanning set it can be shown that in the unification computation of the connected literals in a minimal spanning set each variable may be bound to at most one constant. By condition 1 no complex data structures can be built up via unification. Conditions 2, 3, and 5 ensure that there are no multiple occurrences of variables in the goal, the facts, the conclusion and the condition of a rule. This implies the following result.

Theorem 3 Each minimal spanning set of F' determines a proof.

The soundness of SAM is an immediate consequence of Theorems 2, 3 and Proposition 1. As the knowledge base in [Shastri and Ajjanagadde, 1993] is a logic program, we can define the usual model-theoretic, fixpoint, and operational semantics (based on SLD-resolution) as in [van Emden and Kowalski, 1976] or the S-semantics as in [Falaschi et al., 1989]. Since the connection method is equivalent to SLD-resolution for Horn formulas, SAM is sound with respect to these semantics. As already mentioned, SAM is incomplete as conditions 5 and 6 cannot be checked in advance. One could easily change the operational and fixpoint semantics such that these conditions are obeyed. Similarly, the model-theoretic semantics can be refined by expressing these conditions as higher-order axioms which have to be satisfied. This, however, is just an exercise in defining semantics.

Discussion

We have extended the reduction technique of evaluating isolated connections. The basic idea of considering PICs is that, whenever a binding for a variable can uniquely be determined, this binding should be applied and propagated.
Secondly, we have shown that reflexive reasoning as defined by Shastri and Ajjanagadde [1990; 1993] or [Lange and Dyer, 1989] is reasoning by reduction and, consequently, we have formally established the soundness of reflexive reasoning.

There exists already a connectionist model of the connection method for Horn formulas called CHCL [Hölldobler, 1990a; Hölldobler and Kurfeß, 1992]. In CHCL isolated connections are evaluated, and non-unifiable connections and useless clauses are removed, in parallel as soon as these reductions become applicable. CHCL can easily be extended to evaluate PICs as CHCL determines the property of being isolated with the help of Proposition 1(1) (although PICs were not introduced in [Hölldobler, 1990a; Hölldobler and Kurfeß, 1992]). Interestingly, CHCL solves some problems even faster than SAM does. For example, if the knowledge base consists of the rules

p1(X1) ∨ ¬p2(X1), ..., pn-1(Xn-1) ∨ ¬pn(Xn-1)

and the fact pn(a), then the query p1(a) is solved in essentially one step by CHCL as all connections are isolated and, hence, simultaneously evaluated. SAM needs essentially n steps as the constant a occurring in the goal has to be propagated through the search space. On the other hand, CHCL is less space efficient than SAM. CHCL does not require that formulas obey conditions 1-5 in the second section. Rather, formulas may be arbitrary Horn formulas. Hence, CHCL must solve arbitrary unification problems. This is done by a connectionist unification algorithm [Hölldobler, 1990c], which uses a quadratic number of units with respect to the size of the knowledge base. If the formulas and, consequently, unification were restricted as in SAM, then the design of CHCL could be changed such that it needs the same space and answers queries at least as fast as Shastri and Ajjanagadde's system.
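The structural claim behind the chain example, that every connection is isolated and can therefore be evaluated in a single simultaneous round, can be checked mechanically. Our sketch, with literal occurrences encoded as (predicate, argument) pairs:

```python
def chain_literals(n):
    """Positive and negative literal occurrences of the chain
    p1(X1) ∨ ¬p2(X1), ..., p_{n-1}(X_{n-1}) ∨ ¬p_n(X_{n-1}),
    the fact p_n(a), and the goal ¬p1(a)."""
    pos = [("p%d" % i, "X%d" % i) for i in range(1, n)] + [("p%d" % n, "a")]
    neg = [("p%d" % (i + 1), "X%d" % i) for i in range(1, n)] + [("p1", "a")]
    return pos, neg

def connections(pos, neg):
    """Pairs of a positive and a negative occurrence of one predicate."""
    return [(p, q) for p in pos for q in neg if p[0] == q[0]]

pos, neg = chain_literals(5)
cs = connections(pos, neg)
# Every predicate symbol has exactly one positive and one negative
# occurrence, so each of the n connections is isolated; all of them
# can be unified in one simultaneous step, independent of n.
assert len(cs) == 5
assert all(sum(lit in c for c in cs) == 1 for lit in pos + neg)
```

SAM, by contrast, must propagate the constant a stepwise along the chain, which is why its parallel time grows with the depth of the search space here.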
The class of problems considered by Shastri and Ajjanagadde [1990; 1993] does not seem to be the largest class of problems which is computable in space bounded by the size of the knowledge base and in parallel time bounded by the depth of the search space. The presented reduction techniques are not restricted to Horn formulas, but may be applied to general first-order formulas. The special unification problems solved by Shastri and Ajjanagadde [1990; 1993] are not the largest class of unification problems which can be parallelized in an optimal way. Whereas unification is inherently sequential [Dwork et al., 1984], matching is known to be efficiently parallelizable [Ramesh et al., 1989]. The results of this paper show that SAM computes by reductions. From Shastri and Ajjanagadde's work we learn that automated theorem provers which apply these reduction techniques in parallel are adequate in the sense that they solve simpler problems faster than more difficult ones. [Shastri and Ajjanagadde, 1993] also contains some predictions on the question whether common sense reasoning problems are expressible in Shastri and Ajjanagadde's logic. It remains to be seen whether these predictions hold. In fact, if the predictions hold, then the gap between the ability of humans to draw a variety of inferences as if it were a reflex and the results about the complexity of reasoning reported by researchers in artificial intelligence is not a paradox at all. If the problems which can be solved effortlessly, spontaneously, and with remarkable efficiency by humans can be expressed in Shastri and Ajjanagadde's logic, then these problems are just simpler than the problems investigated in the artificial intelligence community.

Acknowledgements

We would like to thank S. Brüning for useful comments on earlier drafts of this paper. The first author is supported by the Deutsche Forschungsgemeinschaft (DFG) within project MPS under grant no. HO 1294/3-1.

References

J. A. Barnden.
On short term information processing in connectionist theories. Cognition and Brain Theory, 7:25-59, 1984.
W. Bibel. Automated Theorem Proving. Vieweg Verlag, Braunschweig, second edition, 1987.
W. Bibel. Advanced topics in automated deduction. In Fundamentals of Artificial Intelligence II, p. 41-59. Springer, LNCS 345, 1988.
W. Bibel. Perspectives on automated deduction. In Automated Reasoning: Essays in Honor of Woody Bledsoe, p. 77-104. Kluwer Academic, Utrecht, 1991.
C. Dwork, P. C. Kanellakis, and J. C. Mitchell. On the sequential nature of unification. Journal of Logic Programming, 1:35-50, 1984.
M. Falaschi, G. Levi, M. Martelli, and C. Palamidessi. Declarative modelling of the operational behavior of logic languages. TCS, 69(3):289-318, 1989.
S. Hölldobler and F. Kurfeß. CHCL - A connectionist inference system. In Parallelization in Inference Systems, p. 318-342. Springer, LNAI 590, 1992.
S. Hölldobler. CHCL - A connectionist inference system for Horn logic based on the connection method and using limited resources. TR-90-042, ICSI, Berkeley, 1990a.
S. Hölldobler. On high-level inferencing and the variable binding problem in connectionist networks. In Proc. ÖGAI, p. 180-185. Springer IFB 152, 1990b.
S. Hölldobler. A structured connectionist unification algorithm. In Proc. AAAI, p. 587-593, 1990c.
S. Hölldobler. On the artificial intelligence paradox. Journal of Behavioral and Brain Sciences, 1993. (to appear).
T. E. Lange and M. G. Dyer. High-level inferencing in a connectionist network. Connection Science, 1:181-217, 1989.
H. J. Levesque and R. J. Brachman. A fundamental tradeoff in knowledge representation and reasoning. In Readings in Knowledge Representation, p. 41-70. Morgan Kaufmann, 1985.
H. J. Levesque. Logic and the complexity of reasoning. KRR-TR-89-2, Dept. of CS, Univ. of Toronto, Toronto, 1989.
J. W. Lloyd. Foundations of Logic Programming. Springer, 1984.
K. H. Munch. A new reduction rule for the connection graph proof procedure.
JAR, 4:425-444, 1988.
R. Ramesh, R. M. Verma, T. Krishnaprasad, and I. V. Ramakrishnan. Term matching on parallel computers. Journal of Logic Programming, 6:213-228, 1989.
J. A. Robinson. A machine-oriented logic based on the resolution principle. JACM, 12:23-41, 1965.
L. Shastri and V. Ajjanagadde. An optimally efficient limited inference system. In Proc. AAAI, p. 563-570, 1990.
L. Shastri and V. Ajjanagadde. From associations to systematic reasoning: A connectionist representation of rules, variables and dynamic bindings using temporal synchrony. Behavioural and Brain Sciences, 1993. (to appear).
L. Shastri. Connectionism and the computational effectiveness of reasoning. Theoretical Linguistics, 16(1):65-87, 1990.
M. H. van Emden and R. A. Kowalski. The semantics of predicate logic as a programming language. JACM, 23(4):733-742, 1976.
Learning from an Approximate Theory and Noisy Examples

Somkiat Tangkitvanich and Masamichi Shimura
Department of Computer Science, Tokyo Institute of Technology
2-12-1 Oh-Okayama, Meguro, Tokyo 152, JAPAN
sia@cs.titech.ac.jp

Abstract

This paper presents an approach to a new learning problem, the problem of learning from an approximate theory and a set of noisy examples. This problem requires a new learning approach since it cannot be satisfactorily solved by either inductive or analytic learning algorithms or their existing combinations. Our approach can be viewed as an extension of the minimum description length (MDL) principle, and is unique in that it is based on the encoding of the refinement required to transform the given theory into a better theory rather than on the encoding of the resultant theory as in traditional MDL. Experimental results show that, based on our approach, the theory learned from an approximate theory and a set of noisy examples is more accurate than either the approximate theory itself or a theory learned from the examples alone. This suggests that our approach can combine useful information from both the theory and the training set even though both of them are only partially correct.

Introduction

Previous machine learning approaches learn either empirically from noise-free [Mitchell, 1978] or noisy examples [Quinlan, 1983], or analytically from a correct theory and noise-free examples [Mitchell et al., 1986; DeJong and Mooney, 1986], or empirically and analytically from an approximate theory and noise-free examples [Richards, 1992; Pazzani et al., 1991; Cohen, 1990; Wogulis, 1991; Ginsberg, 1990; Cain, 1991; Bergadano and Giordana, 1988]. This paper discusses the problem of learning from an approximate theory and a set of noisy examples, a new learning problem which cannot be satisfactorily solved by the previous approaches. In this problem, it is harmful to place full confidence in either the given theory or the training set.
Thus, an analytic approach will not learn successfully since it places full confidence in the theory, which is only approximately correct. An inductive approach does not satisfactorily solve the problem since it cannot take advantage of the theory for biasing the learning. The approaches that combine analytic with inductive techniques, and modify the given theory to fit the examples, will not learn correctly since they place full confidence in the noisy training set. Consequently, we require a new learning approach that can balance the confidence in the theory against that in the examples. Our approach, presented in this paper, can take advantage of the given theory for biasing the learning and can tolerate noise in the training examples. In our approach, the given theory is refined as little as necessary while its ability to explain the examples is increased as much as possible. The amount of the refinement and the ability of the theory to explain the examples are judged by the encoding lengths required to describe the refinement and the examples with the help of the theory, respectively. By keeping a balance between the amount of refinement and the ability to explain the examples, our approach places full confidence in neither the given theory nor the examples. Although our approach can be applied to the learning of a theory represented in any language, we demonstrate its application to the learning of a relational theory. The prototype system, which we call LATEX (Learning from an Approximate Theory and noisy Examples), has been tested in the chess endgame domain in both knowledge-free and knowledge-intensive environments. The results show that a theory learned by LATEX from an approximate theory and a set of noisy examples is remarkably more accurate than the approximate theory itself, a theory learned by LATEX from the examples alone, or a theory learned by the FOIL learning system [Quinlan, 1990].
This suggests that our system can combine useful information from the theory and the training set even though both of them are only partially correct.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

Description of the Problem

Our learning problem can be defined as follows:

• Given:
  - prior knowledge in the form of an approximate theory,
  - a set of noisy positive and negative examples.
• Find:
  - a theory which is similar to the given theory but is expected to be more accurate in classifying the unseen examples.

The fundamental assumption of learning from an initial theory is that although the theory is flawed, it is still relatively "close" or "approximate" to the target theory [Mooney, To appear]. Intuitively, an approximate theory is supposed to facilitate the learning rather than hinder it. Such a theory can be obtained from a human expert, a prior learning [Muggleton et al., 1992], a textbook [Cohen, 1990] or other sources. It has been pointed out that the accuracy of a theory is not a good criterion to quantify its approximateness [Mooney, To appear; Pazzani and Kibler, 1992]. Unfortunately, there has been no satisfactory criterion yet. In the next section, we show that the notion of an approximate theory can be precisely defined based on the encoding length of the refinement.

Our Approach

We present a new learning approach based on encoding a refinement and examples. The approach can be viewed as an extension of the minimum description length (MDL) principle [Wallace and Boulton, 1968; Rissanen, 1978], a general principle applicable to any inductive learning task involving sufficient examples. According to the principle, the simplest theory that can explain the examples well is the best one. Simplicity of a theory is judged by its length under an encoding scheme chosen subjectively by the experimenter.
The ability of a theory to explain the given examples is judged by the length of the examples encoded with the help of the theory, with shorter length indicating more ability to explain. From the MDL perspective, learning occurs only when the sum of these encoding lengths is less than the explicit encoding length of the examples, that is, when there is a compression. General as it is, the MDL principle in its original form cannot take advantage of prior knowledge in the form of an initial theory. This is a weakness since the information in the theory may be essential for an accurate learning when sufficient examples are not available. Our approach extends the MDL principle so that it can take advantage of an initial theory.

Extending MDL to Learn from an Approximate Theory

In our approach, the theory that is most similar to the given theory and can explain the examples well is the best theory. Similarity between a theory T' and the given theory T is judged by the length under some encoding scheme to describe the refinement required to transform T into T'. The ability of a theory to explain the examples is judged by the length of the examples encoded with the help of the theory, with shorter length indicating more ability to explain. Quantitatively, the best theory is the theory with the minimum description length calculated from the sum of 1) the description length of the refinement, and 2) the description length of the examples encoded with the help of the refined theory. The refined theory in 2) can be obtained from the initial theory and the description of the refinement. When the bias for a similar theory is appropriate, as in the case of learning from an approximate theory, a learning algorithm based on our approach will achieve a higher accuracy than an algorithm that cannot learn from a theory.
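The two-part score just described, refinement bits plus example bits under the refined theory, can be sketched numerically. The per-clause probability model and all names below are our own illustrative assumptions, not the paper's exact code:

```python
import math

def description_length(refinement_bits, p_pos_per_clause, counts):
    """Two-part MDL score for comparing refined theories:
    bits to encode the refinement, plus bits to encode the positive
    examples using each covering clause's probability of covering a
    positive example (optimal code length -log2 p per example)."""
    example_bits = sum(n * -math.log2(p)
                       for p, n in zip(p_pos_per_clause, counts))
    return refinement_bits + example_bits

# A refinement costing 12 bits that leaves two clauses with
# positive-example probabilities 0.9 and 0.8 over 30 + 20 examples:
print(round(description_length(12.0, [0.9, 0.8], [30, 20]), 1))  # 23.0
```

Minimizing this sum trades off staying close to the initial theory (small refinement term) against fitting the examples (small example term), which is exactly the balance the text argues for.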
It should be noted that when there is no initial theory, the bias reduces to one that prefers the simplest theory that can explain the examples, and our approach degenerates to the MDL approach. The emphasis on encoding refinement is a unique feature of our approach. It has the following advantages for our learning problem.

• It can balance the confidence in the theory against that in the training set. Consequently, our approach can take advantage of the information in both the theory and the training set, while being able to avoid the pitfalls of placing full confidence in either of them. From a Bayesian perspective, our approach can be interpreted as assigning a prior probability to each theory in the hypothesis space, favoring the ones which are similar to the given theory, updating the probability by using the training examples, and selecting the theory that has a maximum posterior probability. However, in comparison with a Bayesian approach, an approach based on the MDL principle provides the user with the conceptually simpler problem of computing code lengths, rather than estimating probabilities [Quinlan and Rivest, 1989].

• It provides a precise way to define an approximate theory. Intuitively, an approximate theory is a theory that facilitates the learning rather than hinders it. Since learning is related to producing a compression in the encoding length, an approximate theory can be judged based on the help it provides in shortening the description length of the target theory. Given an approximate theory, the encoding of the target theory as a sequence of refinements of that theory should be shorter than a direct encoding of the target theory. This leads to the following definition.

Definition 1 (Approximate Theory) A theory T0 is an
approximate theory of Tt under an encoding scheme E iff lE(T0, Tt) < lE(φ, Tt), where lE(Ti, Tj) is the length required to encode the transformation from Ti into Tj, and φ is an empty theory.

How can such a theory facilitate learning? Within the PAC learnability framework [Valiant, 1984], the following theorem shows that it reduces the sample complexity of any algorithm that accepts an initial theory T0 and i) never examines a hypothesis h1 before another hypothesis h2 if lE(T0, h2) < lE(T0, h1), and ii) outputs a hypothesis consistent with the training examples. Such an algorithm reflects important elements of a number of existing algorithms (e.g., [Ginsberg, 1990; Cain, 1991]) that modify the initial theory to fit the examples.

Theorem 1 Let L be any algorithm that satisfies i) and ii). For a finite hypothesis space, L with an approximate theory T0 has less sample complexity than L with an empty theory φ.

The above theorem is applicable when the examples are noise-free. The proof of the theorem and an analogous theorem for learning from noisy examples is given in [Tangkitvanich and Shimura, 1993].

Learning a Relational Theory

In learning a relational theory, the examples and the theory are represented in the form of tuples and a set of function-free first-order Horn clauses, respectively.

Encoding Training Examples

From the MDL perspective, learning occurs when there is a compression in the encoding length of the examples. In learning a relational theory, although both the positive and negative examples are given, it is a common practice to learn a theory that characterizes only the positive examples [Quinlan, 1990; Muggleton and Feng, 1990]. In encoding terms, this suggests that it is appropriate to compress only the encoding length of the positive examples.¹ One way to produce a compression is to encode the examples with the help of an approximate theory.
From Shannon's information theory, the optimal code length for an object e that has a probability p_e is -log_2 p_e bits. Without the help of an initial theory, the optimal code length required to indicate that an example is positive is thus -log_2 p_0 bits, where p_0 is the probability that an example left in the training set is a positive example. In contrast, with the help of Cl, a clause in an approximate theory that covers the example, the optimal code length becomes -log_2 p_Cl bits, where p_Cl is the probability that an example covered by Cl is a positive example. Consequently, there is a compression produced by Cl in encoding a positive example if p_Cl > p_0, that is, when the positive examples are more concentrated in the clause than in the training set. The total compression obtained in encoding n positive examples covered by Cl is

Compress(Cl) = n × (log_2 p_Cl - log_2 p_0).   (1)

¹On the contrary, if a theory that characterizes both types of examples (e.g., a relational decision tree [Watanabe and Rendell, 1991]) is to be learned, it would be appropriate to compress the encoding length of both types.

The compression produced by a theory is the sum of the compressions produced by all the clauses in the theory. By using a more accurate theory to encode the examples, we can obtain further compressions. Such a theory can be obtained by means of refinement. However, the compression is not obtained without any cost since we have to encode the refinement as well.

Encoding Refinements

We assume that the head of each clause in the theory is identical. This limits the ability to learn an intermediate concept but simplifies the encoding of a refinement. With this assumption, it is possible to encode any refinement by using only two basic transformations: a literal addition and a literal deletion. Other forms of transformation can be encoded by using these.
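The per-clause compression measure from the "Encoding Training Examples" subsection can be sketched directly; the sign convention is chosen so that compression is positive exactly when p_Cl > p_0, as the text states, and all variable names are ours:

```python
import math

def compress(n_pos_covered, p0, p_cl):
    """Compression (in bits) produced by a clause Cl covering n positive
    examples: n * (log2 p_Cl - log2 p_0), i.e. the saving of encoding
    each positive at cost -log2 p_Cl instead of the baseline -log2 p_0."""
    return n_pos_covered * (math.log2(p_cl) - math.log2(p0))

print(round(compress(40, 0.5, 0.9), 2))  # 33.92  (purer clause compresses)
print(round(compress(40, 0.5, 0.4), 2))  # -12.88 (worse than baseline)
```

Summing this quantity over all clauses gives the theory's total compression, the figure that a candidate refinement must improve by more than its own encoding cost.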
For example, literal replacement can be encoded by using a combination of a literal addition and a literal deletion. Clause addition can be encoded by using a combination of literal additions. The head of a new clause need not be encoded since it is identical to that of an existing clause. Clause deletion can be encoded in a similar way by using a combination of literal deletions. Thus, the overall refinement can be encoded by the following self-delimiting code:

|n_1| refine_{1,1} | refine_{1,2} |...| refine_{1,n_1} | |n_2| refine_{2,1} | refine_{2,2} |...| refine_{2,n_2} | ... |n_k| refine_{k,1} | refine_{k,2} |...| refine_{k,n_k} |

In the above encoding scheme, n_i is the encoding of an integer indicating the number of refinements applied to clause i (with n_i = 0 indicating no refinement), and refine_{i,j} is the encoding of the j-th refinement to clause i. refine_{i,j} is composed of a one-bit flag indicating whether the refinement is a literal addition or a literal deletion, and the encoding of the content of the refinement. For a literal addition, the encoding of the content contains the information required to indicate whether the literal to be added is negated or not, and from which relation and variables the literal is constructed. For a literal deletion, the encoding of the content contains the information required to indicate which literal in the clause is to be deleted. Note that the encoding scheme is natural for our learning problem in that it requires a shorter encoding length for a refinement that has a smaller effect on the theory. For example, adding a literal requires a shorter length than adding a clause. We now quantify the relationship between the refinement and its effect on the compression in the encoding length. Let Cl_i be a clause in the initial theory, Length(refine_{i,j}) be the length required for refine_{i,j}, and Cl'_i be the refined clause obtained from Cl_i and refine_{i,j}.
The compression produced by the refinement can be estimated by

Compression(refine_{i,j}) = Compress(Cl'_i) - Compress(Cl_i) - Length(refine_{i,j}).   (2)

The Learning Algorithm

The algorithm of LATEX is very simple. In each iteration, the theory is refined by using all the possible applications of the following refinement operators: a clause-addition operator, a clause-deletion operator, a literal-addition operator and a literal-deletion operator. The literal-addition operator adds to an existing clause a literal in the background knowledge. The literal must contain at least one existing variable and must satisfy constraints to avoid problematic recursion. The clause-addition operator adds to the theory a new clause containing one literal. Among all the possible refinements, LATEX selects one that produces maximal positive compression. The system terminates when no refinement can produce a positive compression. The refined theory is then passed to a simple post-processing routine which removes clauses that cover more negative than positive examples. Admittedly, the current greedy search strategy and the simple refinement operators prevent LATEX from learning some theories, e.g., those containing literals that do not discriminate positive from negative examples. We are now incorporating a more complex search strategy and better operators to overcome this limitation.

Experimental Evaluations

LATEX has been tested on the king-rook-king board classification domain described in [Quinlan, 1990; Richards, 1992]. In this domain, there are two types of variable, representing row and column. For each type, there are three relations given as the background knowledge: eq(X, Y) indicating that X is the same as Y, adj(X, Y) indicating that X is adjacent to Y, and lessthan(X, Y) indicating that X is less than Y. In our experiments, the training sets are randomly generated and noise is randomly introduced into them.
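The greedy loop described under "The Learning Algorithm" can be sketched as follows. The callables `propose`, `score` (standing in for the compression estimate of Eq. (2)), and `apply_ref` are placeholders for the paper's operators and scoring, and the integer "theory" in the demo is purely illustrative:

```python
def latex_greedy(theory, propose, score, apply_ref):
    """Skeleton of LATEX's greedy search: among all candidate
    refinements, apply the one with maximal positive compression,
    and stop as soon as no refinement yields positive compression."""
    while True:
        cands = [(score(theory, r), r) for r in propose(theory)]
        gain, best = max(cands, default=(0.0, None))
        if best is None or gain <= 0.0:
            return theory           # no refinement compresses further
        theory = apply_ref(theory, best)

# Toy run: the "theory" is an integer, refinements add +1/-1,
# and the score rewards moving toward a hypothetical target of 10.
print(latex_greedy(
    0,
    propose=lambda t: [1, -1],
    score=lambda t, r: abs(10 - t) - abs(10 - (t + r)),
    apply_ref=lambda t, r: t + r,
))  # 10
```

The real system would follow this loop with the post-processing pass that drops clauses covering more negatives than positives.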
To introduce noise, we adopt the classification noise model used in [Angluin and Laird, 1988]. In this model, a noise rate of η implies that the class of each example is replaced by the opposite class with probability η. The test set is composed of 10,000 examples selected independently of the training set. The experiments are made on 5 initial theories which are the operationalized versions of those used by FORTE [Richards, 1992]. The theories are generated by corrupting the correct theory with six operators: clause-addition, clause-deletion, literal-addition, literal-deletion, literal-replacement and variable-replacement operators. Each corrupted theory is an approximate theory according to our definition and is on average 45.79% correct. The average number of clauses in an operationalized theory is 14.2, and the average number of literals in a clause is 2.8. Figure 1 compares the learning curves of LATEX with an initial theory (LATEX-Th), LATEX without an initial theory (LATEX-NoTh), and FOIL [Quinlan, 1990], for η = 10%. The curves demonstrate many interesting points. First, throughout the learning session, the theory learned by LATEX from an initial theory and the examples is significantly more accurate than that learned by LATEX from the examples alone. Further, although the training examples are considerably noisy, the theory learned from the initial theory and the examples is much more accurate than the initial theory itself. This means that LATEX can extract useful information from the noisy examples to improve the accuracy of the theory. In other words, the experiments show that by combining the information in the theory and the examples, LATEX achieves a higher accuracy than it could with either one alone. Both are beneficial to the system. This suggests a dual view of our approach: as a means of refining an initial theory using examples, or as a means of improving the learning from examples using an initial theory.
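The Angluin-Laird classification noise model used in the experiments above is easy to sketch; the (feature, label) pair representation is our own assumption, as the model only specifies the label flipping:

```python
import random

def add_classification_noise(examples, eta, rng=random.Random(0)):
    """Angluin-Laird classification noise: replace each example's
    class by the opposite class independently with probability eta."""
    return [(x, (not y) if rng.random() < eta else y)
            for x, y in examples]

clean = [(i, True) for i in range(1000)]
noisy = add_classification_noise(clean, eta=0.10)
flipped = sum(1 for _, y in noisy if not y)
print(0.05 < flipped / 1000 < 0.15)  # True: roughly 10% of labels flipped
```

Note that only the labels are perturbed; the feature parts of the examples are left untouched, which is what distinguishes this model from attribute noise.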
It is also interesting to compare the learning curve of LATEX with that of FOIL. Without an initial theory, LATEX degenerates to an ordinary inductive learning system based on the MDL principle. Throughout the training sessions, the theory learned by LATEX is significantly more accurate than that learned by FOIL. However, another experiment which is not reported here shows that there are no significant differences in the accuracies achieved by the two systems when there is no noise in the training set. Hence, the differences can be attributed to the differences in the noise-handling mechanisms of the two systems. Investigation reveals that, when the examples are noisy, the theories learned by FOIL contain more literals and require longer encoding lengths than those learned by LATEX. In other words, the theories learned by FOIL are much more complex.

[Figure 1: Learning curves of LATEX and FOIL (accuracy vs. number of training examples)]

Related Work

In this section, we discuss three related approaches for learning a relational theory from noisy examples. Other approaches (e.g., [Towell et al., 1990; Drastal et al., 1989; Ginsberg, 1990]) will be discussed in the full paper.

• FOIL. Unlike LATEX, FOIL [Quinlan, 1990] is a relational learning system that cannot take advantage of an initial theory. However, it is informative to compare the two systems from an inductive learning perspective. FOIL uses an information-based heuristic called Gain to select a literal and uses another information-based heuristic as its stopping criterion to handle noise. In contrast, LATEX uses a single compression-based criterion for both tasks. When used to select a literal, our criterion and Gain are similar in that they suggest selecting a literal that discriminates positive from negative examples.
However, when used to handle noise, our criterion and FOIL's stopping criterion have different effects. The experiments reveal that a theory learned by FOIL is much more complex than that learned by LATEX. This is because FOIL's stopping criterion allows the building of long clauses to cover a small number of examples. A former study [Dzeroski and Lavrac, 1991] also arrived at the same conclusion.

• FOCL. FOCL extends FOIL to learn from an initial theory in an interesting way. However, the original algorithm of FOCL [Pazzani and Kibler, 1992] is unsuitable for learning from noisy examples since it refines the given theory to be consistent with them. Two extensions of FOCL are designed to deal with noise: one with FOIL's stopping criterion, the other with a pruning technique [Brunk and Pazzani, 1991]. Currently, we are not aware of any experimental results of testing any FOCL algorithms in learning from an initial theory and noisy examples. However, it should be noted that while FOCL with the pruning technique requires another training set for pruning, and both extensions of FOCL use separate mechanisms for selecting literals and handling noise, LATEX requires a single training set and uses a single mechanism for both tasks.

• Muggleton et al.'s Learning System. Recently, Muggleton et al. [Muggleton et al., 1992] proposed an approach to learn a theory represented as a logic program from noisy examples. The system based on their approach receives as an input an overly general theory learned by the GOLEM system [Muggleton and Feng, 1990]. It then specializes the theory by using a technique called closed-world specialization [Bain and Muggleton, 1990]. If there are several possible specializations, the one that yields maximal compression is selected. Since the system attempts to minimize the encoding length, it can be viewed as incorporating the MDL principle. From the point of view of learning from an initial theory,
there is an important difference between the theory acceptable by their system and that acceptable by LATEX. While their system assumes an overly general theory produced by a prior learning of GOLEM, LATEX requires no such assumptions. LATEX can accept a theory that is overly general, overly specific, or both. The theory can be obtained from an expert, a textbook, a prior learning and other sources.

Conclusion

We presented an approach for learning from an approximate theory and noisy examples. The approach is based on minimizing the encoding lengths of the refinement and the examples, and can be viewed as an extension of the MDL principle that can take advantage of an initial theory. We also demonstrated the applicability of our approach in learning a relational theory, and showed that the system based on our approach can tolerate noise in both the knowledge-free and knowledge-intensive environments. In the knowledge-free environment, our system compares favorably with FOIL. In the knowledge-intensive environment, it combines useful information from both the theory and the training set and achieves a higher accuracy than it could with either one alone. Consequently, our approach can be viewed either as a means to improve the accuracy of an initial theory using training examples, or as a means to improve the learning from examples using an initial theory. Directions for future work include experimenting with other models of noise in the examples and comparing an approximate theory according to our formalization with a theory obtained from a knowledge source in a real-world domain.

Acknowledgments

We would like to thank Hussein Almuallim for his insightful comments. We are also indebted to Boonserm Kijsirikul for his excellent help in implementing LATEX, and to Tsuyoshi Murata and Craig Hicks for their help in preparing the paper.

References

Angluin, D. and Laird, P. 1988. Learning from noisy examples.
Machine Learning 2:343-370.
Bain, M. and Muggleton, S. 1990. Non-monotonic learning. In Machine Intelligence 12. Oxford University Press.
Bergadano, F. and Giordana, A. 1988. A knowledge intensive approach to concept induction. In Proc. the Fifth International Conference on Machine Learning. Morgan Kaufmann. 305-317.
Brunk, C. and Pazzani, M. 1991. An investigation of noise-tolerant relational concept learning algorithms. In Proc. the Eighth International Workshop on Machine Learning. Morgan Kaufmann. 389-393.
Cain, T. 1991. The DUCTOR: A theory revision system for propositional domains. In Proc. the Eighth International Workshop on Machine Learning. Morgan Kaufmann. 485-489.
Cohen, W. 1990. Learning from textbook knowledge: A case study. In Proc. the Eighth National Conference on Artificial Intelligence. AAAI Press/MIT Press. 743-748.
DeJong, G. and Mooney, R. 1986. Explanation-based learning: An alternative view. Machine Learning 1(2):145-176.
Drastal, G.; Czako, G.; and Raatz, S. 1989. Induction in an abstract space: A form of constructive induction. In IJCAI 89. Morgan Kaufmann. 708-712.
Dzeroski, S. and Lavrac, N. 1991. Learning relations from noisy examples: an empirical comparison of LINUS and FOIL. In Proc. the Eighth International Workshop on Machine Learning. Morgan Kaufmann. 399-402.
Ginsberg, A. 1990. Theory reduction, theory revision, and retranslation. In AAAI 90. Morgan Kaufmann. 777-782.
Mitchell, T.M.; Keller, R.M.; and Kedar-Cabelli, S.T. 1986. Explanation-based learning: A unifying view. Machine Learning 1(1):47-80.
Mitchell, T. M. 1978. Version spaces: An approach to concept learning. Technical Report HPP-79-2, Stanford University, Palo Alto, CA.
Mooney, R. To appear. A preliminary PAC analysis of theory revision. In Petsche, T.; Judd, S.; and Hanson, S., editors, Computational Learning Theory and Natural Learning Systems, volume 3. MIT Press.
Muggleton, S. and Feng, C. 1990.
Efficient induction of logic programs. In Proc. the First Conference on Algorithmic Learning Theory. OHMSHA. 368-381.
Muggleton, S.; Srinivasan, A.; and Bain, M. 1992. Compression, significance and accuracy. In Proc. the Ninth International Conference on Machine Learning. Morgan Kaufmann. 339-347.
Pazzani, M. and Kibler, D. 1992. The utility of knowledge in inductive learning. Machine Learning 9:57-94.
Pazzani, M.; Brunk, C.; and Silverstein, G. 1991. A knowledge-intensive approach to learning relational concepts. In Proc. the Eighth International Workshop on Machine Learning. Morgan Kaufmann. 432-436.
Quinlan, J.R. and Rivest, R.L. 1989. Inferring decision trees using the minimum description length principle. Information and Computation 80:227-248.
Quinlan, J.R. 1983. Learning from noisy data. In Proc. the 1983 International Machine Learning Workshop. 58-64.
Quinlan, J.R. 1990. Learning logical definitions from relations. Machine Learning 5:239-266.
Richards, B. 1992. An Operator-based Approach to First-order Theory Revision. Ph.D. Dissertation, The University of Texas at Austin. AI92-181.
Rissanen, J. 1978. Modeling by shortest data description. Automatica 14:465-471.
Tangkitvanich, S. and Shimura, M. 1993. Theory refinement based on the minimum description length principle. Technical Report 93TR-001, Department of Computer Science, Tokyo Institute of Technology.
Towell, G.G.; Shavlik, J.W.; and Noordewier, M.O. 1990. Refinement of approximate domain theories by knowledge-based neural networks. In AAAI-90. AAAI Press/MIT Press. 861-866.
Valiant, L. 1984. A theory of the learnable. CACM 27:1134-1142.
Wallace, C. and Boulton, D. 1968. An information measure for classification. Computer J. 11:185-194.
Watanabe, L. and Rendell, L. 1991. Learning structural decision trees from examples. In IJCAI-91. Morgan Kaufmann. 770-776.
Wogulis, J. 1991. Revising relational theories. In Proc.
the Eighth International Workshop on Machine Learning. Morgan Kaufmann. 462-466.

Novel Methods in Knowledge Acquisition
Scientific Model-Building as Search in Matrix Spaces

Raúl E. Valdés-Pérez (School of Computer Science and Center for Light Microscope Imaging and Biotechnology, Carnegie Mellon University), Jan M. Żytkow (Department of Computer Science, Wichita State University), and Herbert A. Simon (Department of Psychology, Carnegie Mellon University)

Abstract

Many reported discovery systems build discrete models of hidden structure, properties, or processes in the diverse fields of biology, chemistry, and physics. We show that the search spaces underlying many well-known systems are remarkably similar when re-interpreted as search in matrix spaces. A small number of matrix types are used to represent the input data and output models. Most of the constraints can be represented as matrix constraints; most notably, conservation laws and their analogues can be represented as matrix equations. Typically, one or more matrix dimensions grow as these systems consider more complex models after simpler models fail, and we introduce a notation to express this. The novel framework of matrix-space search serves to unify previous systems and suggests how at least two of them can be integrated. Our analysis constitutes an advance toward a generalized account of model-building in science.

Introduction

The discovery of models of atomic and molecular structure, of chemical processes, and of genetic transmission are celebrated events in the history of science. Far from being isolated historical instances, discovery of hidden structure in the form of discrete models is a universal and current task across the natural sciences. Several discovery systems reported in the AI literature discover models of discrete, hidden structure.
These systems include DALTON [Langley et al., 1987], GELL-MANN [Fischer and Zytkow, 1990], MECHEM [Valdes, 1992; 1993 (in press)], MENDEL [Fischer and Zytkow, 1992], BR-3/PAULI [Kocabas, 1991; Valdes, accepted], STAHL [Zytkow and Simon, 1986] and its descendants STAHLp [Rose and Langley, 1986] and REVOLVER [Rose, 1989]. Of these, DALTON, MECHEM, and STAHL perform in chemistry, GELL-MANN and BR-3/PAULI in physics, and MENDEL in biology.

Despite the diversity of scientific domains that these systems treat, there lurk striking similarities in the representation of models, problem-solving methods, and domain knowledge used in model construction. Some of these similarities were pointed out elsewhere [Fischer and Zytkow, 1992]. These similarities may eventually allow us to develop a unified discovery system able to search for many types of discrete models. As a prerequisite, we should study existing systems that have already demonstrated a degree of competence on historical or current science. An important theoretical task of comparative analysis, which is relatively scarce in the AI literature, is to identify a unitary core among these systems. Without this, progress is limited to the accumulation of special-purpose programs.

The purpose of this paper is to identify a common representation of discrete models and a systematic analysis of the search spaces and domain constraints using the language of matrices. Our analysis introduces a small set of matrix types that represent the input data, the output models, and the spaces to be searched by the discovery system. Models are proposed by assigning numeric values to entries in a matrix, most assignments being ruled out by the domain constraints. The matrix representation enables the use of powerful methods of matrix algebra and combinatorial algorithms to improve the search for discrete models.
We also introduce a new notation to express how discovery systems carry out the search for models by postulating new entities, processes, and properties. This notation is used later to show how two specific systems that were developed separately can be integrated.

Systems

In this section we will re-interpret six discovery systems and show that they have a surprising degree of similarity. Three types of matrices used in these systems will be highlighted: a reaction matrix R, a structure matrix S, and a property matrix P, defining them in the context of each system. We use the language of matrices and matrix algebra to describe the constraints in these systems. We also show how each system systematically changes the sizes of some few matrices in the course of performing its task.

From: AAAI-93 Proceedings. Copyright © 1993, AAAI (www.aaai.org). All rights reserved.

        hydrogen  nitrogen  oxygen  ammonia  water
  R1      [<0]       0       [<0]      0      [>0]
  R2      [<0]      [<0]       0      [>0]      0

  R1 = react(hydrogen, oxygen) → water
  R2 = react(hydrogen, nitrogen) → ammonia

Figure 1: Reaction Matrix in DALTON

The emphasis in this paper is on the spaces searched by the systems, and not on the detailed ways each system carries out its search, which varies across systems and sometimes even within systems, since several of the programs possess more than one internal search space. One view of problem-solving in science is that it typically proceeds over several spaces which can be quite heterogeneous. Initially proposed by Lea and Simon [Lea and Simon, 1974], this idea has been expanded and applied in the discovery system FAHRENHEIT [Zytkow, 1987], while Klahr and Dunbar [Klahr and Dunbar, 1988] have investigated it as a psychological model.

Some comments on notation follow. Matrices will be represented as tables with two intersecting perpendicular line segments, one to mark the rows, the other the columns.
Additional marks are used to show whether a matrix dimension grows, shrinks, or is static: an outgoing (ingoing) arrow means that the dimension grows (shrinks), and a cap means that it is static during problem solving. We will see that most of the systems progressively enlarge their matrix models when smaller models prove inadequate.

DALTON

DALTON's task is to find structural models of chemical reactions and substances in terms of atoms [Langley et al., 1987]. For example, given the following data:

1. two volumes of hydrogen and one volume of oxygen react to form two volumes of water;
2. three volumes of hydrogen and one volume of nitrogen react to form two volumes of ammonia;
3. hydrogen, oxygen, and nitrogen are elementary substances;
4. water consists of hydrogen and oxygen, and ammonia consists of hydrogen and nitrogen;

DALTON uses its bias for simplicity, conservation laws, and the Gay-Lussac law to report correctly that (1) two hydrogen molecules react with one of oxygen to form two water molecules, while three hydrogen molecules combine with one nitrogen molecule to form two ammonia molecules, and that (2) hydrogen, oxygen, and nitrogen are diatomic, water is H2O, and ammonia is NH3.

In making these inferences, DALTON can be interpreted as filling in two matrices. The first matrix describes inputs and outputs for each reaction; the example discussed in [Langley et al., 1987] has the initial form shown in Figure 1. The bracketed constraints represent conventional matrix depictions of reactions [Aris and Mah, 1963]: the reactants have negative entries, and the products have positive entries. All non-participating substances have zero entries. In this paper, we will always denote such a reaction matrix by R_{r×s}, where r is the number of reactions, and s is the number of chemical substances.

                N      O      H
  hydrogen      0      0     [>0]
  nitrogen    [>0]     0      0
  oxygen        0    [>0]     0
  ammonia     [>0]     0    [>0]
  water         0    [>0]   [>0]

Figure 2: Structure Matrix in DALTON
A second, structure matrix in DALTON (Figure 2) represents the structure of the chemical substances in terms of atomic elements. Initially, some of the entries are zero to indicate that certain substances do not contain certain atoms. The remaining entries in the matrix are constrained to positive integers. We denote this structure matrix as S_{s×e}, where s as before is the number of substances, and e is the number of chemical elements involved.

The sizes of the R and S matrices are fixed, as indicated in the figures by a "double cap" notation that prevents the matrix from changing size. DALTON does not conjecture new reactions, substances, nor chemical elements, so it never enlarges the two matrices which it receives as input. DALTON's task is to fill in the reaction matrix and the structure matrix completely with integer entries, subject to the constraints stated above, a criterion of simplicity of entries, and a conservation law on atoms, which is expressed in matrix algebra as follows:

  R_{r×s} × S_{s×e} = 0_{r×e}    (1)

This equation implies that each of r reactions must conserve the atoms of all e elements: the product R × S gives the zero matrix 0 of dimensions r × e. Conservation means that the net production of atoms of each element is zero for each reaction. Simplicity has a role in choosing the magnitudes of the entries (integers of smaller magnitude are simpler). Equation 1 is the standard way to express linear conservation conditions in sciences such as chemistry.

In our example, DALTON outputs the two matrices in Figure 3 (the output matrix R is shown transposed to fit on the page). The matrix R quantifies the qualitative reaction matrix input to DALTON, e.g., three hydrogen molecules enter into the ammonia reaction. The output matrix S specifies the elementary constituents of each substance. For example, a value of 2 for the matrix entry (hydrogen, H) in S means that hydrogen molecules include two atoms of hydrogen.
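The conservation check in Equation 1 can be run mechanically. A minimal sketch (ours, not DALTON's implementation), using the output values DALTON reports for this example:

```python
# Verify DALTON's conservation law: R (r x s, reactions over substances)
# times S (s x e, substances over elements) must be the r x e zero matrix.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

# Columns: hydrogen, nitrogen, oxygen, ammonia, water
R = [[-2,  0, -1, 0, 2],   # water reaction:   2 H2 + O2 -> 2 H2O
     [-3, -1,  0, 2, 0]]   # ammonia reaction: 3 H2 + N2 -> 2 NH3

# Columns: N, O, H
S = [[0, 0, 2],   # hydrogen (diatomic)
     [2, 0, 0],   # nitrogen
     [0, 2, 0],   # oxygen
     [1, 0, 3],   # ammonia
     [0, 1, 2]]   # water

assert matmul(R, S) == [[0, 0, 0], [0, 0, 0]]  # atoms of N, O, H conserved
```

Any candidate assignment of integer entries that leaves a nonzero entry in the product would be rejected by the conservation constraint.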
Since all other entries in the hydrogen row are zero, the hydrogen molecule is diatomic, i.e., has structure H2.

  R^T        water reaction   ammonia reaction
  hydrogen        -2                -3
  nitrogen         0                -1
  oxygen          -1                 0
  ammonia          0                 2
  water            2                 0

  S           N    O    H
  hydrogen    0    0    2
  nitrogen    2    0    0
  oxygen      0    2    0
  ammonia     1    0    3
  water       0    1    2

Figure 3: Output of DALTON

GELL-MANN

GELL-MANN's task is to propose quark models that account for the known property values of the particles in elementary-particle families [Fischer and Zytkow, 1990]. The models constructed by GELL-MANN are filled-in pairs of matrices shown in Figure 4.

  S: rows particle_1 ... particle_s; columns quark_1 ... quark_e
  P: rows quark_1 ... quark_e; columns property_1 ... property_p

Figure 4: Matrix Structure of GELL-MANN

The structural S matrix is analogous to the S matrix in DALTON: s particles (or "substances") will contain e quarks (or "elements"). The second matrix in GELL-MANN is a property matrix P which assigns values of p properties to e quarks. The domain constraints on the S matrix are:

- The matrix entries are non-negative integers.
- The sum of entries over each row equals k, which is the number of quarks contained in each particle.
- The number of k-combinations of the set of e quarks (with infinite repetition number), which by a theorem in elementary combinatorics [Brualdi, 1981] equals C(e - 1 + k, k), satisfies s ≤ C(e - 1 + k, k) ≤ 3s, where s is the number of input particles.

Contrary to DALTON, GELL-MANN enlarges the number of columns in the first matrix (and perforce the number of rows in the second) if it cannot find an acceptable model for the current number of quarks. Adding another column corresponds to postulating one more quark. During its search for an acceptable model, GELL-MANN also increments the value of k, the number of quarks per particle.

  R: rows reaction_1 ... reaction_r; columns substance_1 ... substance_s (both dimensions can grow)
  S: rows substance_1 ... substance_s; columns element_1 ... element_e

Figure 5: Matrix Structure of MECHEM
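The count C(e - 1 + k, k) is just the number of multisets of size k drawn from e quark types, so the candidate rows of S can be enumerated directly. A quick sketch (illustrative names, not GELL-MANN's code):

```python
# Enumerate candidate quark combinations: k quarks drawn with repetition
# from e quark types yields C(e - 1 + k, k) candidate rows for S.
from itertools import combinations_with_replacement
from math import comb

quarks = ["u", "d", "s"]   # e = 3 quark types (hypothetical labels)
k = 3                      # quarks per particle
candidates = list(combinations_with_replacement(quarks, k))

e = len(quarks)
assert len(candidates) == comb(e - 1 + k, k)   # C(5, 3) = 10 candidates
```

Combinations left over after modeling the s known particles are exactly the rows GELL-MANN uses to predict unseen particles.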
Each k leads to C(e - 1 + k, k) possible quark combinations, each represented by one row in the expanded S matrix. The number of input particles is constant, and equals s.

The quark models proposed by GELL-MANN must also be consistent with the observed property values of the particles. For example, since a proton has a charge of 1, the sum of charges for quarks which constitute the proton must be also 1. This constraint is called an "additivity law" in [Fischer and Zytkow, 1990], and is analogous to laws of conservation. Whereas conservation in DALTON (and generally) is expressed by a constraint of the form R × S = 0, additivity in GELL-MANN is expressed as S_{s×e} × P_{e×p} = P′_{s×p}. The matrix P′ contains property values of particles, which are constants given as input to the system. Matrices P and P′ both contain property values: the first for hidden objects postulated in the model, the second for observable objects given in the input.

Those rows in GELL-MANN's S matrix corresponding to particles input to the program are tested using the additivity law. However, GELL-MANN also predicts unseen particles by taking advantage of those quark combinations (numbering C(e - 1 + k, k) - s) that were not used to model the known particles. In these cases, the properties of these new particles are predicted by simply pre-multiplying the matrix P by these C(e - 1 + k, k) - s rows.

MECHEM

MECHEM's task is to discover the simplest pathway able to explain all the experimental evidence about an aggregate chemical reaction [Valdes, 1992; 1993 (in press)]. MECHEM searches the space of two matrices shown in Figure 5. Some constraints on the R matrix are:

1. Matrix entries admit only integer values.
2. For each row, the sum of the negative integers is -1 or -2, and the sum of the positive integers is 1 or 2
[Each reaction has at most two reactants and two products].
3. Each column contains at least one nonzero entry [All substances must occur somewhere in the reaction].
4. For each column corresponding to observed or conjectured products, the top-most nonzero entry is positive [Each product must be formed before it can be consumed].

The fourth constraint is used to define a canonical order on reactions in the service of search efficiency [Valdes, 1991]; it is not derived from chemical theory. New rows and columns can be added in the R matrix as the program fails with simpler hypotheses, so we see that MECHEM has two dimensions of expansion that guide its search in this matrix space. MECHEM prefers adding new reactions (by incrementing r) over incrementing the number of conjectured substances, so usually the matrix is growing vertically. Three systems considered in this paper (MECHEM, MENDEL, and GELL-MANN) enlarge matrices along two dimensions.

In the S matrix, the molecular formulas for the starting materials and observed products are known; the program determines the formulas, or matrix entries, for the conjectured substances. This task is common to all systems which fill in entries in the S matrix. As in DALTON, the conservation conditions can be expressed as the equation R_{r×s} × S_{s×e} = 0_{r×e}, in which S is a structure matrix that contains the molecular formula (involving e chemical elements) of each substance, and 0_{r×e} is the zero matrix.

In addition to conservation of the elements, MECHEM incorporates other chemical constraints that arise from properties of substances, such as free energy or oxidation number. These constraints can be interpreted as an equation R_{r×s} × P_{s×p} = Z_{r×p}, in which the constraints on the entries of the p columns of Z vary according to the property. For example, in certain oxidation reactions, the oxidation number should never decrease across a reaction, so all the entries under the oxidation-number column of Z would be non-negative. The above are not the only search spaces in MECHEM.
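The four R-matrix constraints above are easy to state as executable tests. A sketch under our own encoding (not MECHEM's implementation), with substances as columns and the product columns flagged by index:

```python
# Check a candidate reaction matrix R against constraints 2-4 above
# (constraint 1, integer entries, is implicit in the representation).

def valid_pathway(R, product_cols):
    for row in R:
        neg = sum(x for x in row if x < 0)
        pos = sum(x for x in row if x > 0)
        if neg not in (-1, -2) or pos not in (1, 2):
            return False      # at most two reactants and two products
    for j in range(len(R[0])):
        col = [row[j] for row in R]
        if all(x == 0 for x in col):
            return False      # every substance must occur somewhere
        if j in product_cols:
            first = next(x for x in col if x != 0)
            if first < 0:
                return False  # a product must be formed before consumed
    return True

# Columns: A, B, C, D; the products are B, C, D (hypothetical substances).
ok  = [[-1,  1,  1, 0],   # A -> B + C
       [ 0, -1, -1, 1]]   # B + C -> D
bad = [[-1, -1,  0, 1],   # A + B -> D, but B was never formed first
       [-1,  1,  0, 0]]   # A -> B
assert valid_pathway(ok, {1, 2, 3})
assert not valid_pathway(bad, {1, 2, 3})
```

The canonical-order test (constraint 4) is what lets the search prune permutations of the same pathway.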
For example, to perform its task at a modern level of competence, molecular structures must be inferred for the conjectured substances, not only formulas. The space of molecular structures can also be represented as a matrix, similar to the search space in DENDRAL [Lindsay et al., 1980].

MENDEL

MENDEL's task is to devise genetic explanations for observed inheritance patterns (or "reactions") among phenotypes [Fischer and Zytkow, 1992]. Each phenotype is explained by one or more genotypes. MENDEL searches the pair of matrices R and S in Figure 6, in analogy to the matrix R in DALTON, MECHEM, and STAHL, and in analogy to S in DALTON, GELL-MANN, and MECHEM.

  S: rows genotype_1 ... genotype_s; columns gene_1 ... gene_e

Figure 6: Matrix Structure of MENDEL

The domain constraints on S are identical to GELL-MANN's:

- The matrix entries admit only non-negative integers.
- The sum of entries over each row equals k, which is the number of genes making up a genotype.
- The number s (defined as C(e - 1 + k, k)) of possible genotypes having k genes (analogous to the constraint in GELL-MANN) satisfies the constraint f ≤ C(e - 1 + k, k) ≤ 3f, where f is the fixed number of input phenotypes.

MENDEL enlarges the number of columns in S if it cannot find explanations of genetic reactions within a specific number of genes. Adding one more column to the matrix corresponds to postulating one more gene. MENDEL, like GELL-MANN, carries out a subordinate search by varying the values of the parameter k, which together with the number e of genes leads to postulating C(e - 1 + k, k) genotypes; these determine the number of rows in the S matrix and columns in the R matrix. MENDEL's search for gene combinations into genotypes is similar to GELL-MANN and DALTON, although several genotypes may be needed to explain one phenotype and several genotype reactions may be needed to explain one phenotype reaction.
The relative number of reactions between genotypes which look identical at the phenotype level is acceptable when it is approximately equal to the observed inheritance statistics that govern mating between phenotypes. Rather than using a predefined conservation principle, MENDEL searches for the right conservation/combination principle for genetic reactions, and finds out that one gene per parent is preserved in each offspring. Since the same genotype can occur on both sides of a genetic reaction, and the occurrences should not cancel out, the entries in the R matrix need to be pairs (n_r, n_p), where n_r is the number of reactants and n_p is the number of products of a particular genotype.

BR-3/PAULI

PAULI's goal [Valdes, accepted], like its predecessor BR-3 [Kocabas, 1991], is to postulate a small number of
The only constraint that applies directly to the P matrix is that the quantum properties of parti- cle/antiparticle pairs should be of equal magnitude and opposite sign. Further constraints on solutions involve both conservation and disconservation. Letting g and b denote the “good” and “bad” reactions respectively, the following matrix equation must be satisfied: [ gf,x:] x&p= [ 2;] The first matrix is input to the program, the sub- matrix 0, xp has only zero entries, and the sub-matrix 2axp enforces the disconservation: each row of 2 must contain a nonzero entry. Like GELL-MANN, BR-3 and PAUL1 could predict many unseen good and bad reac- tions by combining particles in various ways and test- ing whether conservation of all properties holds. STAHL Unlike other systems, the STAHL program of [Zytkow and Simon, 19861 discovers qualitative models rather than quantitative ones. Consequently, to describe STAHL’s search problem we use qualitative matrix en- tries rather than numbers. calcined magnesia 0 0 + + Figure 9: Structure Matrix of STAHL We use an example from page 128 of [Zytkow and Simon, 1986] for illustration. The input to the pro- gram consists of qualitative input/output facts about chemical reactions shown in Figure 8. A negative en- try ‘-’ corresponds to a reactant, a positive entry ‘+’ corresponds to a reaction product, while any non- participating substance receives a zero entry. To rep- resent reaction schemes in which the same substance occurs both as a reactant and a product, pairs of signs can be used, e.g., (-, +). STAHL’s task is to discover the elements and the make-up of substances in terms of these elements, i.e., an S matrix. In the above example, from the first re- action STAHL notices that lime consists of quick lime and fixed air, and then combining the first and the sec- ond, that magnesia alba consists of calcined magnesia and fixed air. In effect the S matrix in Figure 9 is cre- ated. 
If two rows in the S matrix have the same entries, STAHL concludes that two substances having different names are in fact identical. In such a case, one row in S (and the column in R) can be deleted to give a simpler model; STAHL is the only system in this paper that can be viewed to shrink matrices. The columns of S can be viewed either as growing and shrinking, or as only shrinking from a maximal possible set of elementary substances. The number of rows in R grows, since STAHL makes "new" reactions from arithmetic combinations of known ones.

STAHL's R and S matrices satisfy a qualitative conservation principle: each element which occurs in a reaction should appear both in its reactants and in its products. This can be expressed identically to DALTON and MECHEM as R_{r×s} × S_{s×e} = 0_{r×e}, where matrix multiplication uses qualitative arithmetic following expected rules, for instance pos × neg = neg, pos + pos = pos, pos × 0 = 0. The qualitative arithmetic is not associative (e.g., pos + pos + neg could equal pos or 0), but the order of production-rule firing determines how expressions are simplified. Contradictions can arise when the product R × S has nonzero entries. Such nonzero entries indicate reactions which according to current knowledge (and STAHL's qualitative arithmetic) are unbalanced.

Discussion

The six systems examined in this paper propose discrete underlying models of empirical phenomena across a variety of tasks and sciences. All of the systems find models of either the structure or properties of substances; this is the main task of DALTON, GELL-MANN, MENDEL, BR-3/PAULI, and STAHL. In addition, DALTON, MECHEM, and MENDEL find models of processes (reactions) in terms of hidden objects. DALTON takes a set of qualitative reactions and specifies them quantitatively, while MECHEM finds a simplest set of reactions (a pathway) from scratch. All of the systems fill in the entries of one or more matrices.
All except DALTON enlarge one or more matrix dimensions, and all including DALTON use constraints expressible as matrix equations of the form AB = C or weaker forms of conservation. The concept of simplicity has a strong presence, as reflected especially by growth in the matrices, which corresponds to entertaining more complex models.

Three matrix types are observed to recur. The most frequent is the reaction matrix R, which appears in all of the systems except GELL-MANN. Either the structure matrix S or the property matrix P appears in all of the systems; all except DALTON and STAHL postulate either new objects or new properties.

Other discovery systems

DENDRAL [Lindsay et al., 1980] and TETRAD-II [Spirtes et al., 1990] discover models of molecular and causal structure, respectively. These models are in the form of graphs, which as is well known can always be represented as adjacency matrices. However, these two systems use different matrix types and different constraints than the ones discussed here, so we have not included them in the analysis of this paper.

AM [Lenat, 1982], GRAFFITI [Fajtlowicz, 1988], and BACON [Langley et al., 1987] are other notable discovery systems that do not seem to fit the present framework. AM and GRAFFITI find plausible mathematical conjectures in elementary number theory and graph theory. BACON finds descriptive, empirical laws in data. These programs make inductive generalizations and introduce new theoretical terms, but do not build discrete models of hidden structure.

What is gained?

It is always possible to view one thing as another thing. A better understanding of a subject is often claimed as a virtue of a new viewpoint. However, since "understanding" is a slippery notion, it is more convincing if the new viewpoint enables new practical accomplishments or unifies seemingly unrelated phenomena.
This section discusses what is gained by the matrix representation of discrete models and the matrix-search viewpoint, and culminates by suggesting ways to integrate separately-developed discovery systems.

There are several gains from the interpretation and notation introduced by this paper. First, they provide a unifying framework that demonstrates a broad similarity of input/output representation, constraint representation, and elements of search. These similarities raise the question of whether a more general scheme could incorporate these systems as special cases. (The order of matrix multiplication in AB = C has no special significance, since the theorem (AB)^T = B^T A^T allows rewriting the former as B^T A^T = C^T.)

A second benefit from the matrix viewpoint is that several constraints can be expressed and satisfied using explicit algebraic techniques, such as Gaussian elimination or linear programming. MECHEM and PAULI do use matrix manipulation to satisfy some constraints. MECHEM converts pathways to matrices in order to solve for the unknown substances by imposing the conservation law of reaction balance. MECHEM also uses matrix algebra to test whether a pathway can explain observed concentrations data. Finally, MECHEM and PAULI both use the simplex algorithm of linear programming to implement some constraints, and the simplex algorithm uses the matrix representation explicitly in the form of tableaus.

A third benefit is that matrix-based heuristics can guide us to find and address other scientific problems that resemble the current ones. One should look for problems that:

- Progressively enlarge classes of objects, structural elements, processes, or properties. Mention of simplicity or Occam's Razor in this connection is a favorable sign.
- Involve integral numbers of combinations of things.
- Involve linear constraints, e.g., conservation or additivity laws.

Examples of possible matches are Feynman diagrams in particle physics (in which the simplest diagrams are called "leading-edge" diagrams), models of ions, and models of atomic nuclei. Finally, the next section uses the matrix-search viewpoint to demonstrate how an integration of GELL-MANN with BR-3/PAULI could be carried out.

Integrating systems

The concept of search in matrix spaces can be applied to show how the task of GELL-MANN can be integrated smoothly with the task of BR-3/PAULI. GELL-MANN's search fills out the two matrices S (rows: particles; columns: quarks) and P (rows: quarks; columns: properties), subject to the constraint S_{s×e} × P_{e×p} = P′_{s×p}. Given a reaction matrix R (rows: reactions; columns: particles), BR-3/PAULI's search fills out a property matrix P″ (rows: particles; columns: properties) subject to the constraint

  [ R_{g×s} ]                 [ 0_{g×p} ]
  [ R_{b×s} ]  ×  P″_{s×p}  = [ Z_{b×p} ]

A combined system that carries out the tasks of GELL-MANN and BR-3/PAULI simultaneously would fill out the R, S, and P matrices and would need to satisfy at least the constraint

  [ R_{g×s} ]                            [ 0_{g×p} ]
  [ R_{b×s} ]  ×  S_{s×e} × P_{e×p}  =  [ Z_{b×p} ]

In the integrated system, four distinct matrix dimensions can be enlarged: the two from GELL-MANN, one from BR-3/PAULI, but also a fourth for the reaction dimension, since many new unseen "good" and "bad" reactions can be postulated just as GELL-MANN could postulate unseen particles and their property values. The combined system could accept exactly the R input to BR-3/PAULI as before, and carry out a substantial theoretical effort by postulating quarks, properties, values, and unseen reactions all within a single system.

Conclusion

We have shown that the search carried out by a number of well-known systems that induce discrete models of hidden structure can be represented by sets of matrices on which constraints are placed. The typical dimensions of the matrices involve reactions (processes), substances (types of objects), and properties of substances.
For example, DALTON finds structural mod- els of chemical reactions and substances which can be described by a reaction matrix R,,, and a structure matrix Ssxe. RrX, describes reactions by the number of molecules of each substance in input and outnut. and Ssxe describes the composition of each substance in terms of numbers of atoms of elements. Conserva- tion of atoms is expressed by 72 x S = 0. The common matrix representation eases compar- ing these systems, reveals their underlying commonal- ities and sometimes shows how two systems (e.g., BR- 3/PAULI and GELL-MANN) can be integrated into a single one. It also suggests how search algorithms de- signed for one system could be applied to-others. Hypothesizing new reactions, substances, or proper- ties is accomplished by enlarging a matrix along one or - more dimensions. The sizes of the matrices provide an * (inverse) measure of a model’s simplicity, so that gen- erating small matrices first, then successively enlarging them as required, assures that simpler hypotheses are . - considered first, and that only as many hidden entities are introduced as are required to account for the data. Representing model-building as search over a small matrix set does much to reduce the apparent diversity among the various systems, and shows that a few prin- ciples are fundamental to the organization and func- tioning of most of them. Hence, the representation is a significant advance toward a general theory of dis- crete model-building in scientific discovery. Acknowledgments. RVP was supported partly by a W.M. Keck Foundation grant for advanced training in computational biology to the University of Pittsburgh, Carnegie Mellon, and the Pittsburgh Supercomputing Cen- ter. JMZ contributed to this paper while on sabbatical at Carnegie Mellon. eferences Aris, R. and Mah, R.H.S. 1963. Independence of chemical reactions. Ind. Eng. Chem. Fundam. 2:90-94. Brualdi, Richard A. 1981. introductory Combinatorics. North Holland, New York, NY. 
(Theorem on page 37).
Fajtlowicz, Siemion 1988. On conjectures of Graffiti. Discrete Mathematics 72:113-118.
Fischer, P. and Zytkow, Jan M. 1990. Discovering quarks and hidden structure. Proc. of 5th International Symposium on Methodologies for Intelligent Systems.
Fischer, P. and Zytkow, Jan M. 1992. Incremental generation and exploration of hidden structure. Proc. of the ICML-92 Workshop on Machine Discovery.
Klahr, D. and Dunbar, K. 1988. Dual space search during scientific reasoning. Cognitive Science 12:1-48.
Kocabas, Sakir 1991. Conflict resolution as discovery in particle physics. Machine Learning 6(3):277-309.
Langley, P.; Simon, H. A.; Bradshaw, G. L.; and Zytkow, J. M. 1987. Scientific Discovery: Computational Explorations of the Creative Processes. MIT Press.
Lea, Glenn and Simon, Herbert A. 1974. Problem solving and rule induction: A unified view. In Gregg, Lee W., editor 1974, Knowledge and Cognition.
Lenat, Douglas B. 1982. AM: Discovery in mathematics as heuristic search. In Davis, R. and Lenat, D. B., editors 1982, Knowledge-Based Systems in Artificial Intelligence.
Lindsay, R. K.; Buchanan, B. G.; Feigenbaum, E. A.; and Lederberg, J. 1980. Applications of Artificial Intelligence for Organic Chemistry: The Dendral Project.
Rose, D. and Langley, P. 1986. Chemical discovery as belief revision. Machine Learning 1(4):423-451.
Rose, Donald 1989. Using domain knowledge to aid scientific theory revision. In Proc. of the 6th International Workshop on Machine Learning.
Spirtes, P.; Glymour, C.; and Scheines, R. 1990. Causality from probability. In Tiles, J. E. et al., editors 1990, Evolving Knowledge in Natural Science and Artificial Intelligence. Pitman.
Valdes-Perez, Raul E. Conjecturing hidden entities via simplicity and conservation laws: machine discovery in chemistry. Artificial Intelligence. In press.
Valdes-Perez, Raul E. Discovery of conserved properties in particle physics: A comparison of two models. Machine Learning.
Accepted for publication.
Valdes-Perez, Raul E. 1991. A canonical representation of multistep reactions. Journal of Chemical Information and Computer Sciences 31(4):554-556.
Valdes-Perez, Raul E. 1992. Theory-driven discovery of reaction pathways in the MECHEM system. In Proc. of 10th National Conference on Artificial Intelligence. 63-69.
Zytkow, J. M. and Simon, Herbert A. 1986. A theory of historical discovery: the construction of componential models. Machine Learning 1(1):107-139.
Zytkow, J. M. 1987. Combining many searches in the FAHRENHEIT discovery system. In Proc. of the 4th International Workshop on Machine Learning. 281-287.

478 Valdes-Perez
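The conservation constraint R × S = 0 described in the conclusion can be made concrete with a small worked example. The following sketch is not DALTON's code; it simply checks the constraint for one hypothetical reaction (2 H2 + O2 → 2 H2O), with inputs entered as negative molecule counts and outputs as positive ones.

```python
# Illustrative sketch of the atom-conservation check R x S = 0
# (hypothetical example; not taken from DALTON itself).

def matmul(A, B):
    """Multiply matrix A (m x n) by matrix B (n x p), both lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# R (reactions x substances): signed molecule counts per reaction,
# inputs negative, outputs positive.  Columns: H2, O2, H2O.
R = [[-2, -1, 2]]          # 2 H2 + O2 -> 2 H2O

# S (substances x elements): atoms of each element per molecule.
# Columns: H, O.
S = [[2, 0],               # H2
     [0, 2],               # O2
     [2, 1]]               # H2O

# Conservation of atoms holds iff every entry of R x S is zero:
# H: -2*2 - 1*0 + 2*2 = 0,  O: -2*0 - 1*2 + 2*1 = 0.
print(matmul(R, S))        # [[0, 0]]
```

Enlarging R or S along a dimension, as the search procedure does when hypothesizing new reactions, substances, or elements, simply adds rows or columns while this same constraint continues to prune inconsistent models.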